US Alcohol-Related Deaths Double Over 2 Decades, With Notable Age and Gender Disparities

TOPLINE:

US alcohol-related mortality rates doubled from 10.7 to 21.6 per 100,000 between 1999 and 2020, with the largest relative increase (3.8-fold) in adults aged 25-34 years. Women experienced a 2.5-fold increase, and the Midwest showed a similar rise in mortality rates.

METHODOLOGY:

  • The analysis used the US Centers for Disease Control and Prevention's Wide-Ranging Online Data for Epidemiologic Research (CDC WONDER) database to examine alcohol-related mortality trends from 1999 to 2020.
  • Researchers analyzed data from a total US population of 180,408,769 people aged 25 to 85+ years in 1999 and 226,635,013 people in 2020.
  • International Classification of Diseases, Tenth Revision, codes were used to identify deaths with alcohol attribution, including mental and behavioral disorders, alcoholic organ damage, and alcohol-related poisoning.

TAKEAWAY:

  • Overall mortality rates increased from 10.7 (95% CI, 10.6-10.8) per 100,000 in 1999 to 21.6 (95% CI, 21.4-21.8) per 100,000 in 2020, a significant twofold increase (see the arithmetic sketch after this list).
  • Adults aged 55-64 years demonstrated both the steepest absolute increase and the highest absolute rates in both 1999 and 2020.
  • American Indian and Alaska Native individuals experienced the steepest increase and highest absolute rates among all racial groups.
  • The West region maintained the highest absolute rates in both 1999 and 2020, despite the Midwest showing the largest increase.
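
To make the headline numbers concrete, here is a minimal arithmetic sketch in Python. The death counts are back-calculated from the published rates and population denominators, so they are rough illustrative approximations, not figures reported by the study.

# Back-of-the-envelope check of the published rates (per 100,000).
# Implied death counts are approximations, not the study's own numbers.
pop_1999, pop_2020 = 180_408_769, 226_635_013
rate_1999, rate_2020 = 10.7, 21.6

deaths_1999 = rate_1999 / 100_000 * pop_1999   # ~19,300 implied deaths
deaths_2020 = rate_2020 / 100_000 * pop_2020   # ~48,950 implied deaths
fold_change = rate_2020 / rate_1999            # ~2.02, the reported twofold rise

print(f"{deaths_1999:,.0f} -> {deaths_2020:,.0f} deaths; {fold_change:.2f}-fold")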

IN PRACTICE:

“Individuals who consume large amounts of alcohol tend to have the highest risks of total mortality as well as deaths from cardiovascular disease. Cardiovascular disease deaths are predominantly due to myocardial infarction and stroke. To mitigate these risks, health providers may wish to implement screening for alcohol use in primary care and other healthcare settings. By providing brief interventions and referrals to treatment, healthcare providers would be able to achieve the early identification of individuals at risk of alcohol-related harm and offer them the support and resources they need to reduce their alcohol consumption,” wrote the authors of the study.

SOURCE:

The study was led by Alexandra Matarazzo, BS, Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton. It was published online in The American Journal of Medicine.

LIMITATIONS:

According to the authors, the cross-sectional nature of the data limits the study to descriptive analysis only, making it suitable for hypothesis generation but not hypothesis testing. While the use of complete population data supports validity and generalizability within the United States, potential bias and uncontrolled confounding may exist because the population mix differed between the two time points.

DISCLOSURES:

The authors reported no relevant conflicts of interest. One coauthor disclosed serving as an independent scientist in an advisory role to investigators and sponsors as Chair of Data Monitoring Committees for Amgen and UBC, to the Food and Drug Administration, and to UpToDate. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Deprescribe Low-Value Meds to Reduce Polypharmacy Harms

While polypharmacy is inevitable for patients with multiple chronic diseases, not all medications improve patient-oriented outcomes, members of the Patients, Experience, Evidence, Research (PEER) team, a group of Canadian primary care professionals who develop evidence-based guidelines, told attendees at the Family Medicine Forum (FMF) 2024.

In a thought-provoking presentation called “Axe the Rx: Deprescribing Chronic Medications with PEER,” the panelists gave examples of medications that may be safely stopped or tapered, particularly for older adults “whose pill bag is heavier than their lunch bag.”


Curbing Cardiovascular Drugs

The 2021 Canadian Cardiovascular Society Guidelines for the Management of Dyslipidemia for the Prevention of Cardiovascular Disease in Adults call for an LDL-C below 1.8 mmol/L in secondary cardiovascular prevention, potentially by adding therapies such as proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors, ezetimibe, or both if that target is not reached with the maximal dosage of a statin.

But family physicians do not need to follow this guidance for their patients who have had a myocardial infarction, said Ontario family physician Jennifer Young, MD, a physician advisor in the Canadian College of Family Physicians’ Knowledge Experts and Tools Program.

Treating to below 1.8 mmol/L “means lab testing for the patients,” Young told this news organization. “It means increasing doses [of a statin] to try and get to that level.” If the patient is already on the highest dose of a statin, it means adding other medications that lower cholesterol.
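
For readers who think in mg/dL, the 1.8 mmol/L target converts with the standard cholesterol conversion factor of 38.67 mg/dL per mmol/L; a minimal sketch:

# Convert cholesterol concentrations between mmol/L and mg/dL.
MGDL_PER_MMOLL = 38.67  # standard conversion factor for cholesterol

def mmoll_to_mgdl(mmol_l: float) -> float:
    return mmol_l * MGDL_PER_MMOLL

print(round(mmoll_to_mgdl(1.8)))  # 70 -> the <1.8 mmol/L target is roughly <70 mg/dL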

“If that was translating into better outcomes like [preventing] death and another heart attack, then all of that extra effort would be worth it,” said Young. “But we don’t have evidence that it actually does have a benefit for outcomes like death and repeated heart attacks,” compared with putting them on a high dose of a potent statin.


Tapering Opioids

Before placing patients on an opioid taper, clinicians should first assess them for opioid use disorder (OUD), said Jessica Kirkwood, MD, assistant professor of family medicine at the University of Alberta in Edmonton, Canada. She suggested using the Prescription Opioid Misuse Index questionnaire to do so.

Clinicians should be much more careful in initiating a taper with patients with OUD, said Kirkwood. They must ensure that these patients are motivated to discontinue their opioids. “We’re losing 21 Canadians a day to the opioid crisis. We all know that cutting someone off their opioids and potentially having them seek opioids elsewhere through illicit means can be fatal.”

In addition, clinicians should spend more time counseling patients with OUD than those without, Kirkwood continued. They must explain to these patients how they are being tapered (eg, the intervals and doses) and highlight the benefits of a taper, such as reduced constipation. Opioid agonist therapy (such as methadone or buprenorphine) can be considered in these patients.

Some research has pointed to the importance of patient motivation as a factor in the success of opioid tapers, noted Kirkwood.


Deprescribing Benzodiazepines 

Benzodiazepine receptor agonists, too, often can be deprescribed. These drugs should not be prescribed to promote sleep on a long-term basis. Yet clinicians commonly encounter patients who have been taking them for more than a year, said pharmacist Betsy Thomas, assistant adjunct professor of family medicine at the University of Alberta.

The medications “are usually fairly effective for the first couple of weeks to about a month, and then the benefits start to decrease, and we start to see more harms,” she said.

Some of the harms that have been associated with continued use of benzodiazepine receptor agonists include delayed reaction time and impaired cognition, which can affect the ability to drive, the risk for falls, and the risk for hip fractures, she noted. Some research suggests that these drugs are not an option for treating insomnia in patients aged 65 years or older.

Clinicians should encourage tapering the use of benzodiazepine receptor agonists to minimize dependence and transition patients to nonpharmacologic approaches such as cognitive behavioral therapy to manage insomnia, she said. A recent study demonstrated the efficacy of the intervention, and Thomas suggested that family physicians visit the mysleepwell.ca website for more information.

Young, Kirkwood, and Thomas reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

As Populations Age, Occam’s Razor Loses Its Diagnostic Edge

The principle of parsimony, often referred to as “Occam’s razor,” favors a unifying explanation over multiple ones, as long as both explain the data equally well. This heuristic, widely used in medical practice, advocates for simpler explanations rather than complex theories. However, its application in modern medicine has sparked debate.

“Hickam’s dictum,” a counterargument to Occam’s razor, asserts that patients — especially as populations grow older and more fragile — can simultaneously have multiple, unrelated diagnoses. These contrasting perspectives on clinical reasoning, balancing diagnostic simplicity and complexity, are both used in daily medical practice.

But are these two axioms truly in conflict, or is this a false dichotomy?


Occam’s Razor and Simple Diagnoses

Interpersonal variability in diagnostic approaches, shaped by the subjective nature of many judgments, complicates the formal evaluation of diagnostic parsimony (Occam’s razor). Indirect evidence suggests that prioritizing simplicity in diagnosis can result in under-detection of secondary conditions, particularly in patients with chronic illnesses.

For example, older patients with a known chronic illness were found to have a 30%-60% lower likelihood of being treated for an unrelated secondary diagnosis than matched peers without the chronic condition. Other studies indicate that a readily available, simple diagnosis can lead clinicians to prematurely close their diagnostic reasoning, overlooking other significant illnesses.


Beyond Hickam’s Dictum and Occam’s Razor

A recent study explored the phenomenon of multiple diagnoses by examining the supposed conflict between Hickam’s dictum and Occam’s razor, as well as the ambiguities in how they are interpreted and used by physicians in clinical reasoning.

Part 1: Researchers identified articles on PubMed related to Hickam’s dictum or conflicting with Occam’s razor, categorizing instances into four models of Hickam’s dictum:

1. Incidentaloma: An asymptomatic condition discovered accidentally.

2. Preexisting diagnosis: A known condition in the patient’s medical history.

3. Causally related disease: A complication, association, epiphenomenon, or underlying cause connected to the primary diagnosis.

4. Coincidental and independent disease: A symptomatic condition unrelated to the primary diagnosis.

Part 2: Researchers analyzed 220 case records from Massachusetts General Hospital, Boston, and clinical problem-solving reports published in The New England Journal of Medicine between 2017 and 2023. In every one of these cases, the final diagnosis was a unifying one.

Part 3: In an online survey of 265 physicians, 79% identified coincidental symptomatic conditions (category 4) as the least likely type of multiple diagnoses. Preexisting conditions (category 2) emerged as the most common, reflecting the tendency to add new diagnoses to a patient’s existing health profile. Almost one third of instances referencing Hickam’s dictum or violations of Occam’s razor fell into category 2.

Causally related diseases (category 3) were probabilistically dependent, meaning that the presence of one condition increased the likelihood of the other, based on the strength (often unknown) of the causal relationship.
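
To make "probabilistically dependent" concrete, here is a toy numeric sketch; the prevalences and the strength of the causal link are invented for illustration.

# Toy illustration of probabilistic dependence between two conditions.
# All probabilities below are invented for illustration.
p_b = 0.05          # baseline probability of condition B
p_b_given_a = 0.20  # probability of B when causally linked condition A is present

# Dependence means P(B | A) > P(B): finding A should raise suspicion for B,
# by a factor that reflects the (often unknown) strength of the causal link.
print(p_b_given_a / p_b)  # 4.0x higher likelihood of B when A is present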


Practical Insights

The significant finding of this work was that multiple diagnoses occur in predictable patterns, informed by causal connections between conditions, symptom onset timing, and likelihood. The principle of common causation supports the search for a unifying diagnosis for coincidental symptoms. It is not surprising that causally related phenomena often co-occur, as reflected by the fact that 40% of multiple diagnoses in the study’s first part were causally linked.

Thus, understanding multiple diagnoses goes beyond Hickam’s dictum and Occam’s razor. It requires not only identifying diseases but also examining their causal relationships and the timing of symptom onset. A unifying diagnosis is not equivalent to a single diagnosis; rather, it represents a causal pathway linking underlying pathologic changes to acute presentations.


This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Aliens, Ian McShane, and Heart Disease Risk

This transcript has been edited for clarity

I was really struggling to think of a good analogy to explain the glaring problem of polygenic risk scores (PRS) this week. But I think I have it now. Go with me on this.

An alien spaceship parks itself, Independence Day style, above a local office building. 

But unlike the aliens that gave such a hard time to Will Smith and Brent Spiner, these are benevolent, technologically superior guys. They shine a mysterious green light down on the building and then announce, maybe via telepathy, that 6% of the people in that building will have a heart attack in the next year.

They move on to the next building. “Five percent will have a heart attack in the next year.” And the next, 7%. And the next, 2%. 

Let’s assume the aliens are entirely accurate. What do you do with this information?

Most of us would suggest that you find out who was in the buildings with the higher percentages. You check their cholesterol levels, get them to exercise more, do some stress tests, and so on.

But that said, you’d still be spending a lot of money on a bunch of people who were not going to have heart attacks. So, a crack team of spies — in my mind, this is definitely led by a grizzled Ian McShane — infiltrates the alien ship, steals this predictive ray gun, and starts pointing it, not at buildings but at people.

In this scenario, one person could have a 10% chance of having a heart attack in the next year. Another person has a 50% chance. The aliens, seeing this, leave us one final message before flying into the great beyond: “No, you guys are doing it wrong.”

This week: the people and companies using an advanced predictive technology, PRS, the wrong way — and a study that shows just how problematic this is.

We all know that genes play a significant role in our health outcomes. Some diseases (Huntington disease, cystic fibrosis, sickle cell disease, hemochromatosis, and Duchenne muscular dystrophy, for example) are entirely driven by genetic mutations.

The vast majority of chronic diseases we face are not driven by genetics, but they may be enhanced by genetics. Coronary heart disease (CHD) is a prime example. There are clearly environmental risk factors, like smoking, that dramatically increase risk. But there are also genetic underpinnings; about half the risk for CHD comes from genetic variation, according to one study.

But in the case of those common diseases, it’s not one gene that leads to increased risk; it’s the aggregate effect of multiple risk genes, each contributing a small amount of risk to the final total. 

The promise of PRS was based on this fact. Take the genome of an individual, identify all the risk genes, and integrate them into some final number that represents your genetic risk of developing CHD.

The way you derive a PRS is to take a big group of people and sequence their genomes. Then, you see who develops the disease of interest — in this case, CHD. If the people who develop CHD are more likely to have a particular mutation, that mutation goes into the risk score. Risk scores can integrate tens, hundreds, even thousands of individual mutations to create that final score.
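
As a rough sketch of how such a score is applied to one genome, the toy code below computes a PRS as a weighted sum over risk variants. The variant IDs, weights, and genotypes are invented for illustration; real scores are also typically standardized against a reference population.

# Toy polygenic risk score: a weighted sum over risk variants.
# Variant IDs, weights, and genotypes are invented for illustration.
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# Genotype: number of risk alleles carried at each variant (0, 1, or 2).
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 1}

prs = sum(w * genotype.get(variant, 0) for variant, w in weights.items())
print(prs)  # 0.49 on this toy scale; meaningful only relative to a population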

There are literally dozens of PRS for CHD. And there are companies that will calculate yours right now for a reasonable fee.

The accuracy of these scores is assessed at the population level. It’s the alien ray gun thing. Researchers apply the PRS to a big group of people and say 20% of them should develop CHD. If indeed 20% develop CHD, they say the score is accurate. And that’s true.

But what happens next is the problem. Companies and even doctors have been marketing PRS to individuals. And honestly, it sounds amazing. “We’ll use sophisticated techniques to analyze your genetic code and integrate the information to give you your personal risk for CHD.” Or dementia. Or other diseases. A lot of people would want to know this information. 

It turns out, though, that this is where the system breaks down. And it is nicely illustrated by this study, appearing November 16 in JAMA.

The authors wanted to see how PRS, which are developed to predict disease in a group of people, work when applied to an individual.

They identified 48 previously published PRS for CHD. They applied those scores to more than 170,000 individuals across multiple genetic databases. And, by and large, the scores worked as advertised, at least across the entire group. The weighted accuracy of all 48 scores was around 78%. They aren’t perfect, of course. We wouldn’t expect them to be, since CHD is not entirely driven by genetics. But 78% accurate isn’t too bad.

But that accuracy is at the population level. At the level of the office building. At the individual level, it was a vastly different story.

This is best illustrated by this plot, which shows the score from 48 different PRS for CHD within the same person. A note here: It is arranged by the publication date of the risk score, but these were all assessed on a single blood sample at a single point in time in this study participant.

The individual scores are all over the map. Using one risk score gives an individual a risk that is near the 99th percentile — a ticking time bomb of CHD. Another score indicates a level of risk at the very bottom of the spectrum — highly reassuring. A bunch of scores fall somewhere in between. In other words, as a doctor, the risk I will discuss with this patient is more strongly determined by which PRS I happen to choose than by his actual genetic risk, whatever that is.

This may seem counterintuitive. All these risk scores were similarly accurate within a population; how can they all give different results to an individual? The answer is simpler than you may think. As long as a given score makes one extra good prediction for each extra bad prediction, its accuracy is not changed. 

Let’s imagine we have a population of 40 people.

Risk score model 1 correctly classified 30 of them for 75% accuracy. Great.

Risk score model 2 also correctly classified 30 of our 40 individuals, for 75% accuracy. It’s just a different 30.

Risk score model 3 also correctly classified 30 of 40, but another different 30.

I’ve colored this to show you all the different overlaps. What you can see is that although each score has similar accuracy, the individual people have a bunch of different colors, indicating that some scores worked for them and some didn’t. That’s a real problem. 
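
The same point can be reproduced numerically. Below is a minimal simulation mirroring the example above; the three models and the 40 people are invented, and a binary outcome is assumed. Each model is correct on a different random 30 of 40 people, so all three score 75% accuracy while disagreeing on many individuals.

import random

random.seed(0)
people = range(40)

# Three toy models, each "correct" on a different random 30 of the 40 people,
# so every model has identical 75% accuracy at the population level.
models = [set(random.sample(people, 30)) for _ in range(3)]

for i, correct in enumerate(models, 1):
    print(f"model {i} accuracy: {len(correct) / 40:.0%}")  # 75% for all three

# With a binary outcome, models agree on a person only if all are right
# or all are wrong; count the people on whom all three models agree.
agree = sum(1 for p in people if len({p in m for m in models}) == 1)
print(f"all three models agree on {agree} of 40 people")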

This has not stopped companies from advertising PRS for all sorts of diseases. Companies are even using PRS to decide which fetuses to implant during IVF therapy, a particularly egregious misuse of this technology that I have written about before.

How do you fix this? Our aliens tried to warn us. This is not how you are supposed to use this ray gun. You are supposed to use it to identify groups of people at higher risk to direct more resources to that group. That’s really all you can do.

It’s also possible that we need to match the risk score to the individual in a better way. This is likely driven by the fact that risk scores tend to work best in the populations in which they were developed, and many of them were developed in people of largely European ancestry. 

It is worth noting that if a PRS had perfect accuracy at the population level, it would also necessarily have perfect accuracy at the individual level. But there aren’t any scores like that. It’s possible that combining various scores may increase the individual accuracy, but that hasn’t been demonstrated yet either. 

Look, genetics plays, and will continue to play, a major role in healthcare. At the same time, sequencing entire genomes is a technology that is ripe for hype and thus misuse. Or even abuse. Fundamentally, this JAMA study reminds us that accuracy in a population and accuracy in an individual are not the same. But more deeply, it reminds us that just because a technology is new or cool or expensive doesn’t mean it will work in the clinic.

Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections


This transcript has been edited for clarity

I was really struggling to think of a good analogy to explain the glaring problem of polygenic risk scores (PRS) this week. But I think I have it now. Go with me on this.

An alien spaceship parks itself, Independence Day style, above a local office building. 

But unlike the aliens that gave such a hard time to Will Smith and Brent Spiner, these are benevolent, technologically superior guys. They shine a mysterious green light down on the building and then announce, maybe via telepathy, that 6% of the people in that building will have a heart attack in the next year.

 



They move on to the next building. “Five percent will have a heart attack in the next year.” And the next, 7%. And the next, 2%. 

Let’s assume the aliens are entirely accurate. What do you do with this information?

Most of us would suggest that you find out who was in the buildings with the higher percentages. You check their cholesterol levels, get them to exercise more, do some stress tests, and so on.

But that said, you’d still be spending a lot of money on a bunch of people who were not going to have heart attacks. So, a crack team of spies — in my mind, this is definitely led by a grizzled Ian McShane — infiltrate the alien ship, steal this predictive ray gun, and start pointing it, not at buildings but at people. 

In this scenario, one person could have a 10% chance of having a heart attack in the next year. Another person has a 50% chance. The aliens, seeing this, leave us one final message before flying into the great beyond: “No, you guys are doing it wrong.”

This week: The people and companies using an advanced predictive technology, PRS , wrong — and a study that shows just how problematic this is.

We all know that genes play a significant role in our health outcomes. Some diseases (Huntington diseasecystic fibrosissickle cell diseasehemochromatosis, and Duchenne muscular dystrophy, for example) are entirely driven by genetic mutations.

The vast majority of chronic diseases we face are not driven by genetics, but they may be enhanced by genetics. Coronary heart disease (CHD) is a prime example. There are clearly environmental risk factors, like smoking, that dramatically increase risk. But there are also genetic underpinnings; about half the risk for CHD comes from genetic variation, according to one study.

But in the case of those common diseases, it’s not one gene that leads to increased risk; it’s the aggregate effect of multiple risk genes, each contributing a small amount of risk to the final total. 

The promise of PRS was based on this fact. Take the genome of an individual, identify all the risk genes, and integrate them into some final number that represents your genetic risk of developing CHD.

The way you derive a PRS is take a big group of people and sequence their genomes. Then, you see who develops the disease of interest — in this case, CHD. If the people who develop CHD are more likely to have a particular mutation, that mutation goes in the risk score. Risk scores can integrate tens, hundreds, even thousands of individual mutations to create that final score.

There are literally dozens of PRS for CHD. And there are companies that will calculate yours right now for a reasonable fee.

The accuracy of these scores is assessed at the population level. It’s the alien ray gun thing. Researchers apply the PRS to a big group of people and say 20% of them should develop CHD. If indeed 20% develop CHD, they say the score is accurate. And that’s true.

But what happens next is the problem. Companies and even doctors have been marketing PRS to individuals. And honestly, it sounds amazing. “We’ll use sophisticated techniques to analyze your genetic code and integrate the information to give you your personal risk for CHD.” Or dementia. Or other diseases. A lot of people would want to know this information. 

It turns out, though, that this is where the system breaks down. And it is nicely illustrated by this study, appearing November 16 in JAMA.

The authors wanted to see how PRS, which are developed to predict disease in a group of people, work when applied to an individual.

They identified 48 previously published PRS for CHD. They applied those scores to more than 170,000 individuals across multiple genetic databases. And, by and large, the scores worked as advertised, at least across the entire group. The weighted accuracy of all 48 scores was around 78%. They aren’t perfect, of course. We wouldn’t expect them to be, since CHD is not entirely driven by genetics. But 78% accurate isn’t too bad.

But that accuracy is at the population level. At the level of the office building. At the individual level, it was a vastly different story.

This is best illustrated by this plot, which shows the score from 48 different PRS for CHD within the same person. A note here: It is arranged by the publication date of the risk score, but these were all assessed on a single blood sample at a single point in time in this study participant.

 



The individual scores are all over the map. Using one risk score gives an individual a risk that is near the 99th percentile — a ticking time bomb of CHD. Another score indicates a level of risk at the very bottom of the spectrum — highly reassuring. A bunch of scores fall somewhere in between. In other words, as a doctor, the risk I will discuss with this patient is more strongly determined by which PRS I happen to choose than by his actual genetic risk, whatever that is.

This may seem counterintuitive. All these risk scores were similarly accurate within a population; how can they all give different results to an individual? The answer is simpler than you may think. As long as a given score makes one extra good prediction for each extra bad prediction, its accuracy is not changed. 

Let’s imagine we have a population of 40 people.

 



Risk score model 1 correctly classified 30 of them for 75% accuracy. Great.

 



Risk score model 2 also correctly classified 30 of our 40 individuals, for 75% accuracy. It’s just a different 30.

 



Risk score model 3 also correctly classified 30 of 40, but another different 30.



I’ve colored this to show you all the different overlaps. What you can see is that although each score has similar accuracy, the individual people have a bunch of different colors, indicating that some scores worked for them and some didn’t. That’s a real problem. 

This has not stopped companies from advertising PRS for all sorts of diseases. Companies are even using PRS to decide which fetuses to implant during IVF therapy, which is a particularly egregiously wrong use of this technology that I have written about before.

How do you fix this? Our aliens tried to warn us. This is not how you are supposed to use this ray gun. You are supposed to use it to identify groups of people at higher risk to direct more resources to that group. That’s really all you can do.

It’s also possible that we need to match the risk score to the individual in a better way. This is likely driven by the fact that risk scores tend to work best in the populations in which they were developed, and many of them were developed in people of largely European ancestry. 

It is worth noting that if a PRS had perfect accuracy at the population level, it would also necessarily have perfect accuracy at the individual level. But there aren’t any scores like that. It’s possible that combining various scores may increase the individual accuracy, but that hasn’t been demonstrated yet either. 

Look, genetics is and will continue to play a major role in healthcare. At the same time, sequencing entire genomes is a technology that is ripe for hype and thus misuse. Or even abuse. Fundamentally, this JAMA study reminds us that accuracy in a population and accuracy in an individual are not the same. But more deeply, it reminds us that just because a technology is new or cool or expensive doesn’t mean it will work in the clinic. 

 

Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.



Sitting for More Than 10 Hours Daily Ups Heart Disease Risk

Article Type
Changed
Wed, 11/27/2024 - 03:15

TOPLINE:

Sedentary time exceeding 10.6 h/d is linked to an increased risk for atrial fibrillation, heart failure, myocardial infarction, and cardiovascular (CV) mortality, researchers found. The risk persists even in individuals who meet recommended physical activity levels.

METHODOLOGY:

  • Researchers used a validated machine learning approach to investigate the relationships between sedentary behavior and the future risks for CV illness and mortality in 89,530 middle-aged and older adults (mean age, 62 years; 56% women) from the UK Biobank.
  • Participants provided data from a wrist-worn triaxial accelerometer that recorded their movements over a period of 7 days.
  • Machine learning algorithms classified accelerometer signals into four classes of activity: sleep, sedentary behavior, light physical activity, and moderate to vigorous physical activity.
  • Participants were followed up for a median of 8 years through linkage to national health-related datasets in England, Scotland, and Wales.
  • The median sedentary time was 9.4 h/d.

TAKEAWAY:

  • During the follow-up period, 3638 individuals (4.9%) experienced incident atrial fibrillation, 1854 (2.09%) developed incident heart failure, 1610 (1.84%) experienced incident myocardial infarction, and 846 (0.94%) died from cardiovascular causes.
  • The risks for atrial fibrillation and myocardial infarction increased steadily with an increase in sedentary time, with sedentary time greater than 10.6 h/d showing a modest increase in risk for atrial fibrillation (hazard ratio [HR], 1.11; 95% CI, 1.01-1.21).
  • The risks for heart failure and CV mortality were low until sedentary time surpassed approximately 10.6 h/d, after which they rose by 45% (HR, 1.45; 95% CI, 1.28-1.65) and 62% (HR, 1.62; 95% CI, 1.34-1.96), respectively.
  • The associations were attenuated but remained significant for CV mortality (HR, 1.33; 95% CI, 1.07-1.64) in individuals who met the recommended levels of physical activity yet were sedentary for more than 10.6 h/d. Reallocating 30 minutes of sedentary time to other activities reduced the risk for heart failure (HR, 0.93; 95% CI, 0.90-0.96) among those who were sedentary more than 10.6 h/d (a short sketch after this list shows how these hazard ratios translate into the percentages quoted above).
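
For readers who want the conversion spelled out, here is a minimal sketch using only the hazard ratios quoted above; it simply restates each point estimate as a percentage increase in hazard relative to the less-sedentary comparator.

```python
# Minimal sketch: converting the quoted hazard ratios into "% higher hazard"
# language. The HRs are the point estimates from TAKEAWAY; nothing else is implied.
hrs = {
    "atrial fibrillation": 1.11,
    "heart failure": 1.45,
    "CV mortality": 1.62,
}

for outcome, hr in hrs.items():
    pct = (hr - 1) * 100  # HR 1.45 -> 45% higher hazard than < 10.6 h/d
    print(f"{outcome}: HR {hr:.2f} = {pct:.0f}% higher hazard")
```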

IN PRACTICE:

The study “highlights a complex interplay between sedentary behavior and physical activity, ultimately suggesting that sedentary behavior remains relevant for CV disease risk even among individuals meeting sufficient” levels of activity, the researchers reported.

“Individuals should move more and be less sedentary to reduce CV risk. ... Being a ‘weekend warrior’ and meeting guideline levels of [moderate to vigorous physical activity] of 150 minutes/week will not completely abolish the deleterious effects of extended sedentary time of > 10.6 hours per day,” Charles B. Eaton, MD, MS, of the Warren Alpert Medical School of Brown University in Providence, Rhode Island, wrote in an editorial accompanying the journal article.

 

SOURCE:

The study was led by Ezimamaka Ajufo, MD, of Brigham and Women’s Hospital in Boston. It was published online on November 15, 2024, in the Journal of the American College of Cardiology.

LIMITATIONS:

Wrist-based accelerometers cannot assess specific contexts for sedentary behavior and may misclassify standing time as sedentary time, and these limitations may have affected the findings. Physical activity was measured for 1 week only, which might not have fully represented habitual activity patterns. The sample included predominantly White participants and was enriched for health and socioeconomic status, which may have limited the generalizability of the findings.

DISCLOSURES:

The authors disclosed receiving research support, grants, and research fellowships and collaborations from various institutions and pharmaceutical companies, as well as serving on their advisory boards.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


An Epidemiologist’s Guide to Debunking Nutritional Research

Article Type
Changed
Wed, 11/27/2024 - 04:13
Or How to Seem Clever at Dinner Parties

You’re invited to a dinner party, but you struggle to make small talk. Do not worry; you can use your knowledge of study design and epidemiology to impress people with your savoir faire regarding the popular food myths that will invariably crop up over cocktails. Because all journalism has been reduced to listicles, here are four ways to seem clever at dinner parties.

1. The Predinner Cocktails: A Lesson in Reverse Causation

Wine connoisseurs sniff, swirl, and gently swish the wine in their mouths before spitting it out and cleansing their palates to better appreciate the subtlety of each vintage. If you’re not an oenophile, no matter. Whenever somebody claims that moderate amounts of alcohol are good for your heart, this is your moment to pounce. Interject yourself into the conversation and tell everybody about reverse causation.

Reverse causation, also known as protopathic bias, involves misinterpreting the directionality of an association. You assume that X leads to Y, when in fact Y leads to X. Temporal paradoxes are useful plot devices in science fiction movies, but they have no place in medical research. In our bland world, cause must precede effect. As such, smoking leads to lung cancer; lung cancer doesn’t make you smoke more. 

But with alcohol, directionality is less obvious. Many studies of alcohol and cardiovascular disease have demonstrated a U-shaped association, with risk being lowest among those who drink moderate amounts of alcohol (usually one to two drinks per day) and higher in those who drink more and also those who drink very little.

But one must ask why some people drink little or no alcohol. There is an important difference between former drinkers and never drinkers. Former drinkers cut back for a reason. More likely than not, the reason for this newfound sobriety was medical. A new cancer diagnosis, the emergence of atrial fibrillation, the development of diabetes, or rising blood pressure are all good reasons to reduce or eliminate alcohol. A cross-sectional study will fail to capture that alcohol consumption changes over time — people who now don’t drink may have imbibed alcohol in years past. It was not abstinence that led to an increased risk for heart disease; it was the increased risk for heart disease that led to abstinence.
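
If you want to watch reverse causation happen, here is a toy simulation; every probability in it is invented for illustration. Alcohol does nothing to disease risk in this little world, yet a cross-sectional snapshot makes current abstainers look far sicker, purely because people who develop disease tend to quit drinking.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Everyone starts out a moderate drinker; disease arises INDEPENDENTLY of alcohol.
has_disease = rng.random(n) < 0.10

# Sick-quitter effect: people who develop disease often stop drinking.
p_still_drinks = np.where(has_disease, 0.4, 0.9)
still_drinks = rng.random(n) < p_still_drinks

# Cross-sectional snapshot: disease prevalence by CURRENT drinking status.
print(f"current drinkers:   {has_disease[still_drinks].mean():.1%}")   # ~4.7%
print(f"current abstainers: {has_disease[~still_drinks].mean():.1%}")  # ~40%
# Abstinence looks dangerous even though alcohol did nothing: reverse causation.
```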

You see the same phenomenon with the so-called obesity paradox. The idea that being a little overweight is good for you may appeal when you no longer fit into last year’s pants. But people who are underweight are so for a reason. Malnutrition, cachexia from cancer, or some other cause is almost certainly driving up the risk at the left-hand side of the U-shaped curve that makes the middle part seem better than it actually is.

Food consumption changes over time. A cross-sectional survey at one point in time cannot accurately capture past habits and distant exposures, especially for diseases such as heart disease and cancer that develop slowly over time. Studies on alcohol that try to overcome these shortcomings by eliminating former drinkers, or by using Mendelian randomization to better account for past exposure, do not show a cardiovascular benefit for moderate red wine drinking.

 

2. The Hors D’oeuvres — The Importance of RCTs

Now that you have made yourself the center of attention, it is time to cement your newfound reputation as a font of scientific knowledge. Most self-respecting hosts will serve smoked salmon as an amuse-bouche before the main meal. When someone mentions the health benefits of fish oils, you should take the opportunity to teach them about confounding.

Fish, especially cold-water fish from northern climates, have relatively high amounts of omega-3 fatty acids. Despite the plethora of observational studies suggesting a cardiovascular benefit, it’s now relatively clear that fish oil or omega-3 supplements have no medical benefit.

This will probably come as a shock to the worried well, but many studies, including VITAL and ASCEND, have demonstrated no cardiovascular or cancer benefit to supplementation with omega-3s. The reason is straightforward and explains why hormone replacement therapy, vitamin D, and myriad purported game-changers never panned out. Confounding is hard to overcome in observational research.

Prior to the publication of the Women’s Health Initiative (WHI) Study, hormone replacement therapy was routinely prescribed to postmenopausal women because numerous observational studies suggested a cardiovascular benefit. But with the publication of the WHI study, it became clear that much of that “benefit” was due to confounding. The women choosing to take hormones were more health conscious at baseline and healthier overall. 

A similar phenomenon occurred during COVID. Patients with low serum vitamin D levels had worse outcomes, prompting many to suggest vitamin D supplementation as a possible treatment. Trials did not support the intervention because we’d overlooked the obvious. People with vitamin D deficiency have underlying health problems that contribute to the vitamin D deficiency. They are probably older, frailer, possibly with a poorer diet. No amount of statistical adjustment can account for all those differences, and some degree of residual confounding will always persist.

The only way to overcome confounding is with randomization. When patients are randomly assigned to one group or another, their baseline differences largely balance out if the randomization was performed properly and the groups were large enough. There is a role for observational research, such as in situations where ethics, cost, and practicality do not allow for a randomized controlled trial. But randomized controlled trials have largely put to rest the purported health benefits of over-the-counter fish oils, omega-3s, and vitamin D.
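
To see why randomization fixes what statistical adjustment cannot, here is another toy simulation, again with all numbers invented. An entirely inert supplement looks protective observationally because health-conscious people both take it and have fewer events; a coin flip makes the effect vanish.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Unmeasured confounder: baseline health consciousness.
health = rng.normal(size=n)
p_event = 1 / (1 + np.exp(1.5 + health))  # healthier people have fewer events
events = rng.random(n) < p_event

# Observational world: healthier people are more likely to take the (inert) pill.
chooses_pill = rng.random(n) < 1 / (1 + np.exp(-health))
print(f"observational: {events[chooses_pill].mean():.1%} (users) "
      f"vs {events[~chooses_pill].mean():.1%} (nonusers)")  # pill "looks" protective

# Randomized world: a coin flip assigns the same inert pill; groups balance out.
assigned_pill = rng.random(n) < 0.5
print(f"randomized:    {events[assigned_pill].mean():.1%} "
      f"vs {events[~assigned_pill].mean():.1%}")            # no difference
```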

 

3. The Main Course — Absolute vs Relative Risk

When you get to the main course, all eyes will now be on you. You will almost certainly be called upon to pronounce on the harms or benefits of red meat consumption. Begin by regaling your guests with a little trivia. Ask them if they know the definition of red meat and white meat. When someone says pork is white meat, you can reveal that “pork, the other white meat,” was a marketing slogan with no scientific underpinning. Now that everyone is lulled into a stupefied silence, tell them that red meat comes from mammals and white meat comes from birds. As they process this revelation, you can now launch into the deeply mathematical concept of absolute vs relative risk.

Many etiquette books will caution against bringing up math at a dinner party. These books are wrong. Everyone finds math interesting if they are primed properly. For example, you can point to a study claiming that berries reduce cardiovascular risk in women. Even if true — and there is reason to be cautious, given the observational nature of the research — we need to understand what the authors meant by a 32% risk reduction. (Side note: It was a reduction in hazard, with a hazard ratio of 0.68 (95% CI, 0.49-0.96), but we won’t dwell on the difference between hazard ratios and risk ratios right now.)

This relative risk reduction has to be interpreted carefully. The authors divided the population into quintiles based on their consumption of anthocyanins (the antioxidant in blueberries and strawberries) and compared the bottom fifth (average consumption, 2.5 mg/d) with the top fifth (average consumption, 25 mg/d). The bottom quintile had 126 myocardial infarctions (MIs) over 324,793 patient-years compared with 59 MIs over 332,143 patient-years. Some quick math shows an approximate reduction from 39 to 18 MIs per 100,000 patient-years. Or to put it another way, you must get 4762 women to increase their berry consumption 10-fold for 1 year to prevent one heart attack. Feel free to show people how you calculated this number. They will be impressed by your head for numbers. It is nothing more than 39 minus 18, divided by 100,000, to get the absolute risk reduction. Take the reciprocal of this (ie, 1 divided by this number) to get the number needed to treat.
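
For the spreadsheet-averse, here is the same back-of-envelope calculation as runnable code, using only the event counts and patient-years quoted above. (It lands on 4755 rather than 4762 because the article rounds the rates to 39 and 18 per 100,000 before subtracting.)

```python
# Reproducing the berry arithmetic from the paragraph above.
mi_low, py_low = 126, 324_793     # bottom quintile: MIs and patient-years
mi_high, py_high = 59, 332_143    # top quintile

rate_low = mi_low / py_low * 100_000     # ~38.8 MIs per 100,000 patient-years
rate_high = mi_high / py_high * 100_000  # ~17.8

arr = (rate_low - rate_high) / 100_000   # absolute risk reduction per patient-year
nnt = 1 / arr                            # treat this many for 1 year to prevent 1 MI
print(f"{rate_low:.0f} vs {rate_high:.0f} per 100,000; NNT ≈ {nnt:,.0f}")
```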

Describing risks in absolute terms or using number needed to treat (or harm) can help conceptualize statistics that are sometimes hard to wrap your head around.

 

4. Dessert — Funding

By the time the coffee is served, everyone will be hanging on to your every word. This is as it should be, and you should not be afraid of your newfound power and influence. 

Dessert will probably involve some form of chocolate, possibly in cake format. (Anyone who serves fruit as dessert is not someone you should associate with.) Take the opportunity to tell your fellow diners that chocolate is not actually good for you and will not boost brain performance.

The health benefits of chocolate are often repeated but rarely scrutinized. In fact, much of the scientific research purporting to show that chocolate is good for you did not actually study chocolate. It usually involved a cocoa bean extract because the chocolate manufacturing process destroys the supposedly health-promoting antioxidants in the cocoa bean. It is true that dark chocolate has more antioxidants than milk chocolate, and that the addition of milk to chocolate further inactivates the potentially healthy antioxidants. But the amount of sugar and fat that has to be added to chocolate to make it palatable precludes any serious consideration about health benefits. Dark chocolate may have less fat and sugar than milk chocolate, but it still has a lot.

But even the cocoa bean extract doesn’t seem to do much for your heart or your brain. The long-awaited COSMOS study was published with surprisingly little fanfare. The largest randomized controlled trial of chocolate (or rather cocoa bean extract) was supposed to settle the issue definitively.

COSMOS showed no cardiovascular or neurocognitive benefit to the cocoa bean extract. But the health halo of chocolate continues to be bolstered by many studies funded by chocolate manufacturers.

We are appropriately critical of the pharmaceutical industry’s involvement in drug research. However, we should not forget that any private entity is prone to the same self-interest regardless of its product’s tastiness. How many of you knew that there was an avocado lobby funding research? No matter how many industry-funded observational studies using surrogate endpoints are out there telling you that chocolate is healthy, a randomized trial with hard clinical endpoints such as COSMOS should generally win the day.

 

The Final Goodbyes — Summarizing Your Case

As the party slowly winds down and everyone is saddened that you will soon take your leave, synthesize everything you have taught them over the evening. Like movies, not all studies are good. Some are just bad. They can be prone to reverse causation or confounding, and they may report relative risks when absolute risks would be more telling. Reading research studies critically is essential for separating the wheat from the chaff. With the knowledge you have now imparted to your friends, they will be much better consumers of medical news, especially when it comes to food. 

And they will no doubt thank you for it by never inviting you to another dinner party!

Labos, a cardiologist at Hôpital Notre-Dame, Montreal, Quebec, Canada, has disclosed no relevant financial relationships. He has a degree in epidemiology.

A version of this article appeared on Medscape.com.


On Second Thought: Aspirin for Primary Prevention — What We Really Know

Article Type
Changed
Wed, 11/27/2024 - 04:38

This transcript has been edited for clarity

Aspirin. Once upon a time, everybody over age 50 years was supposed to take a baby aspirin. Now we make it a point to tell people to stop. What is going on?  

Our recommendations vis-à-vis aspirin have evolved at a dizzying pace. The young’uns watching us right now don’t know what things were like in the 1980s. The Reagan era was a wild, heady time when nuclear war seemed imminent and we didn’t prescribe aspirin to patients.

That only started in 1988, which was a banner year in human history. Not because a number of doves were incinerated by the lighting of the Olympic torch at the Seoul Olympics — look it up if you don’t know what I’m talking about — but because 1988 saw the publication of the ISIS-2 trial, which first showed a mortality benefit to prescribing aspirin post–myocardial infarction (MI).

Giving patients aspirin during or after a heart attack is not controversial. It’s one of the few things in this business that isn’t, but that’s secondary prevention — treating somebody after they develop a disease. Primary prevention, treating them before they have their incident event, is a very different ballgame. Here, things are messy. 

For one thing, the doses used have been very inconsistent. We should point out that the reason for 81 mg of aspirin is entirely arbitrary and is rooted in the old apothecary system of weights and measures. A standard dose of aspirin was 5 grains, where 20 grains made 1 scruple, 3 scruples made 1 dram, 8 drams made 1 oz, and 12 oz made 1 lb (because screw you, metric system). Therefore, 5 grains worked out to 325 mg of aspirin, and one quarter of the standard dose became 81 mg once you rounded off the decimal. 
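If you want to check that apothecary arithmetic, here is a minimal sketch; it assumes the standard grain of 64.79891 mg, and the names are purely illustrative:

```python
# Minimal sketch of the apothecary arithmetic behind 81-mg "baby" aspirin.
MG_PER_GRAIN = 64.79891               # 1 grain, the base apothecary unit

standard_dose_mg = 5 * MG_PER_GRAIN   # ~324 mg, marketed as 325 mg
quarter_dose_mg = 325 / 4             # 81.25 mg, rounded down to 81 mg

print(f"5 grains = {standard_dose_mg:.1f} mg (marketed as 325 mg)")
print(f"one quarter of 325 mg = {quarter_dose_mg:.2f} mg (rounded to 81 mg)")
```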

People have tried all kinds of dosing structures with aspirin prophylaxis. The Physicians’ Health Study used full-dose aspirin, 325 mg every 2 days, while the Hypertension Optimal Treatment (HOT) trial tested 75 mg daily and the Women’s Health Study tested 100 mg, but every other day. 
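Averaged over days, those schedules imply quite different daily exposures; here is a minimal sketch, assuming simple averaging across days:

```python
# Average daily aspirin exposure implied by the trial schedules above.
schedules_mg_per_day = {
    "Physicians' Health Study (325 mg every 2 days)": 325 / 2,  # 162.5
    "HOT (75 mg daily)": 75.0,                                  # 75.0
    "Women's Health Study (100 mg every other day)": 100 / 2,   # 50.0
}
for trial, dose in schedules_mg_per_day.items():
    print(f"{trial}: ~{dose:g} mg/day on average")
```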

Ironically, almost no one has studied 81 mg every day, which is weird if you think about it. The bigger problem here is not the variability of doses used, but the discrepancy when you look at older vs newer studies.

Older studies, like the Physicians’ Health Study, did show a benefit, at least in the subgroup of patients over age 50 years, which is probably where the “everybody over 50 should be taking an aspirin” idea comes from, at least as near as I can tell. 

More recent studies, like the Women’s Health Study, ASPREE, or ARRIVE, didn’t show a benefit. I know what you’re thinking: Newer stuff is always better. That’s why you should never trust anybody over age 40 years. The context of primary prevention studies has changed. In the ‘80s and ‘90s, people smoked more and we didn’t have the same medications that we have today. We talked about all this in the beta-blocker video to explain why beta-blockers don’t seem to have a benefit post MI.

We have a similar issue here. The magnitude of the benefit with aspirin primary prevention has decreased because we’re all just healthier overall. So, yay! Progress! Here’s where the numbers matter. No one is saying that aspirin doesn’t help. It does. 

If we look at the 2019 meta-analysis published in JAMA, there is a cardiovascular benefit. The numbers bear that out. I know you’re all here for the math, so here we go. Aspirin reduced the composite cardiovascular endpoint from 65.2 to 60.2 events per 10,000 patient-years; or to put it more meaningfully in absolute risk reduction terms, because that’s my jam, an absolute risk reduction of 0.41%, which means a number needed to treat of 241, which is okay-ish. It’s not super-great, but it may be justifiable for something that costs next to nothing. 

The tradeoff is bleeding. Major bleeding increased from 16.4 to 23.1 bleeds per 10,000 patient-years, or an absolute risk increase of 0.47%, which is a number needed to harm of 210. That’s the problem. Aspirin does prevent heart disease. The benefit is small, for sure, but the real problem is that it’s outweighed by the risk of bleeding, so you’re not really coming out ahead. 
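For those following along at home, here is the arithmetic as a minimal sketch; it uses only the rounded figures quoted above, so the small differences from the published 241 and 210 are just rounding in the source:

```python
# Minimal sketch of the NNT/NNH arithmetic behind the figures quoted above.
def number_needed(absolute_risk_difference_pct: float) -> float:
    """NNT (for a benefit) or NNH (for a harm) = 1 / absolute risk difference."""
    return 1 / (absolute_risk_difference_pct / 100)

print(f"CV events averted per 10,000 patient-years: {65.2 - 60.2:.1f}")   # 5.0
print(f"Major bleeds added per 10,000 patient-years: {23.1 - 16.4:.1f}")  # 6.7
print(f"NNT ~ {number_needed(0.41):.0f}")  # ~244, published as 241
print(f"NNH ~ {number_needed(0.47):.0f}")  # ~213, published as 210
```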

The real tragedy here is that the public is locked into this idea that everyone over age 50 years should be taking an aspirin. Even today, even though guidelines have recommended against aspirin for primary prevention for some time, data from the National Health Interview Survey found that nearly one in three older adults take aspirin for primary prevention when they shouldn’t be. That’s a large number of people. That’s millions of Americans — and Canadians, but nobody cares about us. It’s fine. 

That’s the point. We’re not debunking aspirin. It does work. The benefits are just really small in a primary prevention population and offset by the admittedly also really small risks of bleeding. It’s a tradeoff that doesn’t really work in your favor.

But that’s aspirin for cardiovascular disease. When it comes to cancer or DVT prophylaxis, that’s another really interesting story. We might have to save that for another time. Do I know how to tease a sequel or what?

Labos, a cardiologist at Kirkland Medical Center, Montreal, Quebec, Canada, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Higher Doses of Vitamin D3 Do Not Reduce Cardiac Biomarkers in Older Adults

Article Type
Changed
Tue, 10/22/2024 - 11:14

 

TOPLINE:

Higher doses of vitamin D3 supplementation did not significantly reduce cardiac biomarkers in older adults with low serum vitamin D levels. The STURDY trial found no significant differences in high-sensitivity cardiac troponin I (hs-cTnI) or N-terminal pro-B-type natriuretic peptide (NT-proBNP) between low- and high-dose groups.

METHODOLOGY:

  • A total of 688 participants aged 70 years or older with low serum 25-hydroxy vitamin D levels (10-29 ng/mL) were included in the STURDY trial.
  • Participants were randomized to receive one of four doses of vitamin D3 supplementation: 200, 1000, 2000, or 4000 IU/d, with 200 IU/d as the reference dose.
  • Cardiac biomarkers, including hs-cTnI and NT-proBNP, were measured at baseline, 3 months, 12 months, and 24 months.
  • The trial was conducted at two community-based research institutions in the United States between July 2015 and March 2019.
  • The effects of vitamin D3 dose on biomarkers were assessed via mixed-effects tobit models, with participants followed up to 24 months or until study termination.

TAKEAWAY:

  • Higher doses of vitamin D3 supplementation did not significantly affect hs-cTnI levels compared with the low-dose group (1.6% difference; 95% CI, −5.3 to 8.9).
  • No significant differences were observed in NT-proBNP levels between the high-dose and low-dose groups (−1.8% difference; 95% CI, −9.3 to 6.3).
  • Both hs-cTnI and NT-proBNP levels increased in both low- and high-dose groups over time, with hs-cTnI increasing by 5.2% and 7.0%, respectively, and NT-proBNP increasing by 11.3% and 9.3%, respectively.
  • The findings suggest that higher doses of vitamin D3 supplementation do not reduce markers of subclinical cardiovascular disease in older adults with low serum vitamin D levels.

IN PRACTICE:

“We can speculate that the systemic effects of vitamin D deficiency are more profound among the very old, and there may be an inverse relationship between supplementation and inflammation. It is also possible that serum vitamin D level is a risk marker but not a risk factor for CVD risk and related underlying mechanisms,” wrote the authors of the study.

SOURCE:

The study was led by Katharine W. Rainer, MD, Beth Israel Deaconess Medical Center in Boston. It was published online in the Journal of the American College of Cardiology.

LIMITATIONS:

The study’s community-based population may limit the generalizability of the findings to populations at higher risk for cardiovascular disease. Additionally, the baseline cardiac biomarkers were lower than those in some high-risk populations, which may affect the precision of the assay performance. The study may not have had adequate power for cross-sectional and subgroup analyses. Both groups received some vitamin D3 supplementation, making it difficult to determine the impact of lower-dose supplementation vs no supplementation.

DISCLOSURES:

The study was supported by grants from the National Institute on Aging, the Office of Dietary Supplements, the Mid-Atlantic Nutrition Obesity Research Center, and the Johns Hopkins Institute for Clinical and Translational Research. Rainer disclosed receiving grants from these organizations.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Genetic Risk for Gout Raises Risk for Cardiovascular Disease Independent of Urate Level

Article Type
Changed
Tue, 10/15/2024 - 15:25

 

TOPLINE:

Genetic predisposition to gout, unfavorable lifestyle habits, and poor metabolic health are associated with an increased risk for cardiovascular disease (CVD); however, adherence to a healthy lifestyle can reduce this risk by up to 62%, even in individuals with high genetic risk.

METHODOLOGY:

  • Researchers investigated the association between genetic predisposition to gout, combined with lifestyle habits, and the risk for CVD in two diverse prospective cohorts from different ancestral backgrounds.
  • They analyzed the data of 224,689 participants of European descent from the UK Biobank (mean age, 57.0 years; 56.1% women) and 50,364 participants of East Asian descent from the Korean Genome and Epidemiology Study (KoGES; mean age, 53.7 years; 66.0% women).
  • The genetic predisposition to gout was evaluated using a polygenic risk score (PRS) derived from a meta-analysis of genome-wide association studies, and the participants were categorized into low, intermediate, and high genetic risk groups based on their PRS for gout.
  • A favorable lifestyle was defined as having ≥ 3 healthy lifestyle factors, and ideal metabolic health status as having 0-1 metabolic syndrome factors.
  • The incident CVD risk was evaluated according to genetic risk, lifestyle habits, and metabolic syndrome.

TAKEAWAY:

  • Individuals in the high genetic risk group had a higher risk for CVD than those in the low genetic risk group in both the UK Biobank (adjusted hazard ratio [aHR], 1.10; P < .001) and KoGES (aHR, 1.31; P = .024) cohorts.
  • In the UK Biobank cohort, individuals with a high genetic risk for gout and unfavorable lifestyle choices had a 1.99 times higher risk for incident CVD than those with low genetic risk (aHR, 1.99; P < .001); similar outcomes were observed in the KoGES cohort.
  • Similarly, individuals with a high genetic risk for gout and poor metabolic health in the UK Biobank cohort had a 2.16 times higher risk for CVD than those with low genetic risk (aHR, 2.16; P < .001); findings were consistent in the KoGES cohort.
  • Improving metabolic health and adhering to a healthy lifestyle reduced the risk for CVD by 62% in individuals with high genetic risk and by 46% in those with low genetic risk (P < .001 for both; see the sketch below).
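Note: The summary quotes percent reductions rather than the underlying adjusted hazard ratios. Assuming the usual convention that a stated reduction equals (1 − aHR) × 100, a minimal sketch of the mapping:

```python
# Hedged sketch: mapping a stated percent risk reduction to the hazard
# ratio it implies, assuming the convention RRR = (1 - HR) x 100.
# The underlying aHRs are inferred here, not quoted in the summary.

def implied_hazard_ratio(reduction_pct: float) -> float:
    """Hazard ratio implied by a stated percent risk reduction."""
    return 1 - reduction_pct / 100

for pct in (62, 46):
    print(f"{pct}% reduction -> implied aHR ~ {implied_hazard_ratio(pct):.2f}")
# 62% (high genetic risk) -> ~0.38; 46% (low genetic risk) -> ~0.54
```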

IN PRACTICE:

“PRS for gout can be used for preventing not only gout but also CVD. It is possible to identify individuals with high genetic risk for gout and strongly recommend modifying lifestyle habits. Weight reduction, smoking cessation, regular exercise, and eating healthy food are effective strategies to prevent gout and CVD,” the authors wrote.

SOURCE:

This study was led by Ki Won Moon, MD, PhD, Department of Internal Medicine, Kangwon National University School of Medicine, Chuncheon, Republic of Korea, and SangHyuk Jung, PhD, University of Pennsylvania, Philadelphia, and was published online on October 8, 2024, in RMD Open.


LIMITATIONS: 

The definitions of lifestyle and metabolic syndrome were different in each cohort, which may have affected the findings. Data on lifestyle behaviors and metabolic health statuses were collected at enrollment, but these variables may have changed during the follow-up period, which potentially introduced bias into the results. This study was not able to establish causality between genetic predisposition to gout and the incident risk for CVD.

DISCLOSURES:

This study was supported by the National Institute of General Medical Sciences and the National Research Foundation of Korea. The authors declared no competing interests.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


New Evidence That Plaque Buildup Shouldn’t Be Ignored

Article Type
Changed
Thu, 10/10/2024 - 14:32

 

Subclinical disease detected on imaging predicts death, report investigators, who show that plaque burden found on 3D vascular ultrasound and coronary artery calcium on CT predicted death better than traditional risk factors did.

The work not only highlights the importance of early detection, but it also has clinical implications, said Valentin Fuster, MD, president of the Mount Sinai Fuster Heart Hospital in New York. “It’s going to change things,” he said. “What I believe is going to happen is that we will begin to evaluate people with risk factors at age 30 using imaging. Today, we evaluate people at age 50 using clinical practice guidelines.”

Fuster’s team developed 3D vascular ultrasound to assess plaque burden and applied it in a prospective cohort study known as BioImage. The researchers assessed 6102 patients in Chicago, Illinois, and Fort Lauderdale, Florida, using 3D vascular ultrasound of the carotid artery and another well-established modality — coronary artery calcium, determined by CT.

Participants had no cardiovascular symptoms, yet their plaque burden and calcium scores at the beginning of the study were significantly associated with death during the 15 years of follow-up, even after taking risk factors and medication into account. The results are published in the Journal of the American College of Cardiology.

“Now, there is no question that subclinical disease on imaging predicts mortality,” said Fuster.

David J. Maron, MD, a preventive cardiologist at the Stanford University School of Medicine in California, calls the finding “very important.”

“The presence of atherosclerosis is powerful knowledge to guide the intensity of therapy and to motivate patients and clinicians to treat it,” said Maron, who is the co-author of an accompanying editorial and was not involved in the study.
 

Predicting Risk Early

The research also showed that the risk for death increases if the burden of plaque in the carotid artery increases over time. Both plaque burden shown on 3D vascular ultrasound and coronary artery calcium on CT were better predictors of death than traditional risk factors.

Maron says recent studies of younger populations, such as Progression of Early Subclinical Atherosclerosis (PESA) and Coronary Artery Risk Development in Young Adults (CARDIA), show that “risk factors at a young age have much more impact on arterial disease than when we measure risk factors at older age.” The CARDIA study showed signs of atherosclerosis in patients as young as in their twenties. This paradigm shift to early detection will now be possible thanks to technological advances like 3D vascular ultrasound.

Maron said he agrees with screening earlier in life. “The risk of having an event is related to the plaque burden and the number of years that a patient has been exposed to that burden. The earlier in life we can identify the burden to slow, arrest, or even reverse the plaque, the better.”

Maron points out that the study looked at an older population and did not include information on cause of death. While a study of younger people and data on cardiac causes of death would be useful, he says the study’s conclusions remain significant.
 

3D Vascular Ultrasound vs Coronary Artery Calcium

While both imaging methods in the study predicted death better than cardiovascular risk factors alone, each option has advantages.

For coronary artery calcium, “there’s a huge amount of literature demonstrating the association with cardiovascular events, there’s a standardized scoring system, there are widespread facilities for computed tomography, and there is not a lot of variability in the measurement — it’s not dependent on the operator,” said Maron.

But there is one drawback. The scoring system — the Agatston score — can paradoxically go up following aggressive lowering of low-density lipoprotein cholesterol. “Once coronary calcium is present, it is challenging to interpret a repeat scan because we don’t know if the increase in score is due to progression or increasing density of the calcium, which is a sign of healing,” said Maron.

Vascular ultrasound avoids this problem and can also identify early noncalcified plaques and monitor their progression before they would appear on CT. Furthermore, the imaging does not add to lifetime radiation dose, as CT does, Fuster said.

3D ultrasound technology will soon be available in an inexpensive, automated, and easy-to-use format, he explains. Fuster envisions a scenario in which a nurse in a low-income country, using a cell phone app, will be able to assess atherosclerosis in a patient’s femoral artery. “In less than 1 hour, we can predict disease much more rigorously than with risk factors alone,” he said. “I think this is very exciting.”
 

Progression Increases Risk

Finding any atherosclerosis means an increased risk for death, and a greater burden of atherosclerosis raises that risk, said Fuster. Progression of atherosclerosis increases the risk even further. 

The study looked at changes in atherosclerosis burden on vascular ultrasound in a subset of 732 patients a median of 8.9 years after their first test. Those with progression had a higher risk for death than those with regression or no atherosclerosis. “Progression is much more significant in predicting mortality than atherosclerosis findings alone,” Fuster said.

Maron said this finding points to “two great values from noninvasive imaging of atherosclerosis.” Not only does imaging detect atherosclerosis, but it can also characterize the burden and any calcification. Further, it allows doctors to monitor the response to interventions such as lifestyle changes and medical therapy. “Serial imaging of plaque burden will really enhance the management of atherosclerosis,” said Maron. “If we discover that someone is progressing rapidly, we can intensify therapy.”

He says imaging results also provide needed motivation for both clinicians and patients to take action that would prevent the deaths that result from atherosclerosis.
 

A version of this article appeared on Medscape.com.
