Are Beta-Blockers Needed Post MI? No, Even After the ABYSS Trial
The ABYSS trial found that interruption of beta-blocker therapy in patients after myocardial infarction (MI) was not noninferior to continuing the drugs.
I will argue why I think it is okay to stop beta-blockers after MI — despite this conclusion. The results of ABYSS are, in fact, similar to REDUCE-AMI, which compared beta-blocker use or nonuse immediately after MI, and found no difference in a composite endpoint of death or MI.
The ABYSS Trial
ABYSS investigators randomly assigned nearly 3700 patients who had MI and were prescribed a beta-blocker to either continue (control arm) or stop (active arm) the drug at 1 year.
Patients had to have a left ventricular ejection fraction (LVEF) of at least 40%; the median was 60%.
The composite primary endpoint included death, MI, stroke, or hospitalization for any cardiovascular reason. ABYSS authors chose a noninferiority design. The assumption must have been that the interruption arm offered an easier option for patients — eg, fewer pills.
Over 3 years, a primary endpoint occurred in 23.8% of the interruption group vs 21.1% in the continuation group.
In ABYSS, the noninferiority margin was set at a 3% absolute risk increase. The observed 2.7% absolute risk increase had an upper bound of the 95% CI (worst case) of 5.5%, leading to the not-noninferior conclusion (5.5% exceeds the noninferiority margin).
More simply stated, the primary outcome event rate was higher in the interruption arm.
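The arithmetic behind that noninferiority verdict can be sketched in a few lines. The sketch below assumes roughly equal arms (the 3698 patients split evenly) and uses a simple Wald interval for the risk difference; the trial's actual statistical method may have differed slightly, but the numbers land close to the reported 2.7% increase and 5.5% upper bound.

```python
import math

# Assumptions (not stated in the article): ~1849 patients per arm,
# a two-sided 95% Wald interval for the risk difference.
n1 = n2 = 1849
p1, p2 = 0.238, 0.211          # interruption vs continuation event rates
margin = 0.03                  # prespecified noninferiority margin

diff = p1 - p2                 # absolute risk increase (~0.027)
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
upper = diff + 1.96 * se       # worst-case bound of the 95% CI

print(f"risk difference = {diff:.3f}, upper 95% bound = {upper:.3f}")
print("noninferior" if upper < margin else "not noninferior")
```

Because the worst-case upper bound (about 5.4% under these assumptions) exceeds the 3% margin, interruption cannot be declared noninferior, which is the trial's conclusion.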
Does This Mean We Should Continue Beta-Blockers in Post-MI Patients?
This led some to conclude that we should continue beta-blockers. I disagree. To properly interpret the ABYSS trial, you must consider trial procedures, components of the primary endpoint, and then compare ABYSS with REDUCE-AMI.
It’s also reasonable to have extremely pessimistic prior beliefs about post-MI beta-blockade because the evidence establishing benefit comes from trials conducted before urgent revascularization became the standard therapy.
ABYSS was a pragmatic open-label trial. The core problem with this design is that one of the components of the primary outcome (hospitalization for cardiovascular reasons) requires clinical judgment — and is therefore susceptible to bias, particularly in an open-label trial.
This becomes apparent when we look at the components of the primary outcome in the two arms of the trial (interrupt vs continue):
- For death, the rates were 4.1% vs 4.0%
- For MI, the rates were 2.5% vs 2.4%
- For stroke, the rates were 1.0% in both arms
- For CV hospitalization, the rates were 18.9% vs 16.6%
The higher rate of CV hospitalization alone drove the results of ABYSS. Death, MI, and stroke rates were nearly identical.
The most common reason for admission to the hospital in this category was angiography. In fact, the rate of angiography was 2.3 percentage points higher in the interruption arm, identical to the increase in the CV hospitalization component of the primary endpoint.
The results of ABYSS, therefore, were driven by higher rates of angiography in the interrupt arm.
You need not imply malfeasance to speculate that patients who had their beta-blocker stopped might be treated differently regarding hospital admissions or angiography than those who stayed on beta-blockers. Researchers from Imperial College London called such a bias in unblinded trials “subtraction anxiety and faith healing.”
Had the ABYSS investigators chosen the simpler, less bias-prone endpoints of death, MI, or stroke, their results would have been the same as REDUCE-AMI.
My Final Two Conclusions
I would conclude that interruption of beta-blockers at 1 year vs continuation in post-MI patients did not lead to an increase in death, MI, or stroke.
ABYSS, therefore, is consistent with REDUCE-AMI. Taken together, along with the pessimistic priors, these are important findings because they allow us to stop a medicine and reduce the work of being a patient.
My second conclusion concerns ways of knowing in medicine. I’ve long felt that randomized controlled trials (RCTs) are the best way to sort out causation. This idea led me to believe that medicine should have more RCTs rather than follow expert opinion or therapeutic fashion.
I’ve now modified my love of RCTs — a little. The ABYSS trial is yet another example of the need to be super careful with their design.
Something as seemingly simple as choosing what to measure can alter the way clinicians interpret and use the data.
So, let’s have (slightly) more trials, but we should be really careful in their design. Slow and careful is the best way to practice medicine. And it’s surely the best way to do research as well.
Dr. Mandrola, clinical electrophysiologist, Baptist Medical Associates, Louisville, Kentucky, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
First-Time Fathers Experience Period of High Psychological Risk
Anxiety and stress during fatherhood receive less research attention than do anxiety and stress during motherhood.
Longitudinal data tracking the evolution of men’s mental health following the birth of the first child are even rarer, especially in the French population. Only two studies of the subject have been conducted. They were dedicated solely to paternal depression and limited to the first 4 months post partum. Better understanding of the risk in the population can not only help identify public health issues, but also aid in defining targeted preventive approaches.
French researchers in epidemiology and public health sought to expand our knowledge of the mental health trajectories of new fathers using 9 years of data from the CONSTANCES cohort. Within this cohort, participants filled out self-administered questionnaires annually. They declared their parental status and the presence of mental illnesses. They also completed questionnaires to assess mental health, such as the Center for Epidemiologic Studies Depression Scale for depression and the General Health Questionnaire for depressive, anxious, and somatic disorders. Thresholds for each score were established to characterize the severity of symptoms. In addition, the researchers analyzed all factors (eg, sociodemographic, psychosocial, lifestyle, professional, family, or cultural) that potentially are associated with poor mental health and were available within the questionnaires.
The study included 6299 men who had their first child and for whom at least one mental health measure was collected during the follow-up period. These men had an average age of 38 years at inclusion; 88% lived with a partner, and 85% were employed. Overall, 7.9% of this male cohort self-reported a mental illness during the study, with rates of 5.6% before the child’s birth and 9.7% after. Anxiety affected 6.5% of the cohort and was more pronounced after the birth than before (7.8% vs 4.9%).
The rate of clinically significant symptoms averaged 23.2% during the study period, increasing from 18.3% before the birth to 25.2% after it. The discrepancy between the self-declared diagnoses of new fathers and the symptom-related scores highlights underreporting or insufficient awareness among men.
After conducting a latent class analysis, the researchers identified three homogeneous subgroups of men who had comparable mental health trajectories over time. The first group (90.3% of the cohort) maintained a constant and low risk for mental illnesses. The second (4.1%) presented a high and generally constant risk over time. Finally, 5.6% of the cohort had a temporarily high risk in the 2-4 years surrounding the birth.
The risk factors associated with being at a transiently high risk for mental illness were, in order of descending significance, not having a job, having had at least one negative experience during childhood, forgoing healthcare for financial reasons, and being aged 35-39 years (adjusted odds ratio [AOR] between 3.01 and 1.61). The risk factors associated with a high and constant mental illness risk were, in order of descending significance, being aged 60 years or older, not having a job, not living with a partner, being aged 40-44 years, and having other children in the following years (AOR between 3.79 and 1.85).
The authors noted that the risk factors for mental health challenges associated with fatherhood do not imply causality; the underlying mechanisms would need further study. They contended that French fathers, who on average are entitled to 2 weeks of paid paternity leave, may struggle to manage their time, professional responsibilities, and parenting duties. Consequently, they may experience dissatisfaction and difficulty seeking support, assistance, or a mental health diagnosis, especially in the face of a mental health risk to which they are less attuned than women.
This story was translated from Univadis France, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Setbacks Identified After Stopping Beta-Blockers
LONDON — It may not be advisable for patients with a history of myocardial infarction and preserved left ventricular function to discontinue long-term beta-blocker therapy, warn investigators.
In the randomized ABYSS trial, although there was no difference in death, MI, or stroke between patients who discontinued and those who continued taking beta-blockers, those who stopped taking the drugs had a higher rate of cardiovascular hospitalization.
Discontinuation was also associated with an increase in blood pressure and heart rate, without any improvement in quality of life.
The results, which were simultaneously published online in The New England Journal of Medicine, call into question current guidelines, which suggest that beta-blockers may be discontinued after 1 year in certain patient groups.
Beta-blockers have long been considered the standard of care for patients after MI, but trials showing the benefit of these drugs were conducted before the modern era of myocardial reperfusion and pharmacotherapy, which have led to sharp decreases in the risk for heart failure and for death after MI, Dr. Silvain explained.
This has led to questions about the add-on benefits of lifelong beta-blocker treatment for patients with MI and a preserved left ventricular ejection fraction and no other primary indication for beta-blocker therapy.
The ABYSS Trial
To explore this issue, the open-label, non-inferiority ABYSS trial randomly assigned 3698 patients with a history of MI to the discontinuation or continuation of beta-blocker treatment. All study participants had a left ventricular ejection fraction of at least 40%, were receiving long-term beta-blocker treatment, and had experienced no cardiovascular event in the previous 6 months.
At a median follow-up of 3 years, the primary endpoint — a composite of death, MI, stroke, and hospitalization for cardiovascular reasons — occurred more often in the discontinuation group than in the continuation group (23.8% vs 21.1%; hazard ratio, 1.16; 95% CI, 1.01-1.33). This did not meet the criteria for non-inferiority of discontinuation, compared with continuation, of beta-blocker therapy (P for non-inferiority = .44).
The difference in event rates between the two groups was driven by cardiovascular hospitalizations, which occurred more often in the discontinuation group than in the continuation group (18.9% vs 16.6%).
Other key results showed that there was no difference in quality of life between the two groups.
However, 6 months after randomization, there were increases in blood pressure and heart rate in the discontinuation group. Systolic blood pressure increased by 3.7 mm Hg and diastolic blood pressure increased by 3.9 mm Hg. Resting heart rate increased by 9.8 beats per minute.
“We were not able to show the non-inferiority of stopping beta-blockers in terms of cardiovascular events, [but we] showed a safety signal with this strategy of an increase in blood pressure and heart rate, with no improvement in quality of life,” Dr. Silvain said.
“While recent guidelines suggest it may be reasonable to stop beta-blockers in this population, after these results, I will not be stopping these drugs if they are being well tolerated,” he said.
Dr. Silvain said he was surprised that there was not an improvement in quality of life in the group that discontinued beta-blockers. “We are always told that beta-blockers have many side effects, so we expected to see an improvement in quality of life in the patients who stopped these drugs.”
One possible reason for the lack of improvement in quality of life is that the trial participants had been taking beta-blockers for several years. “We may have, therefore, selected patients who tolerate these drugs quite well. Those who had tolerance issues had probably already stopped taking them,” he explained.
In addition, the patient population had relatively high quality-of-life scores at baseline. “They were well treated and the therapies they were taking were well tolerated, so maybe it is difficult to improve quality of life further,” he said.
The REDUCE-AMI Trial
The ABYSS results appear at first to differ from results from the recent REDUCE-AMI trial, which failed to show the superiority of beta-blocker therapy, compared with no beta-blocker therapy, in acute MI patients with preserved ejection fraction.
But the REDUCE-AMI primary endpoint was a composite of death from any cause or new myocardial infarction; it did not include cardiovascular hospitalization, which was the main driver of the difference in outcomes in the ABYSS study, Dr. Silvain pointed out.
“We showed an increase in coronary cases of hospitalization with stopping beta-blockers, and you have to remember that beta-blockers were developed to reduce coronary disease,” he said.
‘Slightly Inconclusive’
Jane Armitage, MBBS, University of Oxford, England, the ABYSS discussant for the ESC Hot Line session, pointed out some limitations of the study, which led her to describe the result as “slightly inconclusive.”
The open-label design may have allowed some bias regarding the cardiovascular hospitalization endpoint, she said.
“The decision whether to admit a patient to [the] hospital is somewhat subjective and could be influenced by a physician’s knowledge of treatment allocation. That is why, ideally, we prefer blinded trials. I think there are questions there,” she explained.
She also questioned whether the non-inferiority margin could have been increased, given the higher-than-expected event rate.
More data on this issue will come from several trials that are currently ongoing, Dr. Armitage said.
The ABYSS and REDUCE-AMI trials together suggest that it is safe, with respect to serious cardiac events, to stop beta-blocker treatment in MI patients with preserved ejection fraction, writes Tomas Jernberg, MD, PhD, from the Karolinska Institute in Stockholm, Sweden, in an accompanying editorial.
However, “because of the anti-ischemic effects of beta-blockers, an interruption may increase the risk of recurrent angina and the need for rehospitalization,” he adds.
“It is prudent to wait for the results of additional ongoing trials of beta-blockers involving patients with MI and a preserved left ventricular ejection fraction before definitively updating guidelines,” Dr. Jernberg concludes.
The ABYSS trial was funded by the French Ministry of Health and the ACTION Study Group. Dr. Sylvain, Dr. Armitage, and Dr. Jernberg report no relevant financial relationships.
A version of this article appeared on Medscape.com.
LONDON — It may not be advisable for patients with a history of myocardial infarction and preserved left ventricular function to discontinue long-term beta-blocker therapy, warn investigators.
In the randomized ABYSS trial, although there was no difference in death, MI, or stroke between patients who discontinued and those who continued taking beta-blockers, those who stopped taking the drugs had a higher rate of cardiovascular hospitalization.
Discontinuation was also associated with an increase in blood pressure and heart rate, without any improvement in quality of life.
The results, which were simultaneously published online in The New England Journal of Medicine, call into question current guidelines, which suggest that beta-blockers may be discontinued after 1 year in certain patient groups.
Beta-blockers have long been considered the standard of care for patients after MI, but trials showing the benefit of these drugs were conducted before the modern era of myocardial reperfusion and pharmacotherapy, which have led to sharp decreases in the risk for heart failure and for death after MI, explained study investigator Johanne Silvain, MD, PhD, from Sorbonne University in Paris, France.
This has led to questions about the add-on benefits of lifelong beta-blocker treatment for patients with MI and a preserved left ventricular ejection fraction and no other primary indication for beta-blocker therapy.
The ABYSS Trial
To explore this issue, the open-label, non-inferiority ABYSS trial randomly assigned 3698 patients with a history of MI to the discontinuation or continuation of beta-blocker treatment. All study participants had a left ventricular ejection fraction of at least 40%, were receiving long-term beta-blocker treatment, and had experienced no cardiovascular event in the previous 6 months.
At a median follow-up of 3 years, the primary endpoint — a composite of death, MI, stroke, and hospitalization for cardiovascular reasons — occurred more often in the discontinuation group than in the continuation group (23.8% vs 21.1%; hazard ratio, 1.16; 95% CI, 1.01-1.33). This did not meet the criteria for non-inferiority of discontinuation, compared with continuation, of beta-blocker therapy (P for non-inferiority = .44).
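The noninferiority arithmetic behind this verdict can be sketched in a few lines. This is a simplified illustration, not the trial's actual analysis: the per-arm event counts below are back-calculated from the reported percentages (roughly 1849 patients per arm), and a plain Wald interval for the risk difference stands in for the prespecified time-to-event method, so the bound it produces only approximates the trial's reported worst case.

```python
from math import sqrt

def ni_wald(events1, n1, events2, n2, margin, z=1.96):
    """Risk difference (arm1 - arm2) with a Wald 95% CI and a
    check of the upper bound against an absolute noninferiority margin."""
    p1, p2 = events1 / n1, events2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    upper = rd + z * se
    # Noninferiority is declared only if even the worst plausible case
    # (the CI upper bound) stays below the margin.
    return rd, upper, upper < margin

# Back-calculated, approximate ABYSS numbers: ~1849 patients per arm, 23.8% vs 21.1%
rd, upper, noninferior = ni_wald(440, 1849, 390, 1849, margin=0.03)
print(f"risk difference {rd:.1%}, upper bound {upper:.1%}, noninferior: {noninferior}")
```

The point estimate of the risk difference (about 2.7 percentage points) sits inside a 3-percentage-point margin, but the upper confidence bound does not, which is why discontinuation could not be declared noninferior.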
The difference in event rates between the two groups was driven by cardiovascular hospitalizations, which occurred more often in the discontinuation group than in the continuation group (18.9% vs 16.6%).
Other key results showed that there was no difference in quality of life between the two groups.
However, 6 months after randomization, there were increases in blood pressure and heart rate in the discontinuation group. Systolic blood pressure increased by 3.7 mm Hg and diastolic blood pressure increased by 3.9 mm Hg. Resting heart rate increased by 9.8 beats per minute.
“We were not able to show the non-inferiority of stopping beta-blockers in terms of cardiovascular events, [but we] showed a safety signal with this strategy of an increase in blood pressure and heart rate, with no improvement in quality of life,” Dr. Silvain said.
“While recent guidelines suggest it may be reasonable to stop beta-blockers in this population, after these results, I will not be stopping these drugs if they are being well tolerated,” he said.
Silvain said he was surprised that there was not an improvement in quality of life in the group that discontinued beta-blockers. “We are always told that beta-blockers have many side effects, so we expected to see an improvement in quality of life in the patients who stopped these drugs.”
One possible reason for the lack of improvement in quality of life is that the trial participants had been taking beta-blockers for several years. “We may have, therefore, selected patients who tolerate these drugs quite well. Those who had tolerance issues had probably already stopped taking them,” he explained.
In addition, the patient population had relatively high quality-of-life scores at baseline. “They were well treated and the therapies they were taking were well tolerated, so maybe it is difficult to improve quality of life further,” he said.
The REDUCE-AMI Trial
The ABYSS results appear at first to differ from results from the recent REDUCE-AMI trial, which failed to show the superiority of beta-blocker therapy, compared with no beta-blocker therapy, in acute MI patients with preserved ejection fraction.
But the REDUCE-AMI primary endpoint was a composite of death from any cause or new myocardial infarction; it did not include cardiovascular hospitalization, which was the main driver of the difference in outcomes in the ABYSS study, Dr. Silvain pointed out.
“We showed an increase in coronary cases of hospitalization with stopping beta-blockers, and you have to remember that beta-blockers were developed to reduce coronary disease,” he said.
‘Slightly Inconclusive’
Jane Armitage, MBBS, University of Oxford, England, the ABYSS discussant for the ESC Hot Line session, pointed out some limitations of the study, which led her to describe the result as “slightly inconclusive.”
The open-label design may have allowed some bias regarding the cardiovascular hospitalization endpoint, she said.
“The decision whether to admit a patient to [the] hospital is somewhat subjective and could be influenced by a physician’s knowledge of treatment allocation. That is why, ideally, we prefer blinded trials. I think there are questions there,” she explained.
She also questioned whether the non-inferiority margin could have been increased, given the higher-than-expected event rate.
More data on this issue will come from several trials that are currently ongoing, Dr. Armitage said.
The ABYSS and REDUCE-AMI trials together suggest that it is safe, with respect to serious cardiac events, to stop beta-blocker treatment in MI patients with preserved ejection fraction, writes Tomas Jernberg, MD, PhD, from the Karolinska Institute in Stockholm, Sweden, in an accompanying editorial.
However, “because of the anti-ischemic effects of beta-blockers, an interruption may increase the risk of recurrent angina and the need for rehospitalization,” he adds.
“It is prudent to wait for the results of additional ongoing trials of beta-blockers involving patients with MI and a preserved left ventricular ejection fraction before definitively updating guidelines,” Dr. Jernberg concludes.
The ABYSS trial was funded by the French Ministry of Health and the ACTION Study Group. Dr. Silvain, Dr. Armitage, and Dr. Jernberg report no relevant financial relationships.
A version of this article appeared on Medscape.com.
Can Endurance Exercise Be Harmful?
In 490 BC, Pheidippides (or possibly Philippides) ran from Athens to Sparta to ask for military aid against the invading Persian army, then back to Athens, then off to the battlefield of Marathon, then back to Athens to announce the army’s victory, after which he promptly died. The story, if it is to be believed (there is some doubt among historians), raises an interesting question: Are some forms of exercise dangerous?
Running a marathon is a lot of work. The “worst parade ever,” as one spectator described it, is not without its risks. As a runner myself, I know that it doesn’t take much to generate a bloody sock at the end of a long run.
But when most people think about the risks of exercise, they mean the cardiovascular risks, such as sudden deaths during marathons, probably because of the aforementioned ancient Greek’s demise. The reality is more reassuring. An analysis of 10 years’ worth of data from US marathons and half-marathons found that out of 10.9 million runners, there were 59 cardiac arrests, an incidence rate of 0.54 per 100,000 participants. Others have found incidence rates in the same range. An analysis of the annual Marine Corps and Twin Cities marathons found a sudden death rate of 0.002%.
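Those incidence figures are simple events-per-participant rates; as a quick sanity check of the numbers quoted above, on a common per-100,000 scale:

```python
# Cardiac arrests per 100,000 participants in the 10-year US registry data.
arrests, runners = 59, 10_900_000
rate_per_100k = arrests / runners * 100_000
print(round(rate_per_100k, 2))  # 0.54

# The Marine Corps/Twin Cities sudden-death rate of 0.002%, on the same scale:
sudden_death_per_100k = 0.002 * 1000  # 0.002% of 100,000 participants
print(sudden_death_per_100k)  # 2.0 (same order of magnitude)
```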
Marathon runners do sometimes require medical attention. In the Twin Cities cohort, 25 out of every 1000 finishers needed it, but 90% of their problems were mild: mostly dehydration, vasovagal syncope, hyperthermia, and exhaustion, with musculoskeletal problems and skin abrasions making up the rest. Objectively, long-distance running is fairly safe.
Running and Coronary Calcium
Then a study came along suggesting that marathon runners have more coronary artery calcium (CAC). In 2008, German researchers compared 108 healthy male marathon runners over 50 years of age with Framingham risk–matched controls. The marathoners had a higher median CAC score (36 vs 12; P =.02), but scores across the board were quite low, and not all studies were in agreement. The MESA study and another from Korea found an inverse relationship between physical activity and coronary calcium, but they compared sedentary people with vigorous exercisers, not specifically marathoners.
Two later studies, published in 2017, generally corroborated that endurance exercise was associated with higher calcium — with some caveats. A group from the Netherlands looked at lifelong exercise volume and compared men who accumulated > 2000 MET-min/week with those who exercised < 1000 MET-min/week. Again, the analysis was limited to men, and CAC scores, though statistically different, were still very low (9.4 vs 0; P =.02). Importantly, in men with coronary plaques, the more active group had less mixed plaque and more calcified plaque.
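For context on the Dutch study's exposure unit: MET-minutes per week are simply an activity's MET intensity multiplied by the weekly minutes spent on it. A hypothetical illustration (the ~9.8-MET value for running is an approximate figure from the Compendium of Physical Activities, not a number from the study itself):

```python
# MET-min/week = activity intensity in METs x minutes of that activity per week.
def met_minutes_per_week(mets: float, minutes: float) -> float:
    return mets * minutes

# Running (~9.8 METs) for 3.5 hours a week:
weekly = met_minutes_per_week(9.8, 210)
print(round(weekly))  # 2058, which would fall in the > 2000 MET-min/week group
```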
A UK study of middle-aged masters-level athletes at low cardiovascular risk had similar findings. Most of the study population (70%) were men, and 77% were runners (not all were marathoners). Overall, the male athletes had not only more plaque but more calcified plaque than their sedentary peers, even though most male athletes (60%) had a CAC score of zero.
The findings from these two studies were interpreted as reassuring. They confirmed that athletes are a generally low-risk group with low calcium scores, and although they might have more plaque and coronary calcium on average, it tends to be the more benign calcified type.
Masters at Heart
But the 2023 Master@Heart study challenged that assertion. It analyzed lifelong endurance athletes, late-onset endurance athletes (those who came to the game later in life), and healthy nonathletic controls. The study found more coronary stenoses in lifelong athletes, but the breakdown of calcified vs noncalcified vs mixed plaques was the same across groups, contradicting the idea that exercise exerts its protective effect by calcifying, and thereby stabilizing, those plaques. The silver lining was that vulnerable plaques (defined by high-risk features) were less common in the lifelong athletes, but these were generally rare across the entire population.
Whether Master@Heart is groundbreaking or an outlier depends on your point of view. In 2024, a study from Portugal suggested that the relationship between exercise and coronary calcification is more complicated. Among 105 male veteran athletes, a high volume of exercise was associated with more coronary atherosclerosis in those at higher cardiovascular risk, but it tended to be protective in those deemed lower risk. In fact, the high-volume exercise group had fewer individuals with a CAC score > 100 (4% vs 16%; P =.029), though again, the vast majority had low CAC scores.
A limitation of all these studies is that they had cross-sectional designs, measuring coronary calcium at a single point in time and relying on questionnaires and patient recall to determine lifelong exposure to exercise. Recall bias could have been a problem, and exercise patterns vary over time. It’s not unreasonable to wonder whether people at higher cardiovascular risk should start exercising to mitigate that risk. Granted, they might not start running marathons, but many of these studies looked only at physical activity levels. A study that measured the increase (or stability) of coronary calcium over time would be more helpful.
Prior research (in men again) showed that high levels of physical activity were associated with more coronary calcium, but not with all-cause or cardiovascular mortality. But it too looked only at a single time point. The most recent study to add to the body of evidence included data on nearly 9000 men and women and found that higher exercise volume did not correlate with CAC progression over a mean follow-up of 7.8 years. The study measured physical activity of any variety and included physically taxing sports like golf (without a cart), so it was not an assessment of the dangers of endurance exercise specifically.
Outstanding Questions and Bananas
Ultimately, many questions remain. Is the lack of risk seen in women a spurious finding because they are underrepresented in most studies, or might exercise affect men and women differently? Is it valid to combine studies on endurance exercise with those looking at physical activity more generally? How accurate are self-reports of exercise? Could endurance exercisers be using performance-enhancing drugs that are confounding the associations? Are people who engage in more physical activity healthier or just trying to mitigate a higher baseline cardiovascular risk? Why do they give out bananas at the end of marathons given that there are better sources of potassium?
We have no randomized trials on the benefits and risks of endurance exercise. Even if you could get ethics approval, one imagines there would be few volunteers. In the end, we must make do with observational data and remember that coronary calcifications are a surrogate endpoint.
When it comes to hard endpoints, an analysis of French Tour de France participants found a lower risk for both cardiovascular and cancer deaths compared with the general male population. So perhaps the most important take-home message is one that has been said many times: Beware of surrogate endpoints. And for those contemplating running a marathon, I am forced to agree with the person who wrote the sign I saw during my first race. It does seem like a lot of work for a free banana.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal, Quebec, Canada. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
A limitation of all these studies is that they had cross-sectional designs, measuring coronary calcium at a single point in time and relying on questionnaires and patient recall to determine lifelong exposure to exercise. Recall bias could have been a problem, and exercise patterns vary over time. It’s not unreasonable to wonder whether people at higher cardiovascular risk should start exercising to mitigate that risk. Granted, they might not start running marathons, but many of these studies looked only at physical activity levels. A study that measured the increase (or stability) of coronary calcium over time would be more helpful.
Prior research (in men again) showed that high levels of physical activity were associated with more coronary calcium, but not with all-cause or cardiovascular mortality. But it too looked only at a single time point. The most recent study added to the body of evidence included data on nearly 9000 men and women and found that higher exercise volume did not correlate with CAC progression over the mean follow-up of 7.8 years. The study measured physical activity of any variety and included physically taxing sports like golf (without a cart). So it was not an assessment of the dangers of endurance exercise.
Outstanding Questions and Bananas
Ultimately, many questions remain. Is the lack of risk seen in women a spurious finding because they are underrepresented in most studies, or might exercise affect men and women differently? Is it valid to combine studies on endurance exercise with those looking at physical activity more generally? How accurate are self-reports of exercise? Could endurance exercisers be using performance-enhancing drugs that are confounding the associations? Are people who engage in more physical activity healthier or just trying to mitigate a higher baseline cardiovascular risk? Why do they give out bananas at the end of marathons given that there are better sources of potassium?
We have no randomized trials on the benefits and risks of endurance exercise. Even if you could get ethics approval, one imagines there would be few volunteers. In the end, we must make do with observational data and remember that coronary calcifications are a surrogate endpoint.
When it comes to hard endpoints, an analysis of French Tour de France participants found a lower risk for both cardiovascular and cancer deaths compared with the general male population. So perhaps the most important take-home message is one that has been said many times: Beware of surrogate endpoints. And for those contemplating running a marathon, I am forced to agree with the person who wrote the sign I saw during my first race. It does seem like a lot of work for a free banana.
Dr. Labos is a cardiologist at Hôpital Notre-Dame, Montreal, Quebec, Canada. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Olanzapine Eases Chemo-Induced Nausea and Vomiting
TOPLINE:
Adding olanzapine to standard antiemetic prophylaxis improved complete response rates, reduced the need for rescue medications, and improved quality of life in patients with solid malignant tumors at moderate risk for chemotherapy-induced nausea and vomiting, a new analysis finds.
METHODOLOGY:
- Chemotherapy-induced nausea and vomiting can impact quality of life in patients with cancer. Olanzapine — an atypical antipsychotic agent — has been approved as part of antiemetic prophylaxis in patients receiving chemotherapy regimens that come with a high risk for nausea and vomiting; the agent may also help those at more moderate risk for chemotherapy-induced nausea and vomiting.
- Researchers evaluated whether receiving antiemetic prophylaxis with olanzapine reduced nausea and vomiting and improved complete response rates in patients at more moderate risk for chemotherapy-induced nausea and vomiting.
- In the phase 3 randomized study, 544 patients (median age, 51 years) with solid malignant tumors received either oxaliplatin-, irinotecan-, or carboplatin-based chemotherapy regimens at three institutes in India and were randomly assigned to antiemetic prophylaxis that included dexamethasone, aprepitant, and palonosetron with or without 10 mg olanzapine.
- The primary endpoint was the rate of complete response — defined as no vomiting, a nausea score < 5 on the visual analog scale, and no use of rescue medications during the first 120 hours of chemotherapy. Secondary endpoints included the proportion of patients who experienced nausea or chemotherapy-induced nausea and vomiting and who received rescue medications.
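The primary endpoint is a conjunction of three conditions over the first 120 hours. A minimal sketch (the function name and signature are mine, not the trial's) makes the logic explicit:

```python
def complete_response(vomited: bool, nausea_vas: int, used_rescue_meds: bool) -> bool:
    """Primary endpoint over the first 120 h of chemotherapy:
    no vomiting, nausea score < 5 on the visual analog scale,
    and no rescue medications."""
    return (not vomited) and (nausea_vas < 5) and (not used_rescue_meds)

print(complete_response(vomited=False, nausea_vas=3, used_rescue_meds=False))  # True
print(complete_response(vomited=False, nausea_vas=7, used_rescue_meds=False))  # False: nausea >= 5
```

Failing any one of the three conditions counts as a non-response, which is why the secondary endpoints break the components out individually.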
TAKEAWAY:
- Overall, patients who received olanzapine had a significantly higher complete response rate (91%) than those not receiving olanzapine (82%). This effect was significant after 25 hours (92% vs 83%; P = .001) but not within the first 24 hours of the chemotherapy cycle (96% vs 94%; P = .53).
- The addition of olanzapine improved complete response rates in patients who received oxaliplatin-based chemotherapy (odds ratio [OR], 0.36) and carboplatin-based chemotherapy (OR, 0.23) but not irinotecan-based chemotherapy (OR, 2.36; 95% CI, 0.23-24.25).
- Olanzapine led to better nausea control, with 96% of patients achieving a nausea score < 5 on the visual analog scale vs 87% in the observation group (P < .001), and to better control of chemotherapy-induced nausea and vomiting overall (96% vs 91%; P = .02). Olanzapine also reduced the need for rescue medications — only 4% of patients in the olanzapine group received rescue medications vs 11% of patients not receiving olanzapine — and improved patients’ quality of life.
- However, 10% of the patients in the olanzapine group experienced grade 1 somnolence, whereas none in the observation group reported this side effect.
IN PRACTICE:
“Olanzapine 10 mg, combined with aprepitant, palonosetron, and dexamethasone, improved complete response rates compared with no olanzapine,” the authors concluded. “These findings suggest that this regimen could be considered as one of the standards of antiemetic therapy” in patients receiving chemotherapy regimens associated with a moderate risk for chemotherapy-induced nausea and vomiting.
SOURCE:
The study, led by Vikas Ostwal, DM, Tata Memorial Centre, Mumbai, India, was published online in JAMA Network Open.
LIMITATIONS:
The lack of a placebo group could affect the interpretation of the results. The study evaluated only a 10-mg dose of olanzapine but did not consider a lower (5-mg) dose. Other potential side effects of olanzapine, such as increased appetite or constipation, were not reported. The study predominantly involved patients with gastrointestinal cancers receiving oxaliplatin-containing regimens, which may limit the generalizability of the findings.
DISCLOSURES:
The study was supported by grants from Intas Pharmaceuticals, Zydus Lifesciences, and Dr. Reddy’s Laboratories to Tata Memorial Centre. Several authors reported receiving grants and having other ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
New AFib Guidelines Address Underlying Illness, Comorbidities
LONDON — Updated guidelines for the management of atrial fibrillation released by the European Society of Cardiology are revamping the approach to care for this complex, multifactorial disease.
It is not just appropriate to place the same emphasis on the control of comorbidities as on the rhythm disturbance, it is critical, said Dr. Van Gelder, who served as chair of the ESC-AF guidelines task force.
Comorbidities are the drivers of both the onset and recurrence of atrial fibrillation, and a dynamic approach to comorbidities is “central for the success of AF management.”
Class I Recommendation
In fact, on the basis of overwhelming evidence, a class I recommendation has been issued for a large number of goals in the comorbidity and risk factor management step of atrial fibrillation management, including those for hypertension, components of heart failure, obesity, diabetes, alcohol consumption, and exercise.
Sodium-glucose cotransporter-2 (SGLT2) inhibitors “should be offered to all patients with AF,” according to Dr. Van Gelder, who identified this as a new class I recommendation.
Patients who are not managed aggressively for the listed comorbidities ultimately face “treatment failure, poor patient outcomes, and a waste of healthcare resources,” she said.
Control of sleep apnea is also noted as a key target, although Dr. Van Gelder acknowledged that the supporting evidence allows for only a class IIb recommendation.
Control of comorbidities is not a new idea. In the 2023 joint guideline, led by a consortium of professional groups, including the American Heart Association (AHA) and the American College of Cardiology (ACC), the control of comorbidities, including most of those identified in the new ESC guidelines, was second in a list of 10 key take-home messages.
However, the new ESC guidelines have prioritized comorbidity management by listing it first in each of the specific patient-care pathways developed to define optimized care.
These pathways, defined in algorithms for newly diagnosed AF, paroxysmal AF, and persistent AF, always start with the assessment of comorbidities, followed by step A — avoiding stroke — largely with anticoagulation.
Direct oral anticoagulants should be used, “except in those with a mechanical valve or mitral stenosis,” Dr. Van Gelder said. This includes, essentially, all patients with a CHA2DS2-VASc score of 2 or greater, and it should be “considered” in those with a score of 1.
The ESC framework has been identified with the acronym AF-CARE, in which the C stands for comorbidities.
In the A step of the framework, identifying and treating all modifiable bleeding risk factors in AF patients is a class I recommendation. On the basis of a class III recommendation, she cautioned against withholding anticoagulants because of CHA2DS2-VASc risk factors alone. Rather, Dr. Van Gelder called the decision to administer or withhold anticoagulation — like all decisions — one that should be individualized in consultation with the patient.
For reducing AF symptoms and rhythm control, the specific pathways diverge for newly diagnosed AF, paroxysmal AF, and persistent AF. Like all of the guidelines, the specific options for symptom management and AF ablation are color coded, with green signifying level 1 evidence.
The evaluation and dynamic reassessment step refers to the need to periodically assess patients for new modifiable risk factors related to comorbidities, risk for stroke, risk for bleeding, and risk for AF.
The management of risk factors for AF has long been emphasized in guidelines, but a previous focus on AF with attention to comorbidities has been replaced by a focus on comorbidities with an expectation of more durable AF control. The success of this pivot is based on multidisciplinary care, chosen in collaboration with the patient, to reduce or eliminate the triggers of AF and the risks of its complications.
Pathways Are Appropriate for All Patients
A very important recommendation — and this is new — is “to treat all our patients with atrial fibrillation, whether they are young or old, men or women, Black or White, or at high or low risk, according to our patient-centered integrated AF-CARE approach,” Dr. Van Gelder said.
The changes reflect a shared appreciation for the tight relation between the control of comorbidities and the control of AF, according to José A. Joglar, MD, professor of cardiac electrophysiologic research at the University of Texas Southwestern Medical Center in Dallas. Dr. Joglar was chair of the writing committee for the joint 2023 AF guidelines released by the AHA, ACC, the American College of Clinical Pharmacy, and the Heart Rhythm Society.
“It is increasingly clear that AF in many cases is the consequence of underlying risk factors and comorbidities, which cannot be separated from AF alone,” Dr. Joglar explained in an interview.
This was placed first “to emphasize the importance of viewing AFib as a complex disease that requires a holistic, multidisciplinary approach to care, as opposed to being viewed just as a rhythm abnormality,” he said.
A version of this article first appeared on Medscape.com.
FROM ESC 2024
New Blood Pressure Guidelines Simplified, Lower Treatment Target
LONDON — Simplified and more aggressive targets are among the significant changes to the updated hypertension guidelines released by the European Society of Cardiology.
Although the updated guidelines, presented here at the ESC Congress, continue to define hypertension as a systolic BP of at least 140 mm Hg and a diastolic BP of at least 90 mm Hg, there is a new category — elevated BP. This is defined as a systolic BP of 120 mm Hg to 139 mm Hg or a diastolic BP of 70 mm Hg to 89 mm Hg, and cardiovascular risk assessment is advised to guide treatment, particularly in patients with a BP of at least 130/80 mm Hg.
The guidelines also introduce new recommendations for lifestyle options to help lower BP, including changes to exercise advice and the addition of potassium supplementation. And for the first time, the ESC guidelines provide recommendations for the use of renal denervation to treat hypertension in certain circumstances.
The guidelines were produced by an international panel, led by Bill McEvoy, MB BCh, from the University of Galway, Ireland, and Rhian Touyz, MB BCh, PhD, from McGill University in Montreal.
Three Categories of Blood Pressure
There are now three categories for BP classification — non-elevated (< 120/70 mm Hg), elevated (120 mm Hg to 139 mm Hg/70 mm Hg to 89 mm Hg), and hypertension (≥ 140/90 mm Hg) — Dr. McEvoy reported during a session on the new guidelines here at ESC.
The emphasis on out-of-office BP measurement is stronger than in previous guidelines, but office measurement will still be used, he said.
All patients in the hypertension category qualify for treatment, whereas those in the new elevated BP category will be subject to cardiovascular risk stratification before a treatment decision is made.
Patients in the elevated BP category who also have moderate or severe chronic kidney disease, established cardiovascular disease, diabetes, or familial hypercholesterolemia are among those considered at increased risk for cardiovascular disease, as are patients with an estimated 10-year cardiovascular risk of 10% or higher. In such patients with a confirmed BP of at least 130/80 mm Hg, after 3 months of lifestyle intervention, pharmacologic treatment is recommended.
“This new category of elevated blood pressure recognizes that people do not go from normal blood pressure to hypertensive overnight,” Dr. McEvoy said. “It is, in most cases, a steady gradient of change, and different subgroups of patients — for example, those at a higher risk of developing cardiovascular disease — could benefit from more intensive treatment before their blood pressure reaches the traditional threshold of hypertension.”
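The three-category scheme and the treatment trigger described above reduce to a simple decision rule. The sketch below is a minimal illustration of that rule, assuming confirmed office-equivalent readings; the function names are hypothetical and the risk flag stands in for the guideline's full cardiovascular risk assessment.

```python
# Minimal sketch of the ESC 2024 BP categories and treatment trigger
# described above. Illustrative only; names are hypothetical.

def classify_bp(systolic, diastolic):
    """Classify a confirmed BP reading into the three ESC categories."""
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"        # >= 140/90 mm Hg
    if systolic >= 120 or diastolic >= 70:
        return "elevated"            # 120-139 / 70-89 mm Hg
    return "non-elevated"            # < 120/70 mm Hg

def treatment_recommended(systolic, diastolic, high_cv_risk):
    """Pharmacologic treatment per the summary above.

    high_cv_risk stands in for the guideline's risk assessment (CKD, CVD,
    diabetes, familial hypercholesterolemia, or 10-year risk >= 10%).
    """
    category = classify_bp(systolic, diastolic)
    if category == "hypertension":
        return True                  # all hypertensive patients qualify
    if category == "elevated":
        # Drug therapy after 3 months of lifestyle intervention, for
        # confirmed BP >= 130/80 mm Hg in patients at increased risk.
        return high_cv_risk and (systolic >= 130 or diastolic >= 80)
    return False
```

For example, a confirmed reading of 132/78 mm Hg triggers treatment only when the risk assessment is positive, which is exactly the stratification step the guideline adds for the elevated category.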
New Lower Target
The major change in target pressures in these guidelines is based on new clinical trial data that confirm that lower pressures lead to lower cardiovascular event rates, resulting in the new systolic BP target of 120 mm Hg to 129 mm Hg for most patients receiving antihypertensive medications.
This systolic target represents a major change from previous European guidelines, Dr. McEvoy said, which have generally recommended that patients be treated to a target of less than 140/90 mm Hg and, only after that has been reached, then treated to a target of less than 130/80 mm Hg (a two-step approach).
“This change is driven by new trial evidence confirming that more intensive blood pressure treatment targets reduce cardiovascular outcomes across a broad spectrum of eligible patients,” Dr. McEvoy said.
There are, however, several caveats to this recommendation, including the requirement that treatment to this target be well tolerated; more lenient targets can be considered in people with symptomatic orthostatic hypotension, those 85 years and older, and those with moderate to severe frailty or a limited life expectancy. For these patients, the guidelines recommend a target “that is as low as reasonably achievable.”
More in Line With US Guidelines
The new European guidelines are now more in line with the American guidelines, said Eugene Yang, MD, from the University of Washington in Seattle, who is chair of the Hypertension Writing Group at the American College of Cardiology.
“These new European guidelines have thoughtfully used the latest study data to simplify recommendations for a specific lower blood pressure target. This is a step forward. There is now a greater alignment of European and US guidelines. This is good to reduce confusion and build consensus across the world,” he said.
Both sets of guidelines now recommend a BP target of less than 130/80 mm Hg for most people.
“I think the Europeans have now embraced this more aggressive target because there are many more studies now showing that these lower blood pressure levels do lead to a reduction in cardiovascular events,” Dr. Yang explained. “When the last European guidelines came out, there was only SPRINT. Now there are several more studies showing similar results.”
New Lifestyle Advice
The updated recommendation of 75 minutes of vigorous-intensity aerobic exercise per week has been added as an alternative to the previous recommendation of at least 2.5 hours per week of moderate-intensity aerobic exercise. This should be complemented with low- or moderate-intensity dynamic or isometric resistance training two to three times a week.
It is also recommended that people with hypertension, but without moderate or advanced chronic kidney disease, increase potassium intake with salt substitutes or diets rich in fruits and vegetables.
Renal Denervation Included for First Time
For the first time, the guidelines include the option of renal denervation for the treatment of hypertension — at medium- to high-volume centers — for patients with resistant hypertension that is uncontrolled despite a three-drug combination.
However, renal denervation is not recommended as a first-line treatment because of the lack of evidence of a benefit in cardiovascular outcomes. It is also not recommended for patients with highly impaired renal function or secondary causes of hypertension.
Dr. Yang said he approves of the inclusion of a frailty assessment in the new guidelines and less aggressive targets for people who are in poor health and older than age 85 years, but added that, “on the whole, they have less age-specific stratification than before, which is a significant change, and a good one in my view.”
Again, this is like the American guidelines, which have no age cutoffs and a target of less than 130/80 mm Hg for all, with the caveat that clinical judgment may be needed for individuals who are institutionalized, he added.
Dr. Yang said he was not as keen on the requirement for a cardiovascular risk assessment to guide treatment decisions for people with a systolic BP in the 130 mm Hg to 139 mm Hg range, although this is also included in the current American guidelines.
“As a clinician, I think this complicates things a bit too much and, as such, will be a barrier to treatment. In my view, blood pressure treatment recommendations need to be as simple as possible, so I think we still have some work to do there,” he said.
A version of this article first appeared on Medscape.com.
FROM ESC 2024
Diet Rich in Processed Foods Linked to Elevated Risk for Colorectal Cancer
TOPLINE:
A dietary pattern high in processed foods and low in fiber-rich foods, defined by its link to a colorectal cancer (CRC)–related gut microbial signature, was associated with an increased risk for CRC in three large US cohorts.
METHODOLOGY:
- To date, no known studies have investigated how a dietary pattern (rather than just individual foods or nutrients) specifically directed at CRC-related microbes may contribute to an increased CRC risk.
- Using stool metagenomes and dietary information from 307 men and 212 women, researchers identified and then validated a dietary pattern specifically linked to an established CRC-related gut microbial signature, which they termed the CRC Microbial Dietary Score (CMDS).
- They then investigated the association between CMDS and the risk for CRC in 259,200 participants (50,637 men and 208,563 women) from three large US cohorts where health professionals provided detailed information on various lifestyle factors over long follow-up periods.
- Researchers also analyzed the risk for CRC on the basis of the presence of gut bacteria, such as F nucleatum, pks+ E coli, and ETBF, in the tumor tissues of the participants who underwent surgical resection for CRC.
TAKEAWAY:
- The CMDS was characterized by high intake of processed foods and low intake of fiber-rich foods.
- Over 6,467,378 person-years assessed in the three US cohorts, 3854 cases of incident CRC were documented, with 1172, 1096, and 1119 cases measured for F nucleatum, pks+ E coli, and ETBF, respectively.
- A higher CMDS was associated with an increased risk for CRC after adjusting for putative CRC risk factors (adjusted hazard ratio [HR], 1.25; Ptrend < .001).
- The association between CMDS and the risk for CRC was stronger for tumors with detectable levels of F nucleatum (HR, 2.51; Ptrend < .001), pks+ E coli (HR, 1.68; Ptrend = .005), and ETBF (HR, 2.06; Ptrend = .016).
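The person-year figures above translate into a crude incidence rate, which is a useful sanity check on the cohort numbers. A back-of-the-envelope calculation (not reported in the study itself):

```python
# Crude incidence rate implied by the cohort figures quoted above.
cases = 3_854                 # incident CRC cases
person_years = 6_467_378      # total follow-up across the three cohorts

rate_per_100k = cases / person_years * 100_000
print(round(rate_per_100k, 1))  # ~59.6 incident CRC cases per 100,000 person-years
```

This crude rate pools all participants; the reported hazard ratios are adjusted comparisons across CMDS levels, not raw rate ratios.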
IN PRACTICE:
“A dietary pattern with a low consumption of processed foods may help prevent colorectal cancer through modulation of the gut microbiome. The dietary pattern modulating the colorectal cancer–related gut microbial signature may particularly help prevent tumoral microbial positive colorectal cancer, which tends to have a worse prognosis,” the authors wrote.
SOURCE:
This study, led by Kai Wang and Chun-Han Lo, Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, was published online in Gastroenterology.
LIMITATIONS:
The study’s observational design may have limited the ability to establish causality between dietary patterns and the risk for CRC. The inclusion of participants who were all health professionals from a predominantly White US population may have limited the generalizability of the findings to other populations. The reliance on self-reported dietary data may have introduced recall bias and affected the accuracy of the dietary pattern assessed.
DISCLOSURES:
This work was supported by various sources, including the National Institutes of Health and the Cancer Research UK Grand Challenge Award. One author served as a consultant for some pharmaceutical companies, and another received funding from various sources, both unrelated to this study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Systemic Sclerosis Without Scleroderma Has Unique Severity, Prognosis
TOPLINE:
Systemic sclerosis sine scleroderma (ssSSc) affects nearly 10% of patients with systemic sclerosis (SSc), with substantial internal organ involvement. Despite lacking skin fibrosis, patients with ssSSc are at risk for interstitial lung disease, pulmonary arterial hypertension, and cardiac dysfunction.
METHODOLOGY:
- Driven by a fatal case of ssSSc with cardiac involvement, researchers aimed to evaluate its prevalence, severity, and prognosis.
- They conducted a systematic literature review and qualitative synthesis of 35 studies on SSc cohorts, published between 1976 and 2023, that comprised data on the prevalence of SSc with or without organ involvement.
- A total of 25,455 patients with SSc were included, with 2437 identified as having ssSSc.
- Studies used various classification criteria for SSc, including the 1980 American Rheumatism Association criteria, 2001 LeRoy and Medsger criteria, and 2013 American College of Rheumatology/European League Against Rheumatism criteria, while ssSSc was classified on the basis of the definitions provided by Rodnan and Fennell and by Poormoghim.
- The analysis focused on ssSSc prevalence, reclassification rates, and internal organ involvement, including interstitial lung disease, pulmonary arterial hypertension, scleroderma renal crisis, and cardiac dysfunction.
TAKEAWAY:
- The overall mean prevalence of ssSSc was 9.6%, with a range of 0%-22.9% across different studies.
- Reclassification rates of ssSSc into limited cutaneous SSc (lcSSc) or diffuse cutaneous SSc (dcSSc) varied substantially, with some studies reporting rates as high as 27.8% over a 4-year follow-up period.
- The mean frequency of internal organ involvement in patients with ssSSc was 46% for interstitial lung disease, 15% for pulmonary arterial hypertension, 5% for scleroderma renal crisis, and 26.5% for cardiac dysfunction — mainly diastolic dysfunction.
- The survival rates in patients with ssSSc were similar to those with lcSSc and better than those with dcSSc.
IN PRACTICE:
“The results presented herein suggest a slightly more severe yet similar clinical picture of ssSSc compared to lcSSc [limited cutaneous SSc], while dcSSc [diffuse cutaneous SSc] remains the most severe disease form,” the authors wrote. “Although classification criteria should not impact appropriate management of patients, updated ssSSc subclassification criteria, which will take into account time from disease onset, should be considered,” they further added.
SOURCE:
The study was led by Anastasios Makris, MD, First Department of Propaedeutic & Internal Medicine, National and Kapodistrian University of Athens, Medical School, Athens, Greece. It was published online on August 15, 2024, in The Journal of Rheumatology.
LIMITATIONS:
The variability in the classification criteria across different studies may affect the comparability of results. The included studies lacked data on cardiac MRI, restricting the identification of myocardial fibrosis patterns and characterization of cardiac disease activity.
DISCLOSURES:
The study did not receive any specific funding. Some authors disclosed having a consultancy relationship, serving as speakers, and receiving funding for research from multiple companies. One author reported having a patent and being a cofounder of CITUS AG.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
GIST Rates Rise, With Black Patients Facing Higher Mortality
TOPLINE:
The incidence of gastrointestinal stromal tumors (GISTs) rose across most organ sites between 2000 and 2019, and Black patients faced significantly higher mortality than White patients.
METHODOLOGY:
- A steep increase in GIST incidence was observed from 2000 to 2005, largely due to the reclassification of sarcomas as GISTs. The classification of GISTs has changed over time, with all GISTs now considered malignant instead of benign, likely further increasing the incidence. However, updated data on GIST trends are lacking.
- This study assessed recent trends in GIST incidence and survival outcomes across different racial and ethnic groups using data from the National Cancer Institute’s SEER database, including the SEER-22 and SEER-17 registries.
- Researchers evaluated annual percentage changes and incidences among 23,001 patients from SEER-22 (mean age, 64 years) and median overall and cancer-specific survival rates in 12,109 patients from SEER-17 (mean age, 64 years).
- More than half of the patients in both cohorts were White, 17.8%-19.6% were Black, 11.6%-12.3% were Hispanic, and 9.7%-13.2% were Asian or Pacific Islander.
TAKEAWAY:
- The rates of GISTs increased annually between 2000 and 2019 for all organ sites except the colon, where the rate decreased by 0.2% per year. Esophageal GISTs increased by 7.3% per year, gastric by 5.1%, small intestine by 2.7%, and rectal by 1.9%.
- Black patients had significantly lower median overall survival than other racial groups. For example, the median survival for Black patients with esophageal GISTs was 3.6 years vs 15.3 years for White patients (hazard ratio [HR], 6.4; 95% CI, 2.0-20.3). Similar patterns were seen for gastric GISTs — 9.1 years for Black patients vs 11.8 years for White patients (HR, 1.4). GIST-specific mortality was also higher in Black patients for these two organ sites.
- Additionally, Asian or Pacific Islander patients with esophageal GISTs had lower survival rates, with a median of 8.8 years (HR, 5.6) vs 15.3 years for White patients. Similarly, American Indian or Alaska Native patients with gastric GIST had lower survival rates, with a median of 8.5 years (HR, 1.6) vs 11.8 years for White patients.
- Over the 20-year study period, 5-year relative survival rates improved for most patient groups but remained the lowest among American Indian or Alaska Native patients across various GIST sites.
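To put the annual figures above in perspective, a constant annual percentage change compounds multiplicatively over the study period. The sketch below is a rough back-of-the-envelope illustration (not a calculation performed in the study): it converts the reported 7.3% yearly rise in esophageal GIST incidence into a cumulative change across the 19 annual intervals from 2000 to 2019.

```python
def cumulative_change(apc_percent: float, years: int) -> float:
    """Return the multiplicative change implied by a constant
    annual percentage change (APC) compounded over `years` intervals."""
    return (1 + apc_percent / 100) ** years

# Esophageal GIST incidence rose ~7.3% per year over 2000-2019
# (19 annual intervals), implying roughly a 3.8-fold increase.
factor = cumulative_change(7.3, 19)
print(f"{factor:.1f}-fold")
```

This assumes the annual percentage change was constant over the whole period, which is a simplification; in practice such trends are estimated with segmented (joinpoint) regression and can vary across intervals.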
IN PRACTICE:
“We observed a continued increase in the incidence of GISTs after 2005” with a “substantial increase in the last two decades,” the authors wrote. Therefore, “future research should explore lifestyle-related or environmental factors underlying the unfavorable trends” which “could not fully be explained by coding reclassification and advances in diagnostic technologies,” they further added.
SOURCE:
The study was led by Christian S. Alvarez, PhD, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Rockville, Maryland. It was published online on August 19, 2024, in JAMA Network Open.
LIMITATIONS:
A lack of individual-level data on socioeconomic factors and healthcare access could have influenced the findings. Although the SEER registries used standardized codes and procedures for classifying the data on race and ethnicity, misclassification was possible. Additionally, data on prognostic factors were incomplete or missing, which limited the inferences of the analysis.
DISCLOSURES:
This work was supported by the National Institutes of Health Intramural Research Program of the National Cancer Institute. Two authors reported receiving grants or personal fees and having other ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.