
COVID hospitalizations climb for fourth straight week

Weekly new hospitalizations for COVID-19 have climbed for the fourth straight week. 

Nationwide, 10,320 people were hospitalized during the week ending Aug. 5, up from 9,026 the week prior, which is about a 14% week-over-week increase, according to newly updated Centers for Disease Control and Prevention figures. Hospitalizations reached an all-time low of about 6,300 per week in July.
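The reported week-over-week figure follows directly from the two weekly totals; a quick check (variable names are illustrative):

```python
# Week-over-week change in new COVID-19 hospitalizations (CDC figures).
prior_week = 9026    # week ending the week prior
latest_week = 10320  # week ending Aug. 5

pct_change = (latest_week - prior_week) / prior_week * 100
print(f"{pct_change:.1f}% week-over-week increase")  # 14.3%
```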

The CDC stopped tracking the number of people infected with the virus earlier in 2023 and now relies on hospitalization data to gauge the current impact of COVID-19.

“We have to remember that we’re still dealing with numbers that are far less than what we’ve seen for the pandemic,” John Brownstein, PhD, a professor of biomedical informatics at Harvard Medical School, Boston, told ABC News. “We have to zoom out to look at our experience for the entire pandemic, to understand that what we’re dealing with now is far from any crisis that we’ve experienced with previous waves.”

The current predominant strain remains EG.5, and experts believe it is not more severe or more contagious than other recent variants.  

Dr. Brownstein told ABC News that one reason for the concern about rising COVID metrics, despite their overall low levels, is that a surge occurred in the summer of 2021 with the dangerous Delta variant.

“But each new variant so far that has come through has subsequently had less of a population impact,” he said. “Now, is it possible we may see one in the future that is worthy, a real concern? Absolutely. But overall, we’ve seen a dampening of effect over the last several variants that have come through.”

A version of this article appeared on WebMD.com.

Cancer rates rise among people under age 50

People under the age of 50 are becoming more likely to be diagnosed with cancer, according to comprehensive new data.

From 2010 to 2019, the rate of cancer diagnoses rose from 100 to 103 cases per 100,000 people, according to the study, published in JAMA Network Open. The increases were driven by jumps in certain types of cancer and within specific age, racial, and ethnic groups. Researchers analyzed data for more than 560,000 people under age 50 who were diagnosed with cancer during the 9-year period.
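In relative terms, the rise from 100 to 103 cases per 100,000 people is a 3% increase over the study period; a quick check (variable names are illustrative):

```python
# Relative increase in the overall cancer diagnosis rate among people under 50.
rate_2010 = 100.0  # cases per 100,000 people
rate_2019 = 103.0  # cases per 100,000 people

relative_increase = (rate_2019 - rate_2010) / rate_2010 * 100
print(f"{relative_increase:.0f}% relative increase")  # 3%
```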

Breast cancer remained the most common type of cancer to affect younger people, while the most striking increase was seen in gastrointestinal cancers. The rate of people with GI cancers rose 15%.

Cancer diagnoses became more common among women, while the rate among men under age 50 declined by 5%. When the researchers analyzed the data based on a person’s race or ethnicity, they found that cancer rates were increasing among people who are Asian, Pacific Islander, Hispanic, American Indian, or Alaska Native. The rate of cancer among Black people declined and was steady among White people.

The only age group that saw cancer rates increase was 30- to 39-year-olds. One of the top concerns for younger people with cancer is that there is a greater risk for the cancer to spread.

The cancer rate has been declining among older people, the researchers noted. One doctor told The Washington Post that it’s urgent that the reasons for the increases among young people be understood.

“If we don’t understand what’s causing this risk and we can’t do something to change it, we’re afraid that as time goes on, it’s going to become a bigger and bigger challenge,” said Paul Oberstein, MD, director of the gastrointestinal medical oncology program at NYU Langone’s Perlmutter Cancer Center, New York. He was not involved in the study.

It’s unclear why cancer rates are rising among young people, but some possible reasons are obesity, alcohol use, smoking, poor sleep, sedentary lifestyle, and things in the environment like pollution and carcinogens, the Post reported.

A version of this article first appeared on WebMD.com.

ADHD meds cut hospitalization risk in borderline personality disorder patients

Treatment with medication often used for attention-deficit/hyperactivity disorder (ADHD) was associated with lower risk of psychiatric hospitalization, all-cause hospitalization, or death in adults with borderline personality disorder, based on data from more than 17,000 individuals.

Although most patients with borderline personality disorder (BPD) receive psychopharmacological treatment, clinical guidance and outcomes data for specific medication use in these patients are lacking, wrote Johannes Lieslehto, MD, PhD, of the University of Eastern Finland, Niuvankuja, and colleagues.

In a study published in Acta Psychiatrica Scandinavica, the researchers, using national databases in Sweden, identified 17,532 adults with BPD who were treated with medications between 2006 and 2018.

Medications included benzodiazepines, antipsychotics, and antidepressants, as well as medications often used for ADHD: clozapine, lisdexamphetamine, bupropion, and methylphenidate. The mean age of the study population was 29.8 years and 2,649 were men.

The primary outcomes were psychiatric hospitalization (which served as an indication of treatment failure), all-cause hospitalization, or death.

Overall, treatment with benzodiazepines, antipsychotics, and antidepressants was associated with increased risk of psychiatric rehospitalization, with hazard ratios of 1.38, 1.19, and 1.18, respectively, and with increased risk of all-cause hospitalization or death (HR 1.37, HR 1.21, HR 1.17, respectively).

By contrast, treatment with ADHD medication was associated with decreased risk of psychiatric hospitalization (HR = 0.88), as well as a decreased risk of all-cause hospitalization or death (HR = 0.86).

Specifically, clozapine, lisdexamphetamine, bupropion, and methylphenidate were associated with decreased risk of psychiatric rehospitalization, with hazard ratios of 0.54, 0.79, 0.84, and 0.90, respectively.

Treatment with mood stabilizers had no significant impact on outcomes.
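For readers less used to hazard ratios, each ratio can be read as an approximate percent change in the hazard of the outcome; a minimal sketch using the values reported above (the dictionary and names are illustrative):

```python
# Interpret reported hazard ratios as percent change in the hazard of
# psychiatric rehospitalization (HR < 1: lower risk; HR > 1: higher risk).
hazard_ratios = {
    "benzodiazepines": 1.38,
    "antipsychotics": 1.19,
    "antidepressants": 1.18,
    "ADHD medication (overall)": 0.88,
    "clozapine": 0.54,
    "lisdexamphetamine": 0.79,
    "bupropion": 0.84,
    "methylphenidate": 0.90,
}

for treatment, hr in hazard_ratios.items():
    pct = (hr - 1) * 100
    direction = "higher" if pct > 0 else "lower"
    print(f"{treatment}: {abs(pct):.0f}% {direction} hazard")
```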

BPD patients treated with ADHD medications also may exhibit ADHD symptoms, the researchers wrote in their discussion. However, “Although BPD and ADHD partially overlap in symptoms such as impulsivity and emotion dysregulation, previous efforts to investigate the efficacy of ADHD medication treatment in BPD are scarce,” and randomized, controlled trials are needed to determine whether these medications should be given to BPD patients without comorbid ADHD symptoms, they said.

The findings were limited by several factors including the lack of clinical parameters on symptom severity, quality of life, and level of function, and premature prescribing of medication (protopathic bias) may have affected the results, the researchers noted.

The results were strengthened by the large sample size and long follow-up, which increases the generalizability to real-world patients, and suggest that many pharmacological treatments for BPD may not improve outcomes, the researchers said. However, “even in the presence of possible protopathic bias, treatment with lisdexamphetamine, bupropion, methylphenidate, and clozapine was associated with improved outcomes, encouraging further research on these treatments,” they said.

The study was supported by the Finnish Ministry of Social Affairs and Health and the Academy of Finland. Dr. Lieslehto had no financial conflicts to disclose.

Could colchicine replace aspirin after PCI for ACS?

Dual antiplatelet therapy (DAPT) consisting of aspirin plus a P2Y12 inhibitor has been the standard of care to prevent thrombotic events in patients with acute coronary syndrome (ACS) undergoing percutaneous coronary intervention (PCI).

A new pilot study suggests that aspirin can be discontinued on the day after the PCI, and colchicine, an anti-inflammatory agent, could be added to reduce the risk for ischemic events in these patients, while mitigating the increased bleeding risk associated with aspirin.

Investigators conducted a pilot trial in ACS patients treated with drug-eluting stents (DES) who received low-dose colchicine the day after PCI, together with P2Y12 inhibitor (ticagrelor or prasugrel) maintenance therapy. Aspirin use was discontinued.

At 3 months, only 1% of the patients experienced stent thrombosis, and only 1 patient showed high platelet reactivity. Moreover, at 1 month, high-sensitivity C-reactive protein (hs-CRP) and platelet reactivity both decreased, pointing to reduced inflammation.

“In ACS patients undergoing PCI, it is feasible to discontinue aspirin therapy and administer low-dose colchicine on the day after PCI in addition to ticagrelor or prasugrel P2Y12 inhibitors,” write Seung-Yul Lee, MD, CHA Bundang Medical Center, CHA University, Seongnam, South Korea, and colleagues. “This approach is associated with favorable platelet function and inflammatory profiles.”

The study was published online in JACC: Cardiovascular Interventions.

Safety without compromised efficacy

The U.S. Food and Drug Administration recently approved colchicine 0.5-mg tablets (Lodoco, Agepha Pharma) as the first anti-inflammatory drug shown to reduce the risk for myocardial infarction, stroke, coronary revascularization, and cardiovascular death in adult patients with either established atherosclerotic disease or multiple risk factors for cardiovascular disease. It targets residual inflammation as an underlying cause of cardiovascular events.

Patients after PCI are generally treated using DAPT, but given the risk for increased bleeding associated with aspirin – especially when used long-term – there is a “need to identify strategies associated with a more favorable safety profile without compromising efficacy,” the authors write.

Previous research has yielded mixed results on discontinuing aspirin after 1-3 months and maintaining patients on P2Y12 inhibitor monotherapy. But one trial found colchicine to be effective in reducing recurrent ischemia, and its benefit may be greater with early initiation in the hospital.

In this new study, researchers tested a “strategy that substitutes aspirin with colchicine during the acute phase to maximize the treatment effect of reducing recurrent ischemia and bleeding,” they write. The Mono Antiplatelet and Colchicine Therapy (MACT) single-arm, open-label proof-of-concept study was designed to investigate this approach.

The researchers studied 200 patients with non–ST-segment elevation ACS and ST-segment elevation myocardial infarction (STEMI) who underwent PCI with DES (mean [SD] age, 61.4 [10.7] years; 90% male; 100% of Asian ethnicity), who were receiving either ticagrelor or prasugrel plus a loading dose of aspirin.

On the day after PCI, aspirin was discontinued, and low-dose colchicine (0.6 mg once daily) was administered in addition to the P2Y12 inhibitor. In the case of staged PCI, it was performed under the maintenance of colchicine and ticagrelor or prasugrel.

No other antiplatelet or anticoagulant agents were permitted.

Patients underwent platelet function testing using the VerifyNow P2Y12 assay before discharge. Levels of hs-CRP were measured at admission, at 24 and 48 hours after PCI, and at 1-month follow-up. Clinical follow-up was performed at 1 and at 3 months.

The primary outcome was stent thrombosis within 3 months of follow-up. Secondary outcomes included all-cause mortality, MI, revascularization, major bleeding, a composite of cardiac death, target vessel MI, or target lesion revascularization, P2Y12 reaction units (PRUs), and change in hs-CRP levels between 24 hours post-PCI and 1-month follow-up.

The role of inflammation

Of the original 200 patients, 190 completed the full protocol and were available for follow-up.

The primary outcome occurred in only two patients, one of whom had not been adherent to antiplatelet medications.

“Although bleeding occurred in 36 patients, major bleeding occurred in only 1 patient,” the authors report.

The level of platelet reactivity at discharge was 27 ± 42 PRUs. Most patients (91%) met the criteria for low platelet reactivity, while only 0.5% met the criteria for high platelet reactivity. Platelet reactivity was similar, regardless of which P2Y12 inhibitor (ticagrelor or prasugrel) the patients were taking.

In all patients, the level of inflammation was “reduced considerably” over time: After 1 month, the hs-CRP level decreased from 6.1 mg/L (interquartile range [IQR], 2.6-15.9 mg/L) at 24 hours after PCI to 0.6 mg/L (IQR, 0.4-1.2 mg/L; P < .001).

The prevalence of high-inflammation criteria, defined as hs-CRP ≥ 2 mg/L, decreased significantly, from 81.8% at 24 hours after PCI to 11.8% at 1 month (P < .001).
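The drop in median hs-CRP reported above corresponds to roughly a 90% relative reduction; a quick check (variable names are illustrative):

```python
# Relative reduction in median hs-CRP from 24 hours post-PCI to 1 month.
hscrp_24h = 6.1      # mg/L, 24 hours after PCI
hscrp_1_month = 0.6  # mg/L, 1-month follow-up

relative_reduction = (hscrp_24h - hscrp_1_month) / hscrp_24h * 100
print(f"{relative_reduction:.0f}% relative reduction")  # 90%
```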

Major bleeding was rare, they report, with a 3-month incidence of 0.5%.

“Inflammation plays a fundamental role in the development and progression of the atherothrombotic process,” the authors explain. A series of factors also trigger “an intense inflammatory response” in the acute phase of MI, which may lead to adverse myocardial remodeling. In the present study, inflammatory levels were rapidly reduced.

They noted several limitations. For example, all enrolled patients were Asian and were at relatively low bleeding and ischemic risk. “Although ticagrelor or prasugrel is effective regardless of ethnicity, clinical data supporting this de-escalation strategy are limited,” they state. Additionally, there was no control group for comparison.

The findings warrant further investigation, they conclude.

Promising but preliminary

Commenting for this news organization, Francesco Costa, MD, PhD, interventional cardiologist and assistant professor, University of Messina, Sicily, Italy, said he thinks it’s “too early for extensive clinical translation of these findings.”

Rather, larger and more extensive randomized trials are “on their way to give more precise estimates regarding the risks and benefits of early aspirin withdrawal in ACS.”

However, added Dr. Costa, who was not involved with the current research, “in this setting, adding colchicine early looks very promising to mitigate potential thrombotic risk without increasing bleeding risk.”

In the meantime, the study “provides novel insights on early aspirin withdrawal and P2Y12 monotherapy in an unselected population, including [those with] STEMI,” said Dr. Costa, also the coauthor of an accompanying editorial. The findings “could be of particular interest for those patients at extremely high bleeding risk or who are truly intolerant to aspirin, a scenario in which options are limited.”

This study was supported by the Cardiovascular Research Center, Seoul, South Korea. Dr. Lee reports no relevant financial relationships. The other authors’ disclosures are listed on the original paper. Dr. Costa has served on an advisory board for AstraZeneca and has received speaker fees from Chiesi Farmaceutici. His coauthor reports no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Dual antiplatelet therapy (DAPT) consisting of aspirin plus a P2Y12 inhibitor has been the standard of care to prevent thrombotic events in patients with acute coronary syndrome (ACS) undergoing percutaneous coronary intervention (PCI).

A new pilot study suggests that aspirin can be discontinued on the day after the PCI, and colchicine, an anti-inflammatory agent, could be added to reduce the risk for ischemic events in these patients, while mitigating the increased bleeding risk associated with aspirin.

Investigators conducted a pilot trial in ACS patients treated with drug-eluting stents (DES) who received low-dose colchicine the day after PCI, together with P2Y12 inhibitor (ticagrelor or prasugrel) maintenance therapy. Aspirin use was discontinued.

At 3 months, only 1% of the patients experienced stent thrombosis, and only 1 patient showed high platelet reactivity. Moreover, at 1 month, high-sensitivity C-reactive protein (hs-CRP) and platelet reactivity both decreased, pointing to reduced inflammation.

“In ACS patients undergoing PCI, it is feasible to discontinue aspirin therapy and administer low-dose colchicine on the day after PCI in addition to ticagrelor or prasugrel P2Y12 inhibitors,” write Seung-Yul Lee, MD, CHA Bundang Medical Center, CHA University, Seongnam, South Korea, and colleagues. “This approach is associated with favorable platelet function and inflammatory profiles.”

The study was published online in JACC: Cardiovascular Interventions.
 

Safety without compromised efficacy

The U.S. Food and Drug Administration recently approved colchicine 0.5-mg tablets (Lodoco, Agepha Pharma) as the first anti-inflammatory drug shown to reduce the risk for myocardial infarction, stroke, coronary revascularization, and cardiovascular death in adult patients with either established atherosclerotic disease or multiple risk factors for cardiovascular disease. It targets residual inflammation as an underlying cause of cardiovascular events.

Patients after PCI are generally treated using DAPT, but given the risk for increased bleeding associated with aspirin – especially when used long-term – there is a “need to identify strategies associated with a more favorable safety profile without compromising efficacy,” the authors write.

Previous research on discontinuing aspirin after 1-3 months and maintaining patients on P2Y12 inhibitor monotherapy has yielded mixed results. But one trial found colchicine to be effective in reducing recurrent ischemia, and its benefit may be greater with early, in-hospital initiation.

In this new study, researchers tested a “strategy that substitutes aspirin with colchicine during the acute phase to maximize the treatment effect of reducing recurrent ischemia and bleeding,” they write. The Mono Antiplatelet and Colchicine Therapy (MACT) single-arm, open-label proof-of-concept study was designed to investigate this approach.

The researchers studied 200 patients with non–ST-segment elevation ACS or ST-segment elevation myocardial infarction (STEMI) who underwent PCI with DES (mean [SD] age, 61.4 [10.7] years; 90% male; 100% Asian), all of whom received either ticagrelor or prasugrel plus a loading dose of aspirin.

On the day after PCI, aspirin was discontinued, and low-dose colchicine (0.6 mg once daily) was administered in addition to the P2Y12 inhibitor. Staged PCI, when performed, was done while colchicine and ticagrelor or prasugrel were maintained.

No other antiplatelet or anticoagulant agents were permitted.

Patients underwent platelet function testing using the VerifyNow P2Y12 assay before discharge. Levels of hs-CRP were measured at admission, at 24 and 48 hours after PCI, and at 1-month follow-up. Clinical follow-up was performed at 1 and at 3 months.

The primary outcome was stent thrombosis within 3 months of follow-up. Secondary outcomes included all-cause mortality; MI; revascularization; major bleeding; a composite of cardiac death, target vessel MI, or target lesion revascularization; P2Y12 reaction units (PRUs); and change in hs-CRP levels between 24 hours post-PCI and 1-month follow-up.
 

 

 

The role of inflammation

Of the original 200 patients, 190 completed the full protocol and were available for follow-up.

The primary outcome occurred in only two patients, one of whom had not been adherent to antiplatelet medications.

“Although bleeding occurred in 36 patients, major bleeding occurred in only 1 patient,” the authors report.

The level of platelet reactivity at discharge was 27 ± 42 PRUs. Most patients (91%) met the criteria for low platelet reactivity, while only 0.5% met the criteria for high platelet reactivity. Platelet reactivity was similar, regardless of which P2Y12 inhibitor (ticagrelor or prasugrel) the patients were taking.

In all patients, the level of inflammation was “reduced considerably” over time: After 1 month, the hs-CRP level decreased from 6.1 mg/L (interquartile range [IQR], 2.6-15.9 mg/L) at 24 hours after PCI to 0.6 mg/L (IQR, 0.4-1.2 mg/L; P < .001).

The prevalence of high-inflammation criteria, defined as hs-CRP ≥ 2 mg/L, decreased significantly, from 81.8% at 24 hours after PCI to 11.8% at 1 month (P < .001).

Major bleeding was rare, they report, with a 3-month incidence of 0.5%.
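As a sanity check, the headline percentages above follow directly from the counts reported in this article. The short sketch below redoes that arithmetic in Python; note that using the 190 completers as the denominator for the stent thrombosis rate is our assumption, since the paper's exact denominators are not restated here.

```python
# Back-of-the-envelope checks of figures quoted above (no patient data used).
# Assumption: the 190 patients who completed follow-up form the denominator
# for the stent thrombosis rate.

stent_thrombosis_pct = round(2 / 190 * 100, 1)
print(stent_thrombosis_pct)  # 1.1 -- i.e., the "only 1%" reported at 3 months

# Median hs-CRP: 6.1 mg/L at 24 hours post-PCI -> 0.6 mg/L at 1 month.
hscrp_relative_drop_pct = round((6.1 - 0.6) / 6.1 * 100, 1)
print(hscrp_relative_drop_pct)  # 90.2 -- roughly a 90% relative reduction

# High inflammation (hs-CRP >= 2 mg/L): 81.8% -> 11.8% of patients.
high_inflammation_drop = round(81.8 - 11.8, 1)
print(high_inflammation_drop)  # 70.0 percentage points
```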

“Inflammation plays a fundamental role in the development and progression of the atherothrombotic process,” the authors explain. A series of factors also trigger “an intense inflammatory response” in the acute phase of MI, which may lead to adverse myocardial remodeling. In the present study, inflammatory levels were rapidly reduced.

They noted several limitations. For example, all enrolled patients were Asian and were at relatively low bleeding and ischemic risk. “Although ticagrelor or prasugrel is effective regardless of ethnicity, clinical data supporting this de-escalation strategy are limited,” they state. Additionally, there was no control group for comparison.

The findings warrant further investigation, they conclude.
 

Promising but preliminary

Commenting for this news organization, Francesco Costa, MD, PhD, interventional cardiologist and assistant professor, University of Messina, Sicily, Italy, said he thinks it’s “too early for extensive clinical translation of these findings.”

Rather, larger and more extensive randomized trials are “on their way to give more precise estimates regarding the risks and benefits of early aspirin withdrawal in ACS.”

However, added Dr. Costa, who was not involved with the current research, “in this setting, adding colchicine early looks very promising to mitigate potential thrombotic risk without increasing bleeding risk.”

In the meantime, the study “provides novel insights on early aspirin withdrawal and P2Y12 monotherapy in an unselected population, including [those with] STEMI,” said Dr. Costa, also the coauthor of an accompanying editorial. The findings “could be of particular interest for those patients at extremely high bleeding risk or who are truly intolerant to aspirin, a scenario in which options are limited.”

This study was supported by the Cardiovascular Research Center, Seoul, South Korea. Dr. Lee reports no relevant financial relationships. The other authors’ disclosures are listed on the original paper. Dr. Costa has served on an advisory board for AstraZeneca and has received speaker fees from Chiesi Farmaceutici. His coauthor reports no relevant financial relationships.

A version of this article appeared on Medscape.com.


FROM JACC: CARDIOVASCULAR INTERVENTIONS

Pain 1 year after MI tied to all-cause mortality

Patients reporting moderate or extreme pain a year after a myocardial infarction (MI) – even pain due to other health conditions – are more likely to die within the next 8 years than those without post-MI pain, new research suggests.

In the analysis of post-MI health data for more than 18,300 Swedish adults, those with moderate pain were 35% more likely to die from any cause during follow-up, compared with those with no pain, and those with extreme pain were more than twice as likely to die.

Furthermore, pain was a stronger predictor of mortality than smoking.

“For a long time, pain has been regarded as merely a symptom of disease rather than a disease” in its own right, Linda Vixner, PT, PhD, of Dalarna University in Falun, Sweden, said in an interview.

Updated definitions of chronic pain in the ICD-11, as well as a recent study using data from the UK Biobank showing that chronic pain is associated with an increased risk of cardiovascular disease, prompted the current study, which looks at the effect of pain on long-term survival after an MI.

“We did not expect that pain would have such a strong impact on the risk of death, and it also surprised us that the risk was more pronounced than that of smoking,” Dr. Vixner said. “Clinicians should consider pain an important cardiovascular risk factor.”

The study was published online in the Journal of the American Heart Association.
 

‘Experienced pain’ prognostic

The investigators analyzed data from the SWEDEHEART registry on 18,376 patients who had an MI in 2004-2013. The mean age of patients was 62 years, and 75% were men. Follow-up extended up to 8.5 years (median, 3.37 years).

Self-reported levels of experienced pain according to the EuroQol five-dimension instrument were recorded 12 months after hospital discharge.

Moderate pain was reported by 38.2% of patients and extreme pain by 4.5%.

In the extreme pain category, women were overrepresented (7.5% of women vs. 3.6% of men), as were current smokers and patients with diabetes, previous MI, previous stroke, previous percutaneous coronary intervention, non-ST-segment–elevation MI, and any kind of chest pain. Patients classified as physically inactive were also overrepresented in this category.

In addition, those with extreme pain had a higher body mass index and waist circumference 12 months after hospital discharge.

Most (73%) of the 7,889 patients who reported no pain at the 2-month follow-up after MI were also pain-free at the 12-month follow-up, and 65% of those experiencing pain at 2 months were also experiencing pain at 12 months.

There were 1,067 deaths. The adjusted hazard ratio was 1.35 for moderate pain and 2.06 for extreme pain.
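To connect those hazard ratios to the "35% more likely" and "more than twice as likely" phrasing used in this article: for a hazard ratio above 1, the excess hazard relative to the reference (no-pain) group is (HR − 1) × 100%. A minimal, purely illustrative sketch:

```python
# Translate an adjusted hazard ratio (HR) into the "% more likely to die"
# phrasing used in this article. Illustrative arithmetic only; the HRs are
# the study's reported point estimates, not recomputed from data.
def hr_to_excess_risk_pct(hr: float) -> int:
    """Excess hazard, in percent, relative to the reference group."""
    return round((hr - 1) * 100)

print(hr_to_excess_risk_pct(1.35))  # 35 -- moderate pain: 35% higher hazard
print(hr_to_excess_risk_pct(2.06))  # 106 -- extreme pain: i.e., >2x the hazard
```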

As noted, pain was a stronger mortality predictor than smoking: C-statistics for pain were 0.60, and for smoking, 0.55.
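For readers unfamiliar with the C-statistic: it is the probability that, of a randomly chosen pair of patients with different outcomes, the one who died had the higher risk score, so 0.5 is chance-level discrimination and 1.0 is perfect. The sketch below computes it on small synthetic data; the scores and outcomes are invented for illustration and are not from the study.

```python
# Minimal concordance (C-statistic) calculation on synthetic data, to show
# what values like 0.60 (pain) vs. 0.55 (smoking) measure. Hypothetical
# inputs only; the study's data are not reproduced here.
def c_statistic(scores, events):
    """Fraction of (event, non-event) pairs in which the event case has the
    higher risk score; ties count as half."""
    pairs = concordant = 0.0
    for si, ei in zip(scores, events):
        for sj, ej in zip(scores, events):
            if ei == 1 and ej == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical risk scores; deaths (1) tend to have higher scores.
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
events = [1, 1, 0, 1, 0, 0]
print(round(c_statistic(scores, events), 2))  # 0.89
```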

“Clinicians managing patients after MI should recognize the need to consider experienced pain as a prognostic factor comparable to persistent smoking and to address this when designing individually adjusted [cardiac rehabilitation] and secondary prevention treatments,” the authors write.

Pain should be assessed at follow-up after MI, they add, and, as Dr. Vixner suggested, it should be “acknowledged as an important risk factor.”
 

 

 

Managing risks

“These findings parallel prior studies and my own clinical experience,” American Heart Association volunteer expert Gregg C. Fonarow, MD, interim chief of the division of cardiology at the University of California, Los Angeles, and director, Ahmanson-UCLA Cardiomyopathy Center, told this news organization.

“There are many potential causes for patient-reported pain in the year after a heart attack,” he said, including a greater cardiovascular risk burden, more comorbid conditions, less physical activity, and chronic use of nonsteroidal anti-inflammatory medications or opioids for pain control – all of which can contribute to the increased risk of mortality.

Factors beyond those evaluated and adjusted for in the observational study may contribute to the observed associations, he added. “Socioeconomic factors were not accounted for [and] there was no information on the types, doses, and frequency of pain medication use.”

“Clinicians managing patients with prior MI should carefully assess experienced pain and utilize this information to optimize risk factor control recommendations, inform treatment decisions, and consider in terms of prognosis,” he advised.

Further studies should evaluate whether the associations hold true for other patient populations, Dr. Fonarow said. “In addition, intervention trials could evaluate if enhanced management strategies in these higher-risk patients with self-reported pain can successfully lower the mortality risk.”

Dr. Vixner sees a role for physical activity in lowering the mortality risk.

“One of the core treatments for chronic pain is physical activity,” she said. “It positively influences quality of life, activities of daily living, pain intensity, and overall physical function, and reduces the risk of social isolation” and cardiovascular diseases.

Her team recently developed the “eVISualisation of physical activity and pain” (eVIS) intervention, which aims to promote healthy physical activity levels in persons living with chronic pain. The intervention is currently being evaluated in an ongoing registry-based, randomized controlled trial.

The study was supported by Svenska Försäkringsföreningen, Dalarna University, and Region Dalarna. Dr. Vixner and coauthors have reported no relevant financial relationships. Dr. Fonarow has disclosed consulting for Abbott, Amgen, AstraZeneca, Bayer, Cytokinetics, Eli Lilly, Johnson & Johnson, Medtronic, Merck, Novartis, and Pfizer.

A version of this article first appeared on Medscape.com.

FROM THE JOURNAL OF THE AMERICAN HEART ASSOCIATION

Better than dialysis? Artificial kidney could be the future

Nearly 90,000 patients in the United States are waiting for a lifesaving kidney transplant, yet only about 25,000 kidney transplants were performed last year. Thousands die each year while they wait. Others are not suitable transplant candidates.

Half a million people are on dialysis, the only transplant alternative for those with kidney failure. This greatly impacts their work, relationships, and quality of life.

Researchers from The Kidney Project hope to solve this public health crisis with a futuristic approach: an implantable bioartificial kidney. That hope is slowly approaching reality. Early prototypes have been tested successfully in preclinical research and clinical trials could lie ahead.

This news organization spoke with the two researchers who came up with the idea: nephrologist William Fissell, MD, of Vanderbilt University in Nashville, Tenn., and Shuvo Roy, PhD, a biomedical engineer at the University of California, San Francisco. This interview has been edited for length and clarity.
 

Question: Could you summarize the clinical problem with chronic kidney disease?

Dr. Fissell:
Dialysis treatment, although lifesaving, is incomplete. Healthy kidneys do a variety of things that dialysis cannot provide. Transplant is absolutely the best remedy, but donor organs are vanishingly scarce. Our goal has been to develop a mass-produced, universal donor kidney to render the issue of scarcity – scarcity of time, scarcity of resources, scarcity of money, scarcity of donor organs – irrelevant.

Do you envision your implantable, bioartificial kidney as a bridge to transplantation? Or can it be even more, like a bionic organ, as good as a natural organ and thus better than a transplant?

Dr. Roy:
We see it initially as a bridge to transplantation or as a better option than dialysis for those who will never get a transplant. We’re not trying to create the “Six Million Dollar Man.” The goal is to keep patients off dialysis – to deliver some, but probably not all, of the benefits of a kidney transplant in a mass-produced device that anybody can receive.

Dr. Fissell: The technology is aimed at people in stage 5 renal disease, the final stage, when kidneys are failing, and dialysis is the only option to maintain life. We want to make dialysis a thing of the past, put dialysis machines in museums like the iron lung, which was so vital to keeping people alive several decades ago but is mostly obsolete today.

How did you two come up with this idea? How did you get started working together?

Dr. Roy:
I had just begun my career as a research biomedical engineer when I met Dr. William Fissell, who was then contemplating a career in nephrology. He opened my eyes to the problems faced by patients affected by kidney failure. Through our discussions, we quickly realized that while we could improve dialysis machines, patients needed and deserved something better – a treatment that improves their health while also allowing them to keep a job, travel readily, and consume food and drink without restrictions. Basically, something that works more like a kidney transplant.

How does the artificial kidney differ from dialysis?

Dr. Fissell:
Dialysis is an intermittent stop-and-start treatment. The artificial kidney is continuous, around-the-clock treatment. There are a couple of advantages to that. The first is that you can maintain your body’s fluid balance. In dialysis, you get rid of 2-3 days’ worth of fluid in a couple of hours, and that’s very stressful to the heart and maybe to the brain as well. The second advantage is that patients will be able to eat a normal diet. Some waste products that are byproducts of our nutritional intake are slow to leave the body. So in dialysis, we restrict the diet severely and add medicines to soak up extra phosphorus. With a continuous treatment, you can balance excretion and intake.

The other aspect is that dialysis requires an immense amount of disposables. Hundreds of liters of water per patient per treatment, hundreds of thousands of dialysis cartridges and IV bags every year. The artificial kidney doesn’t need a water supply, disposable sorbent, or cartridges.
 

How does the artificial kidney work?

Dr. Fissell:
Just like a healthy kidney. We have a unit that filters the blood so that red blood cells, white blood cells, platelets, antibodies, albumin – all the good stuff that your body worked hard to synthesize – stays in the blood, but a watery soup of toxins and waste is separated out. In a second unit, called the bioreactor, kidney cells concentrate those wastes and toxins into urine.

Dr. Roy: We used a technology called silicon micro-machining to invent an entirely new membrane that mimics a healthy kidney’s filters. It filters the blood just using the patient’s heart as a pump. No electric motors, no batteries, no wires. This lets us have something that’s completely implanted.

We also developed a cell culture of kidney cells that function in an artificial kidney. Normally, cells in a dish don’t fully adopt the features of a cell in the body. We looked at the literature around 3-D printing of organs. We learned that, in addition to fluid flow, stiff scaffolds, like cell culture dishes, trigger specific signals that keep the cells from functioning. We overcame that by looking at the physical microenvironment of the cells – not the hormones and proteins, but instead the fundamentals of the laboratory environment. For example, most organs are soft, yet plastic lab dishes are hard. By using tools that replicated the softness and fluid flow of a healthy kidney, remarkably, these cells functioned better than on a plastic dish.
 

Would patients need immunosuppressive or anticoagulation medication?

Dr. Fissell:
They wouldn’t need either. The structure and chemistry of the device prevents blood from clotting. And the membranes in the device are a physical barrier between the host immune system and the donor cells, so the body won’t reject the device.

What is the state of the technology now?

Dr. Fissell:
We have shown the function of the filters and the function of the cells, both separately and together, in preclinical in vivo testing. What we now need to do is construct clinical-grade devices and complete sterility and biocompatibility testing to initiate a human trial. That’s going to take between $12 million and $15 million in device manufacturing.

So it’s more a matter of money than time until the first clinical trials?

Dr. Roy: Yes, exactly. We don’t like to say that a clinical trial will start by such-and-such year. From the very start of the project, we have been resource limited.

A version of this article first appeared on Medscape.com.

Liver transplant in CRC: Who might benefit?

For carefully selected patients with colorectal cancer (CRC), a liver transplant may offer long-term survival and potentially even cure unresectable liver metastases.

A Norwegian review of 61 patients who had liver transplants for unresectable colorectal liver metastases found that half were still alive at 5 years, and about one in five appeared to be cured at 10 years.

“It seems likely that there is a small group of patients with unresectable colorectal liver metastases who should be considered for transplant, and long-term survival and possibly cure are achievable in these patients with appropriate selection,” Ryan Ellis, MD, and Michael D’Angelica, MD, wrote in a commentary published alongside the study in JAMA Surgery.

The core question, however, is how to identify patients who will benefit the most from a liver transplant, said Dr. Ellis and Dr. D’Angelica, both surgical oncologists in the Hepatopancreatobiliary Service at Memorial Sloan Kettering Cancer Center, New York. Looking closely at who did well in this analysis can offer clues to appropriate patient selection, the editorialists said.

Three decades ago, the oncology community had largely abandoned liver transplant in this population after studies showed overall 5-year survival of less than 20%. Some patients, however, did better, which prompted the Norwegian investigators to attempt to refine patient selection.

In the current prospective nonrandomized study, 61 patients had liver transplants for unresectable metastases at Oslo University Hospital from 2006 to 2020.

The researchers reported a median overall survival of 60.3 months, with about half of patients (50.4%) alive at 5 years.

Most patients (78.3%) experienced a relapse after liver transplant, with a median time to relapse of 9 months and with most occurring within 2 years of transplant. Median overall survival from time of relapse was 37.1 months, with 5-year survival at nearly 35% in this group and with one patient still alive 156 months after relapse.

The remaining 21.7% of patients (n = 13) did not experience a relapse post-transplant at their last follow-up.

Given the variety of responses to liver transplant, how can experts differentiate patients who will benefit most from those who won’t?

The researchers looked at several factors, including the Oslo score and the Fong Clinical Risk Score (FCRS). The Oslo score assesses overall survival among liver transplant patients, while the Fong score predicts recurrence risk for patients with CRC liver metastasis following resection. Both scores assign one point for each adverse prognostic factor.

Among the 10 patients who had an Oslo score of 0, median overall survival was 151.6 months, and the 5-year and 10-year survival rates reached nearly 89%. Among the 27 patients with an Oslo score of 1, median overall survival was 60.3 months, and 5-year overall survival was 54.7%. No patients with an Oslo score of 4 lived for 5 years.

As for FCRS, median overall survival was 164.9 months among those with a score of 1, 90.5 months among those with a score of 2, 59.9 months for those with a score of 3, 32.8 months for those with a score of 4, and 25.3 months for those with the highest score of 5 (P < .001). Overall, these patients had 5-year overall survival of 100%, 63.9%, 49.4%, 33.3%, and 0%, respectively.

In addition to Oslo and Fong scores, metabolic tumor volume on PET scan (PET-MTV) was also a good prognostic factor for survival. Among the 40 patients with MTV values less than 70 cm3, median 5-year overall survival was nearly 67%, while those with values above 70 cm3 had a median 5-year overall survival of 23.3%.

Additional harbingers of low 5-year survival, in addition to higher Oslo and Fong scores and PET-MTV above 70 cm3, included a tumor size greater than 5.5 cm, progressive disease while receiving chemotherapy, primary tumors in the ascending colon, tumor burden scores of 9 or higher, and nine or more liver lesions.

Overall, the current analysis can help oncologists identify patients who may benefit from a liver transplant.

The findings indicate that “patients with liver-only metastases and favorable pretransplant prognostic scoring [have] long-term survival comparable with conventional indications for liver transplant, thus providing a potential curative treatment option in patients otherwise offered only palliative care,” said investigators led by Svein Dueland, MD, PhD, a member of the Transplant Oncology Research Group at Oslo University Hospital.

Perhaps “the most compelling argument in favor of liver transplant lies in the likely curative potential evidenced by the 13 disease-free patients,” Dr. Ellis and Dr. D’Angelica wrote.

But even some patients who had early recurrences did well following transplant. The investigators noted that early recurrences in this population aren’t as dire as in other settings because they generally manifest as slow-growing lung metastases that can be caught early and resected with curative intent.

A major hurdle to broader use of liver transplants in this population is the scarcity of donor grafts. To manage demand, the investigators suggested “extended-criteria donor grafts” – grafts that don’t meet ideal criteria – and the use of the RAPID technique for liver transplant, which opens the door to using one graft for two patients and using living donors with low risk to the donor.

Another challenge will be identifying patients with unresectable colorectal liver metastases who may experience long-term survival following transplant and possibly a cure. “We all will need to keep a sharp eye out for these patients – they might be hard to find!” Dr. Ellis and Dr. D’Angelica wrote.

The study was supported by Oslo University Hospital, the Norwegian Cancer Society, and South-Eastern Norway Regional Health Authority. The investigators and editorialists report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

For carefully selected patients with colorectal cancer (CRC), a liver transplant may offer long-term survival and potentially even cure unresectable liver metastases.

Findings from a Norwegian review of 61 patients who had liver transplants for unresectable colorectal liver metastases found half of patients were still alive at 5 years, and about one in five appeared to be cured at 10 years.

“It seems likely that there is a small group of patients with unresectable colorectal liver metastases who should be considered for transplant, and long-term survival and possibly cure are achievable in these patients with appropriate selection,” Ryan Ellis, MD, and Michael D’Angelica, MD, wrote in a commentary published alongside the study in JAMA Surgery.

The core question, however, is how to identify patients who will benefit the most from a liver transplant, said Dr. Ellis and Dr. D’Angelica, both surgical oncologists in the Hepatopancreatobiliary Service at Memorial Sloan Kettering Cancer Center, New York. Looking closely at who did well in this analysis can offer clues to appropriate patient selection, the editorialists said.

Three decades ago, the oncology community had largely abandoned liver transplant in this population after studies showed overall 5-year survival of less than 20%. Some patients, however, did better, which prompted the Norwegian investigators to attempt to refine patient selection.

In the current prospective nonrandomized study, 61 patients had liver transplants for unresectable metastases at Oslo University Hospital from 2006 to 2020.

The researchers reported a median overall survival of 60.3 months, with about half of patients (50.4%) alive at 5 years.

Most patients (78.3%) experienced a relapse after liver transplant, with a median time to relapse of 9 months and with most occurring within 2 years of transplant. Median overall survival from time of relapse was 37.1 months, with 5-year survival at nearly 35% in this group and with one patient still alive 156 months after relapse.

The remaining 21.7% of patients (n = 13) did not experience a relapse post-transplant at their last follow-up.

Given the variety of responses to liver transplant, how can experts differentiate patients who will benefit most from those who won’t?

The researchers looked at several factors, including Oslo score and Fong Clinical Risk Score. The Oslo score assesses overall survival among liver transplant patients, while the Fong score predicts recurrence risk for patients with CRC liver metastasis following resection. These scores assign one point for each adverse prognostic factor.

Among the 10 patients who had an Oslo Score of 0, median overall survival was 151.6 months, and the 5-year and 10-year survival rates reached nearly 89%. Among the 27 patients with an Oslo Score of 1, median overall survival was 60.3 months, and 5-year overall survival was 54.7%. No patients with an Oslo score of 4 lived for 5 years.


For carefully selected patients with colorectal cancer (CRC), a liver transplant may offer long-term survival and potentially even cure unresectable liver metastases.

A Norwegian review of 61 patients who underwent liver transplant for unresectable colorectal liver metastases found that half were still alive at 5 years, and about one in five appeared to be cured at 10 years.

“It seems likely that there is a small group of patients with unresectable colorectal liver metastases who should be considered for transplant, and long-term survival and possibly cure are achievable in these patients with appropriate selection,” Ryan Ellis, MD, and Michael D’Angelica, MD, wrote in a commentary published alongside the study in JAMA Surgery.

The core question, however, is how to identify patients who will benefit the most from a liver transplant, said Dr. Ellis and Dr. D’Angelica, both surgical oncologists in the Hepatopancreatobiliary Service at Memorial Sloan Kettering Cancer Center, New York. Looking closely at who did well in this analysis can offer clues to appropriate patient selection, the editorialists said.

Three decades ago, the oncology community had largely abandoned liver transplant in this population after studies showed overall 5-year survival of less than 20%. Some patients, however, did better, which prompted the Norwegian investigators to attempt to refine patient selection.

In the current prospective nonrandomized study, 61 patients had liver transplants for unresectable metastases at Oslo University Hospital from 2006 to 2020.

The researchers reported a median overall survival of 60.3 months, with about half of patients (50.4%) alive at 5 years.

Most patients (78.3%) experienced a relapse after liver transplant, with a median time to relapse of 9 months and with most occurring within 2 years of transplant. Median overall survival from time of relapse was 37.1 months, with 5-year survival at nearly 35% in this group and with one patient still alive 156 months after relapse.

The remaining 21.7% of patients (n = 13) did not experience a relapse post-transplant at their last follow-up.

Given the variety of responses to liver transplant, how can experts differentiate patients who will benefit most from those who won’t?

The researchers looked at several factors, including Oslo score and Fong Clinical Risk Score. The Oslo score assesses overall survival among liver transplant patients, while the Fong score predicts recurrence risk for patients with CRC liver metastasis following resection. These scores assign one point for each adverse prognostic factor.

Among the 10 patients who had an Oslo score of 0, median overall survival was 151.6 months, and the 5-year and 10-year survival rates reached nearly 89%. Among the 27 patients with an Oslo score of 1, median overall survival was 60.3 months, and 5-year overall survival was 54.7%. No patients with an Oslo score of 4 lived for 5 years.

As for FCRS, median overall survival was 164.9 months among those with a score of 1, 90.5 months among those with a score of 2, 59.9 months for those with a score of 3, 32.8 months for those with a score of 4, and 25.3 months for those with the highest score of 5 (P < .001). Overall, these patients had 5-year overall survival of 100%, 63.9%, 49.4%, 33.3%, and 0%, respectively.

In addition to the Oslo and Fong scores, metabolic tumor volume on PET scan (PET-MTV) was a good prognostic factor for survival. Among the 40 patients with MTV values below 70 cm3, 5-year overall survival was nearly 67%, while those with values above 70 cm3 had a 5-year overall survival of 23.3%.

Additional harbingers of low 5-year survival, in addition to higher Oslo and Fong scores and PET-MTV above 70 cm3, included a tumor size greater than 5.5 cm, progressive disease while receiving chemotherapy, primary tumors in the ascending colon, tumor burden scores of 9 or higher, and nine or more liver lesions.

Overall, the current analysis can help oncologists identify patients who may benefit from a liver transplant.

The findings indicate that “patients with liver-only metastases and favorable pretransplant prognostic scoring [have] long-term survival comparable with conventional indications for liver transplant, thus providing a potential curative treatment option in patients otherwise offered only palliative care,” said investigators led by Svein Dueland, MD, PhD, a member of the Transplant Oncology Research Group at Oslo University Hospital.

Perhaps “the most compelling argument in favor of liver transplant lies in the likely curative potential evidenced by the 13 disease-free patients,” Dr. Ellis and Dr. D’Angelica wrote.

But even some patients who had early recurrences did well following transplant. The investigators noted that early recurrences in this population aren’t as dire as in other settings because they generally manifest as slow-growing lung metastases that can be caught early and resected with curative intent.

A major hurdle to broader use of liver transplants in this population is the scarcity of donor grafts. To manage demand, the investigators suggested using “extended-criteria donor grafts” – grafts that don’t meet ideal criteria – as well as the RAPID technique for liver transplant, which opens the door to using one graft for two patients and to using living donors with low risk to the donor.

Another challenge will be identifying patients with unresectable colorectal liver metastases who may experience long-term survival following transplant and possibly a cure. “We all will need to keep a sharp eye out for these patients – they might be hard to find!” Dr. Ellis and Dr. D’Angelica wrote.

The study was supported by Oslo University Hospital, the Norwegian Cancer Society, and South-Eastern Norway Regional Health Authority. The investigators and editorialists report no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM JAMA SURGERY


Which factors distinguish superagers from the rest of us?


Even at an advanced age, superagers have the memory of someone 20 or 30 years their junior. But why is that? A new study shows that, in superagers, age-related atrophy of the gray matter, especially in the areas responsible for memory, develops much more slowly than in normal older adults. However, the study also emphasizes the importance of physical and mental fitness for a healthy aging process.

“One of the most important unanswered questions with regard to superagers is: ‘Are they resistant to age-related memory loss, or do they have coping mechanisms that allow them to better offset this memory loss?’ ” wrote Marta Garo-Pascual, a PhD candidate at the Autonomous University of Madrid, Spain, and colleagues in the Lancet Healthy Longevity. “Our results indicate that superagers are resistant to these processes.”
 

Six years’ monitoring

From a cohort of older adults who had participated in a study aiming to identify early indicators of Alzheimer’s disease, the research group chose 64 superagers and 55 normal senior citizens. The latter served as the control group. While the superagers performed just as well in a memory test as people 30 years their junior, the control group’s performance was in line with their age and level of education.

All study participants were over age 79 years. Both the group of superagers and the control group included more females than males. On average, they were monitored for 6 years. During this period, a checkup was scheduled annually with an MRI examination, clinical tests, blood tests, and documentation of lifestyle factors.

For Alessandro Cellerino, PhD, of the Leibniz Institute on Aging–Fritz Lipmann Institute in Jena, Germany, this is the most crucial aspect of the study. “Even before this study, we knew that superagers demonstrated less atrophy in certain areas of the brain, but this was always only ever based on a single measurement.”
 

Memory centers protected

The MRI examinations confirmed that in superagers, gray matter atrophy in the regions responsible for memory (such as the medial temporal lobe and cholinergic forebrain), as well as in regions important for movement (such as the motor thalamus), was less pronounced. In addition, the volume of gray matter in these regions, especially in the medial temporal lobe, decreased much more slowly in the superagers than in the control subjects over the study period.

Ms. Garo-Pascual and associates used a machine-learning algorithm to differentiate between superagers and normal older adults. From the 89 demographic, lifestyle, and clinical factors entered into the algorithm, two were the most important for the classification: the ability to move and mental health.
 

Mobility and mental health

Clinical tests such as the Timed Up-and-Go Test and the Finger Tapping Test revealed that superagers can be distinguished from the normally aging control subjects by their mobility and fine motor skills. Their physical condition was better, although, by their own admission, they were no more active than the control subjects in day-to-day life. According to Dr. Cellerino, this finding confirms that physical activity is paramount for cognitive function. “These people were over 80 years old – the fact that there was not much difference between their levels of activity is not surprising. Much more relevant is the question of how you get there – i.e., how active you are at the ages of 40, 50 or even 60 years old.”

Remaining active is important

As a matter of fact, the superagers indicated that generally they had been more active than the control subjects during their middle years. “Attempting to stay physically fit is essential; even if it just means going for a walk or taking the stairs,” said Dr. Cellerino.

On average, the superagers also fared much better on measures of mental health than the control subjects: they suffered significantly less from depression and anxiety disorders. “Earlier studies suggest that depression and anxiety disorders may influence performance in memory tests across all ages and that they are risk factors for developing dementia,” said Dr. Cellerino.

To avoid mental health issues in later life, gerontologist Dr. Cellerino recommended remaining socially engaged and involved. “Depression and anxiety are commonly also a consequence of social isolation,” he said.
 

Potential genetic differences

Blood sample analyses demonstrated that the superagers had lower concentrations of biomarkers for neurodegenerative diseases than the control group. In contrast, there was no difference between the two groups in the prevalence of the APOE ε4 allele, one of the most important genetic risk factors for Alzheimer’s disease. Nevertheless, Ms. Garo-Pascual and associates assume that genetics also play a role: even with all 89 variables, the algorithm could distinguish superagers from normal older adults only 66% of the time, suggesting that additional factors, such as genetic differences, must be in play.

Body and mind

Since this is an observational study, whether the determined factors have a direct effect on superaging cannot be ascertained, the authors wrote. However, the results are consistent with earlier findings.

“Regarding the management of old age, we actually haven’t learned anything more than what we already knew. But it does confirm that physical and mental function are closely entwined and that we must maintain both to age healthily,” Dr. Cellerino concluded.

This article was translated from the Medscape German Edition. A version appeared on Medscape.com.



FROM THE LANCET HEALTHY LONGEVITY


Patient safety vs. public health: The ethylene oxide dilemma


Ethylene oxide is a compound used to sterilize more than 20 billion devices sold in the U.S. every year. Although this sterilization process helps keep medical devices – and patients – safe, the odorless, flammable gas may also be harming people who live near sterilization plants and who may inhale the compound, which has been linked to an elevated risk of cancer.

Regulatory agencies are currently feuding over the best way to address the dilemma: preserving patient safety versus protecting public health. Lawmakers are weighing in on the matter, which has been the source of multiple civil lawsuits filed by individuals who say they have suffered health problems as a result of exposure to ethylene oxide.

The Environmental Protection Agency and the U.S. Food and Drug Administration agree that use of the compound should be limited, but they are at odds about how quickly limits should be put in place, according to Axios.

A new commercial standard for ethylene oxide proposed by the EPA in April would impose stricter emission restrictions for sterilization facilities and chemical plants – a move that would cut ethylene oxide emissions by 80%, the EPA estimates.

While the FDA says it “shares concerns about the release of ethylene oxide at unsafe levels into the environment,” the agency cautions that moving too fast to cut emissions would disrupt the medical supply chain, which is already experiencing turbulence. The U.S. has been facing the worst drug supply shortages in a decade in addition to severe medical device shortages.

Currently, other methods of sterilization cannot replace the use of ethylene oxide for many devices. Ethylene oxide is used to sterilize about half of all medical devices in the U.S., the FDA says. Given the country’s reliance on this compound for sterilization, the FDA says it is “equally concerned about the potential impact of shortages of sterilized medical devices that would result from disruptions in commercial sterilizer facility operations.”

In 2019, Illinois temporarily closed a sterilization facility over concern regarding ethylene oxide emissions. The closure caused a shortage of pediatric breathing tubes.

Some lawmakers have also weighed in: an Interior-Environment bill would require FDA certification that any action by the EPA would not cause a medical device shortage.

The FDA has been working to identify safe alternatives to ethylene oxide for sterilizing medical supplies as well as strategies to reduce emissions of ethylene oxide by capturing the gas or by turning it into a harmless byproduct. In 2019, the FDA launched a pilot program to incentivize companies to develop new sterilization technologies.

“The FDA remains focused in our commitment to encourage novel ways to sterilize medical devices while reducing adverse impacts on the environment and public health and developing solutions to avoid potential shortages of devices that the American public relies upon,” the agency said.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Ethylene oxide is a compound used to sterilize more than 20 billion devices sold in the U.S. every year. Although this sterilization process helps keep medical devices – and patients – safe, the odorless, flammable gas may also be harming people who live near sterilization plants and who may inhale the compound, which has been linked to an elevated risk of cancer.

Regulatory agencies are currently feuding over the best way to address the dilemma: preserving patient safety versus protecting public health. Lawmakers are weighing in on the matter, which has been the source of multiple civil lawsuits filed by individuals who say they have suffered health problems as a result of exposure to ethylene oxide.

The Environmental Protection Agency and the U.S. Food and Drug Administration agree that use of the compound should be limited, but they are at odds about how quickly limits should be put in place, according to Axios.

A new commercial standard for ethylene oxide proposed by the EPA in April would impose stricter emission restrictions for sterilization facilities and chemical plants – a move that would cut ethylene oxide emissions by 80%, the EPA estimates.

While the FDA says it “shares concerns about the release of ethylene oxide at unsafe levels into the environment,” the agency cautions that moving too fast to cut emissions would disrupt the medical supply chain, which is already experiencing turbulence. The U.S. has been facing the worst drug supply shortages in a decade in addition to severe medical device shortages.

Currently, other methods of sterilization cannot replace the use of ethylene oxide for many devices. Ethylene oxide is used to sterilize about half of all medical devices in the U.S., the FDA says. Given the country’s reliance on this compound for sterilization, the FDA says it is “equally concerned about the potential impact of shortages of sterilized medical devices that would result from disruptions in commercial sterilizer facility operations.”

In 2019, Illinois temporarily closed a sterilization facility over concern regarding ethylene oxide emissions. The closure caused a shortage of a pediatric breathing tube.

Some lawmakers agree that an Interior-Environment bill would require FDA certification that any action by the EPA would not cause a medical device shortage.

The FDA has been working to identify safe alternatives to ethylene oxide for sterilizing medical supplies as well as strategies to reduce emissions of ethylene oxide by capturing the gas or by turning it into a harmless byproduct. In 2019, the FDA launched a pilot program to incentivize companies to develop new sterilization technologies.

“The FDA remains focused in our commitment to encourage novel ways to sterilize medical devices while reducing adverse impacts on the environment and public health and developing solutions to avoid potential shortages of devices that the American public relies upon,” the agency said.

A version of this article first appeared on Medscape.com.

Ethylene oxide is a compound used to sterilize more than 20 billion devices sold in the U.S. every year. Although this sterilization process helps keep medical devices – and patients – safe, the odorless, flammable gas may also be harming people who live near sterilization plants and who may inhale the compound, which has been linked to an elevated risk of cancer.

Regulatory agencies are currently feuding over the best way to address the dilemma: preserving patient safety versus protecting public health. Lawmakers are weighing in on the matter, which has been the source of multiple civil lawsuits filed by individuals who say they have suffered health problems as a result of exposure to ethylene oxide.

The Environmental Protection Agency and the U.S. Food and Drug Administration agree that use of the compound should be limited, but they are at odds about how quickly limits should be put in place, according to Axios.

A new commercial standard for ethylene oxide proposed by the EPA in April would impose stricter emission restrictions for sterilization facilities and chemical plants – a move that would cut ethylene oxide emissions by 80%, the EPA estimates.

While the FDA says it “shares concerns about the release of ethylene oxide at unsafe levels into the environment,” the agency cautions that moving too fast to cut emissions would disrupt the medical supply chain, which is already experiencing turbulence. The U.S. has been facing the worst drug supply shortages in a decade in addition to severe medical device shortages.

Currently, other methods of sterilization cannot replace the use of ethylene oxide for many devices. Ethylene oxide is used to sterilize about half of all medical devices in the U.S., the FDA says. Given the country’s reliance on this compound for sterilization, the FDA says it is “equally concerned about the potential impact of shortages of sterilized medical devices that would result from disruptions in commercial sterilizer facility operations.”

In 2019, Illinois temporarily closed a sterilization facility over concern regarding ethylene oxide emissions. The closure caused a shortage of pediatric breathing tubes.

Some lawmakers agree that an Interior-Environment bill would require FDA certification that any action by the EPA would not cause a medical device shortage.

The FDA has been working to identify safe alternatives to ethylene oxide for sterilizing medical supplies as well as strategies to reduce emissions of ethylene oxide by capturing the gas or by turning it into a harmless byproduct. In 2019, the FDA launched a pilot program to incentivize companies to develop new sterilization technologies.

“The FDA remains focused in our commitment to encourage novel ways to sterilize medical devices while reducing adverse impacts on the environment and public health and developing solutions to avoid potential shortages of devices that the American public relies upon,” the agency said.

A version of this article first appeared on Medscape.com.


Alcohol consumption may not influence breast cancer prognosis, study suggests


Drinking alcohol around the time of a breast cancer diagnosis may not have effects on prognosis that are mediated by body mass index (BMI), an analysis of data from a prospective cohort study suggests.


The study appears to show that drinking up to one serving of alcohol daily, including wine, beer, and liquor, was not associated with worse outcomes after breast cancer diagnosis. The authors say these findings could inform more specific guidelines on alcohol use aimed at preventing death and recurrence among cancer survivors.

Among 3,659 women followed for a mean of 11.2 years after a breast cancer diagnosis, overall alcohol consumption in the months before and up to 6 months after diagnosis was not associated with recurrence or mortality after adjusting for numerous factors such as age at diagnosis, cancer stage, socioeconomic details, smoking history, and preexisting conditions.

However, women with obesity (BMI of 30 kg/m2 or greater) had a lower risk of mortality with increasing alcohol consumption, both for occasional drinking of 2 or more alcohol servings per week (hazard ratio, 0.71) and for regular drinking of at least one alcohol serving daily (HR, 0.77), in a dose-response manner, Marilyn L. Kwan, PhD, and colleagues found.

Dr. Kwan is a senior research scientist at Kaiser Permanente Northern California Division of Research, Oakland.

Women with a BMI less than 30 kg/m2 did not have a higher risk of mortality, but a nonsignificant increase in the risk of recurrence was observed among those who drank occasionally (HR, 1.29) and regularly (HR, 1.19), the investigators reported.

The findings were published online in Cancer.

Women included in the current study were participants in the Pathways Study and were diagnosed with stage I-IV breast cancer between 2003 and 2015. During follow-up, 524 recurrences and 834 deaths occurred, including 369 breast cancer-specific deaths, 314 cardiovascular disease-specific deaths, and 151 deaths from other health problems.

Alcohol consumption was assessed for the 6 months prior to cohort entry, which occurred at an average of about 2 months after diagnosis, as well as 6 months later – at an average of about 8 months after diagnosis – using a food-frequency questionnaire.

Compared with nondrinkers (36.9% of the cohort), drinkers were more likely to be younger, more educated, and current or past smokers, the investigators noted.

“This profile appears counterintuitive yet might reflect a healthier lifestyle contributing to better overall survival. Furthermore, higher levels of alcohol consumption could lead to improvement in insulin sensitivity and reduction in insulin-like growth factor-1,” they speculated, noting that reduced fasting insulin concentrations and lower insulin-like growth factor-1 levels are linked with a decreased risk of type 2 diabetes, cardiovascular disease, and cancer.

“Many women with a history of breast cancer are interested in how to improve their prognosis and survival by making lifestyle changes after diagnosis,” they wrote, explaining the rationale for the study. “Current cancer prevention guidelines recommend avoiding alcohol intake or limiting consumption to no more than one drink per day for women. However, no specific guideline exists for cancer survivors other than following the cancer prevention guidelines to reduce the risk of a second cancer.”

High-quality studies on the impact of alcohol consumption on breast cancer prognosis are lacking, they added.

“Given that consuming alcohol is a potentially modifiable lifestyle factor after breast cancer diagnosis, further confirmation is warranted in other large prospective studies of breast cancer survivors with detailed exposure assessment and focus on body size,” they concluded.

The group is the first to report this finding in women with obesity, and they “strongly believe more research is needed to see if the same association is seen in other studies,” Dr. Kwan told this news organization.

“After a cancer diagnosis, many patients are motivated to make lifestyle changes,” she said. “That often includes adding exercise to their daily routine and eating a healthier diet. Our study findings suggest that doctors can tell patients that having up to a glass of alcohol a day is not likely to increase their risk of a breast cancer recurrence.”

This study was funded by the National Cancer Institute. The authors reported having no disclosures.
 

Article Source

FROM CANCER
