Diet and Cancer: Here's What I Tell Patients

One of the most common questions my patients ask is, “What diet can help me beat this cancer?” It is a profoundly important question, worthy of our efforts to answer. In this brief essay, I will examine this question in depth and explore the broader clinical and scientific themes it brings into play.

Low-Hanging Fruit: Nutrition Science

A cancer diagnosis can be a deeply disempowering experience. Although I have not lived with cancer myself, I have seen this play out repeatedly over the past 5 years in my role as an oncologist treating patients with hematologic malignancies.

Our diet is an important part of our personal identity, culturally and spiritually. If lifestyle changes, such as a modified diet or more exercise, can contribute to cancer treatment, it may help us regain a sense of control over our lives, one that cancer so often cruelly strips away. I hypothesize that, among other factors, this is why diet is so important to our patients.

Another factor is exposure to a compelling diet-cancer narrative. Nearly every day, a media headline appears claiming that eating a particular food, or drinking coffee, can either increase or decrease your risk for a certain disease.

These claims, however, are often based on studies of large observational datasets where individuals fill out surveys about their dietary habits and are subsequently assessed for disease outcomes. In these studies, people aren’t asked to eat a particular diet; instead, their dietary habits are analyzed by researchers who have endless permutations to explore. This, in a nutshell, is the field of nutritional epidemiology.

In my opinion, nutritional epidemiology represents the collision of a well-intentioned effort to answer clinically meaningful questions with the ease, and near-infinite permutations, of dietary questions that can be asked of an ever-growing number of datasets.

Now, factor in the never-ending appetite (pun intended) of journalism and the public for dietary studies, and you create the perfect storm of incentives that drives a flood of low-quality nutritional science. These studies are highly malleable to analytical choices and can essentially produce results consistent with your prior beliefs, whatever your philosophical inclination (pro-keto, pro-vegan, etc.). I love quoting to my trainees a study showing that, depending on which variables are included and how the analysis is conducted, the same dataset can be used to show that red meat increases, decreases, or has no effect on all-cause mortality. Unfortunately, much of the evidence base for diet in cancer comes from similarly confounded, low-quality studies.

Diet and Cancer

So, what do randomized trials show for diet and cancer?

The highest-quality evidence is generated from randomized controlled trials. One of their key advantages is the ability to control both measured and unmeasured confounders.

Unfortunately, the evidence supporting diet as an anticancer modality in randomized trials in patients with cancer is bleak. We did a systematic review of all randomized trials of dietary intervention ever done in patients with cancer. Most of the trials measured outcomes such as feasibility (often small pilot studies that tracked variables such as weight changes or lab values). The trials that measured clinical endpoints, such as survival, were largely negative and demonstrated no meaningful effect of diet on outcomes. Take trials exploring whether a Mediterranean diet helps prevent breast cancer recurrence, or whether a diet rich in fruits and vegetables improves prostate cancer outcomes. Although these diets may offer benefits, these studies found that specific diets did not change the natural history of cancer.

Myeloma and Diet

In my specialty, multiple myeloma, I am thankful that some trials are beginning to shed light on whether diet influences cancer outcomes.

One study, recently published in Cancer Discovery, explored whether a high-fiber, plant-based diet could slow or delay progression from myeloma precursor conditions toward full-blown multiple myeloma. The trial enrolled 23 participants, with primary endpoints of dietary adherence and changes in BMI. Measures of progression to multiple myeloma were exploratory at best. Yet the media coverage, as well as the majority of the discussion and results sections of the manuscript, claimed that dietary changes can prevent progression to myeloma.

However, the study design and conclusions were flawed. The paper focused on two patients who had some improvement in disease trajectory, while descriptions of patients who had an increase in their bone marrow plasma cell percentage were relegated to the supplemental section.

As a principal investigator of a trial in smoldering myeloma in which we use advanced imaging as an alternative to pharmacologic treatment, I frequently see myeloma markers fluctuate and often decrease. I attribute these changes to random variation, or possibly regression to the mean, rather than to the effect of any intervention.

Planned randomized studies by this group use stool butyrate level as the primary endpoint and implement dietary interventions for a limited period (2 weeks in one study and 12 weeks in another) to again assess the impact of a high-fiber, plant-based diet on progression to myeloma. Although there are no data yet, the limited timeframes of these studies severely limit generalizability for outcomes that would truly matter, such as cancer control and longevity. There is also no evidence that changes in stool butyrate levels influence patient outcomes.

High-quality science, whether it is evaluating diet or other interventions, requires high-quality data, effort, funding, and time. It is not impossible.

We can draw inspiration from the CHALLENGE trial. This large randomized trial, which took over a decade to complete, assessed the benefit of a structured exercise program in the adjuvant setting for colon cancer. The endpoint was disease-free survival, and the intervention was deployed over a much longer period: 3 years, rather than 2 weeks. The trial took years from inception to completion, but it yielded a conclusive result and will probably lead to more dedicated efforts to facilitate exercise programs for patients with cancer.

Our patients deserve the same effort as the CHALLENGE trial to answer their important dietary questions. Until such trials are completed, we must acknowledge, with humility, that despite the common sense and feel-good factor that many diets offer us, their impact on cancer remains uncertain.

Conversely, we must recognize that even if diet does not cure or alter the course of a certain cancer, it can still impact quality of life, treatment tolerance, and other supportive care outcomes, making it an important factor in patient care.

This is what I tell my patients: it is unlikely that any one diet will change the trajectory of your cancer. Focus on eating healthy, and remember that most things in moderation are fine. Your diet remains an important risk factor and determinant of health outcomes beyond cancer. Eat what makes you happy. You are going through a tough time, and this is not the moment to impose stringent restrictions on yourself.

A version of this article first appeared on Medscape.com.

FDA Grants Full Approval to Encorafenib in Metastatic CRC

The FDA has granted traditional approval to encorafenib (Braftovi, Pfizer) in combination with cetuximab (Erbitux, Eli Lilly) and fluorouracil-based chemotherapy for treatment of adults with metastatic colorectal cancer with a BRAF V600E mutation, as detected by an FDA-authorized test.

Encorafenib received accelerated approval for use with cetuximab plus mFOLFOX6 in this patient population in 2024, based on results from the BREAKWATER trial showing improved objective response rates. The conversion to full approval is supported by progression-free and overall survival outcomes.

As reported previously by Medscape Medical News, the combination of encorafenib/cetuximab/mFOLFOX6 doubled median overall survival compared with standard chemotherapy with or without bevacizumab. At a median follow-up of 22 months, overall survival was 30 months with the encorafenib regimen vs 15 months with standard chemotherapy (hazard ratio [HR], 0.49; P < .0001).

At a median follow-up of 16.8 months, median progression-free survival was 12.8 months in the encorafenib group vs 7.1 months in the standard chemotherapy group (HR, 0.53; P < .0001).

The survival results are “unprecedented” and “practice changing” for these patients, who historically have a poor prognosis, lead investigator Elena Élez, MD, PhD, of Vall d’Hebron University Hospital in Barcelona, Spain, said in presenting the findings at the American Society of Clinical Oncology (ASCO) 2025 annual meeting.

The results were simultaneously published in The New England Journal of Medicine.

Speaking at the ASCO meeting, study discussant Andrea Sartore-Bianchi, MD, of the University of Milan, Italy, called the results “striking” and said the encorafenib combination should be considered the first-line standard of care.

As for safety, the rate of treatment-related grade 3/4 adverse events in the trial was 76% with encorafenib vs 59% with standard chemotherapy. Patients receiving encorafenib also had higher rates of anemia, arthralgia, rash, and pyrexia, but there was no substantial increase in treatment discontinuation.

The recommended encorafenib dose is 300 mg (four 75 mg capsules) once daily, in combination with cetuximab and mFOLFOX6 or in combination with cetuximab and FOLFIRI until disease progression or unacceptable toxicity, the FDA said in its approval announcement.

Prescribing information includes warnings and precautions for new primary malignancies (cutaneous and noncutaneous), tumor promotion in BRAF-wild-type tumors, cardiomyopathy, hepatotoxicity, hemorrhage, uveitis, QT prolongation, and embryo-fetal toxicity.

A version of this article first appeared on Medscape.com.

Housing Support May Boost CRC Screening in Vets Experiencing Homelessness

TOPLINE: Among Veterans Health Administration (VHA) patients experiencing homelessness, gaining housing is linked to higher 24-month colorectal cancer (CRC) and breast cancer screening completion. In cohorts of 117,619 veterans eligible for CRC screening and 6517 veterans eligible for breast cancer screening, screening occurred in 36.1% and 47.9%, respectively, after housing gain vs 18.8% and 23.7% when homelessness persisted.

METHODOLOGY

  • A retrospective cohort study examined all veterans experiencing homelessness who received care at the VHA from 2011 to 2021 and were eligible for but not up to date on CRC and breast cancer screening.

  • A total of 117,619 veterans experiencing homelessness who were eligible for but not up to date on CRC screening (aged 50-75 years without a prior cancer diagnosis, inflammatory bowel disease, or colectomy) and 6517 who were eligible for but not up to date on breast cancer screening (women aged 50-75 years without a prior cancer diagnosis, lumpectomy, or mastectomy) were included at their index clinic visit.

  • Exposure was defined as gaining housing within 24 months following the index clinic visit, identified through the Homeless Screening Clinical Reminder, US Department of Veterans Affairs (VA) Homeless Operations, Management, and Evaluation System assessments, or US Department of Housing and Urban Development-VA Supportive Housing program move-in dates.

  • Primary outcomes were undergoing screening for CRC (colonoscopy, flexible sigmoidoscopy, computed tomography colonography, barium enema, or stool-based study) or breast cancer (mammogram) at a VHA facility or paid for by the VA within 24 months following the index clinic visit.

TAKEAWAY

  • Among veterans who gained housing, 36.1% underwent CRC screening and 47.9% underwent breast cancer screening during the 24-month observation period, compared with 18.8% and 23.7%, respectively, of veterans who remained homeless.

  • Veterans who gained housing had 2.3 times the adjusted hazard of undergoing CRC screening compared with those who remained homeless (adjusted hazard ratio [aHR], 2.3; 95% CI, 2.2-2.3; P < .001).

  • Veterans who gained housing had 2.4 times the adjusted hazard of undergoing breast cancer screening compared with those who remained homeless (aHR, 2.4; 95% CI, 2.2-2.7; P < .001).

  • Median (interquartile range [IQR]) time from index visit to cancer screening was 8 months (4-15) for CRC screening and 8 months (3-14) for breast cancer screening; median (IQR) time from gaining housing to screening was 4 months (1-9) and 3 months (1-8), respectively.
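The hazard-ratio bullets above can be roughly sanity-checked against the reported raw completion rates. A minimal sketch in Python, assuming proportional hazards (an illustrative simplification, not the authors' actual adjusted model): under that assumption, the comparison group's still-unscreened fraction equals the reference group's unscreened fraction raised to the power of the hazard ratio.

```python
import math  # not strictly needed; the power operator suffices


def implied_screened_fraction(p_ref: float, hr: float) -> float:
    """Under a proportional-hazards assumption, convert a reference group's
    screening completion fraction (p_ref) and a hazard ratio (hr) into the
    implied completion fraction for the comparison group:
    S_comp(t) = S_ref(t) ** hr, where S is the unscreened fraction."""
    s_ref = 1.0 - p_ref          # fraction still unscreened at 24 months
    return 1.0 - s_ref ** hr


# Veterans who remained homeless: 18.8% completed CRC screening; aHR = 2.3
implied = implied_screened_fraction(0.188, 2.3)
print(f"{implied:.1%}")  # ≈ 38.1%, close to the observed 36.1% after housing gain
```

The implied 38.1% is close to the observed 36.1%, suggesting the reported aHR and the raw rates are mutually consistent; the gap reflects covariate adjustment and censoring that this sketch ignores.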

IN PRACTICE: Veterans experiencing homelessness who gain housing have higher rates of cancer screening. “This finding supports promotion of housing to improve health outcomes for homeless individuals,” the authors of the study wrote.

SOURCE: The study was led by researchers at the University of California, San Francisco. It was published online in Annals of Family Medicine.

LIMITATIONS: Residual unmeasured confounding was likely due to the observational design of this study, because veterans able to navigate services to obtain housing may also be more likely to complete preventive care. Housing transitions may be misclassified because the Homeless Screening Clinical Reminder was not designed to track changes and may not be administered to veterans already identified as experiencing homelessness. The study did not capture data for screening completed outside VHA or that was not paid for by it. The study cohort only includes veterans with VHA contact, which may limit generalizability.

DISCLOSURES: Benioff Homelessness and Housing Initiative provided grant support for the work; Project Grant K24AG046372 was also awarded to Kushel for the study. Decker is a National Clinician Scholar with salary support from the US Department of Veterans Affairs and reported receiving personal fees from Moon Surgical. Kanzaria and Kushel are faculty members of the Benioff Homelessness and Housing Initiative; Kanzaria also reported advisory work for Amae Health. Kushel is listed as serving on boards including Housing California, National Homelessness Law Center, and Steinberg Institute; other authors reported no conflicts.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.


Advanced CTE Associated With Dementia in Study of Veterans


A study in veterans has found a link between dementia and severe chronic traumatic encephalopathy (CTE)—a degenerative brain disorder diagnosed after death that typically affects contact sports athletes and military personnel. Brain donors with advanced CTE (stage 4) were nearly 4.5 times more likely to have developed dementia than those without CTE. Individuals with stage 3 CTE had more than double the risk of dementia. The study was published in January in Alzheimer's and Dementia. 

CTE stages 1 and 2 were not associated with dementia, cognitive impairment, or functional decline. Researchers also did not observe mood or behavioral symptoms at any stage of the disease. Researchers from the Boston University CTE Center and Veterans Affairs Boston Healthcare System (VABHS) led the study, which was funded by grants from the National Institutes of Health (NIH).

“This study proves that CTE is not a benign brain disease and that it has a significant impact on people’s lives,” coauthor Ann C. McKee, MD, chief of neuropathology at VABHS and director of the Boston University CTE Center, told Federal Practitioner. 

McKee added that this research “provides evidence of a robust association between CTE and dementia, as well as cognitive symptoms, supporting our suspicions of CTE being a possible cause of dementia.”

Because CTE can only be diagnosed after death, researchers analyzed 614 donated brains from individuals with known exposure to repetitive head impacts. Among these donors, 366 (59.6%) had CTE and 248 (40.4%) did not. Most donors were male (97%), and most had played American football (80.3%); 20 donors (3.3%) were female. The average age at death was 52 years (range, 13-98 years).

None of the donors had any of the 3 most common neurodegenerative causes of dementia: Alzheimer disease, dementia with Lewy bodies, or frontotemporal lobar degeneration.

Researchers also collected clinical information from individuals close to the donors, typically family members or close contacts, through retrospective evaluations that combined online surveys, telephone interviews, and medical records.

Data collected included demographics; educational attainment; athletic history (including sport, level of play, position, age at first exposure, and duration); military history; traumatic brain injury history; substance use; and medical, social, and family histories. 

CTE is often misdiagnosed as Alzheimer disease. In this study, among those diagnosed with dementia, 40% had been told they had Alzheimer disease, yet autopsy findings later showed no evidence of the disease. Another 38% were told the cause of their dementia was unknown or could not be specified.

“In cases of dementia, when there is a history of repetitive head impacts from contact sports, military activities, or other exposures, CTE should be considered in the differential diagnosis,” McKee said. “Efforts should be made to distinguish CTE from Alzheimer disease and other causes of dementia during life.”

CTE shares features with Alzheimer disease, specifically the accumulation of abnormal tau protein. In healthy brains, tau helps maintain the stability and proper function of nerve cells. In CTE, however, tau accumulates in small clumps inside nerve cells that eventually form larger tangles.

Normally, the body clears excess tau protein, but in neurodegenerative diseases this process fails. The ensuing buildup damages brain cells, leading to cell death and the progressive symptoms of dementia.

Understanding how brain changes, including those related to CTE, relate to symptoms is of “paramount importance,” said Heather M. Snyder, PhD, senior vice president of medical and scientific relations at the Alzheimer’s Association in Chicago, who was not involved in the study.

Snyder described the research as “the first study to definitely demonstrate that brain changes caused by CTE are associated with the presence of dementia symptoms.” She also noted that the findings suggest a dose-response relationship, with more severe brain changes linked to worse cognitive symptoms.

The findings “open up new paths of research,” Snyder told Federal Practitioner, but also emphasized that improved tools are needed to detect these CTE-related brain changes in living individuals. 

“While we have made significant progress in understanding the diseases that cause dementia, we have much to learn,” Snyder said. “Continued and steadfast investment in research remains a priority to improve early detection during life and develop personalized approaches.”

Ann McKee reported that she is a member of the Mackey-White Committee of the National Football League Players Association and received funding from the National Institutes of Health, the US Department of Veterans Affairs, the Buoniconti Foundation, and the MacParkman Foundation during the conduct of the study. She also reported receiving honoraria for speaking engagements.

Heather Snyder is a full-time employee of the Alzheimer’s Association, Chicago, Illinois, and has a spouse who is employed by Abbott in an unrelated area. She has no financial conflicts to disclose.


Stereotactic Radiation Linked to Better Brain Mets Outcomes


TOPLINE:

In patients with 5-20 brain metastases, stereotactic radiation improved symptoms and reduced interference with daily functioning compared with hippocampal-avoidance whole brain radiation. The weighted composite MD Anderson Symptom Inventory-Brain Tumor score changed from 2.69 to 2.37 with stereotactic radiation vs from 2.29 to 3.03 with hippocampal-avoidance whole brain radiation.

METHODOLOGY:

  • Randomized trials have shown that stereotactic radiation preserves neurocognitive function and patient-reported outcomes compared with whole brain radiation in patients with four or fewer brain metastases. For patients with more than four brain metastases, published randomized comparisons of stereotactic radiation vs whole brain radiation were lacking prior to this study.
  • Researchers conducted a phase 3, open-label, randomized clinical trial at four US-based centers, enrolling 196 patients between April 2017 and May 2024, with final follow-up in March 2025.
  • Participants included patients with 5-20 brain metastases and no prior brain-directed radiation, with a median of 14 brain metastases per patient and 25% having undergone prior neurosurgical resection.
  • The primary outcome was the mean weighted patient-reported symptom severity and interference score change over 6 months, measured with the MD Anderson Symptom Inventory-Brain Tumor instrument (scores range from 0 to 10; change range, -10 to 10).
  • Stereotactic radiation was delivered in either 1 day (20 Gy) or five daily fractions (30 Gy, or 25 Gy for surgically removed tumors), while hippocampal-avoidance whole brain radiation was administered as 30 Gy in 10 daily fractions with memantine.

TAKEAWAY:

  • Primary outcome analysis showed that stereotactic radiation was linked to a change in the weighted composite MD Anderson Symptom Inventory-Brain Tumor score from 2.69 to 2.37 (mean change, -0.32), vs 2.29 to 3.03 (mean change, 0.74) with hippocampal-avoidance whole brain radiation (mean difference, -1.06; 95% CI, -1.54 to -0.58; P < .001).
  • Functional independence via the Barthel Index was better in the stereotactic radiation group at 4 months (mean difference, 6.79; 95% CI, 1.19-12.38; P = .02) and 12 months (mean difference, 7.92; 95% CI, 1.34-14.49; P = .02).
  • New brain metastases were more frequent with stereotactic radiation (1-year cumulative incidence, 45.4% vs 24.2%; P = .003), while local recurrence was lower (3.2% vs 39.5%; P < .001).
  • Grade 3-5 adverse events occurred in 12% of stereotactic radiation patients vs 13% in the hippocampal-avoidance whole brain radiation group, with fatigue being most common (28% vs 44%).

IN PRACTICE:

“While [the trial] clearly demonstrates that patients with 5-20 brain metastases have improved symptom burden and lowered interference with daily functioning, there are questions that remain for stereotactic radiosurgery in this population. Patients receiving stereotactic radiosurgery for brain metastases have a higher need for future salvage procedures, and this rate of salvage procedures is higher for patients with an increased number of brain metastases at diagnosis… Moreover, it has been shown that the upfront decision between stereotactic radiosurgery and whole brain radiotherapy is the single decision that contributes most to the cost of care of a patient with brain metastases,” said Michael Chan, MD, in an accompanying editorial published in JAMA.

SOURCE:

The study was led by Ayal A. Aizer, MD, MHS, Brigham and Women’s Hospital/Dana-Farber Cancer Institute, Boston. It was published online on February 19 in JAMA.

LIMITATIONS:

According to the authors, the study was not blinded, and the primary outcome was subjective. High mortality limited long-term data collection, reducing precision and biasing outcomes toward survivors. Additionally, randomization was not stratified by treating center, allowing possible unmeasured imbalances. The minimal clinically important difference had not been defined for many study outcome measures.

DISCLOSURES:

The trial was supported by Varian, a Siemens Healthineers Company. Aizer disclosed receiving grants from NH TherAguix Research outside the submitted work. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article first appeared on Medscape.com.


Unexpected Survival Signal: Aprepitant Use During Chemotherapy Linked to Improved Breast Cancer Outcomes


Transcript generated from video captions.

Hello. I'm Dr Maurie Markman, from City of Hope. I'd like to discuss over the next few minutes an absolutely provocative — and I don't use that term loosely — report that I would humbly suggest may, or perhaps even should, change standard of practice in the care of patients with breast cancer. The paper was published in the Journal of the National Cancer Institute, entitled, “Aprepitant Use During Chemotherapy and Association With Survival in Women With Early Breast Cancer.”

This is a very complex, important, and provocative topic, and I'm only going to have a short time to summarize these results, but again, I would suggest this is a topic worthy of very serious consideration in terms of the implications.

Aprepitant, as many of you know, is a standard antiemetic that has been used for many years. It’s very effective and very well tolerated. There’s not any question about that. It’s a supportive-care medication that may be used or not used; a variety of drugs might be used in its place.

However, there are preclinical data — I cannot go into any kind of detail here — showing that aprepitant slows breast cancer growth and progression in these preclinical settings.

What we're looking at in this report is retrospective data linking a nationwide registry of 13,811 women diagnosed with early breast cancer between 2008 and 2020 in Norway. These are population-based data that were very well documented because that's how things work in Scandinavian countries in general, but in Norway in particular. They know what patients receive nationally, over time, and there's follow-up.

The point is that they had knowledge of the diagnoses and the therapy. These women that I'm referring to had received chemotherapy and antiemetics, which, of course, is standard of care and has been for decades. These women were followed for the development of metastatic disease and death from 1 year after diagnosis to the end of 2021, which was the duration of this particular report.

During this period of time, of these 13,811 women, 7047 were given aprepitant, which is, interestingly, 51% or about half of the population. Here's the bottom line: Aprepitant use resulted in superior distant disease-free survival, with a hazard ratio of 0.89, and breast cancer-specific survival, with a hazard ratio of 0.83.

Even more interesting, only nonluminal breast cancer showed this benefit, with a hazard ratio of 0.69. Again, that's a hazard ratio for metastatic disease or death of 0.69 if aprepitant was used. It was strongest in triple-negative breast cancer, with a hazard ratio of 0.66. Let me repeat that: a hazard ratio of 0.66 for the reduction in the risk of distant disease or death. This was a difference that could be documented according to whether aprepitant was used or not.

Finally, in this analysis, survival outcomes were not observed with any other class of antiemetics, only aprepitant. In the nonluminal breast cancer population, the longer duration of aprepitant use — presumably multiple cycles over time — was associated with increasingly favorable survival outcomes. This was a trend analysis, so the longer it was used, the more superior the outcomes.

I’m not surprised. To get this paper published in a high-impact journal, the authors had to conclude that clinical trials are required to confirm these findings. Really?

If you're a patient, a family member, or an oncologist caring for a woman with triple-negative breast cancer, you are going to wait for a phase 3, randomized trial to be conducted and reported maybe in 5 or 10 years? When you're talking about a drug that is widely used and is safe, you're going to make a decision to wait for the clinical trial before you conclude that aprepitant should be used in this setting, based upon these excellent data?

I would challenge that and say that, certainly in patients I'm seeing or counseling today, aprepitant should become a component of the standard of care unless there's a contraindication to the use of the drug, based upon these excellent registry and population-based data.

We don't have to wait for randomized phase 3 trials to answer every question if what we see here makes sense, based on a plausible biological explanation and well-analyzed data. Obviously, other databases can look at this and see if they come up with different answers, but we do not need to wait for a phase 3, randomized trial before we incorporate something that we believe the data support as having a favorable impact on the outcome of patients we are seeing today.

I thank you for your attention.

A version of this article first appeared on Medscape.com.


Veteran Suicide Rate Declines Slightly, VA Report Shows


Fewer veterans died by suicide in 2023 than 2022, according to the recently released 2025 National Veteran Suicide Prevention Annual Report from the US Department of Veterans Affairs (VA).

More than half of suicides, the Veterans Health Administration (VHA) found, were driven by pain (52.3%) or sleep problems (51.5%). Increased health problems were factors in 43.1% of cases, particularly traumatic brain injury (TBI) and cancer diagnosis. The suicide rate was 77.6 per 100,000 for veterans with a recent diagnosis of TBI, 94.3% higher than the rate of individuals without such a diagnosis. The suicide rate following a cancer diagnosis was 10.3% higher than for other veterans in VHA care — emphasizing the need, according to the VHA, to continue to expand efforts to integrate suicide prevention resources across all areas serving high-risk veteran groups.

VA has published the National Veteran Suicide Prevention report annually since 2016, with its release typically occurring in December. Release of the 2025 report was delayed until February 2026, which the VA attributed to the federal government shutdown from October 1 to November 12, 2025. At a January 2026 Senate Veterans’ Affairs Committee hearing, VA Secretary Doug Collins denied that there was an effort to halt its release.

Veteran deaths by suicide have often been called an epidemic, with the suicide rate having risen faster for veterans than it has for nonveterans since 2005. Veterans are 1.5 times more likely to die by suicide, a statistic that led Collins, veteran advocates, and members of Congress to identify veteran suicide prevention as a top priority.

The report indicates that the number of veteran suicides per year has remained relatively constant in the 6 most recent years of available data: 6738 in 2018, 6510 in 2019, 6347 in 2020, 6429 in 2021, 6442 in 2022, and 6398 in 2023. The fewest veteran suicides in the last 25 years occurred in both 2001 and 2004 (6021 each), while the most (6738) came in 2018.

Although the overall veteran population has declined over time, more veterans are enrolling in VHA care, increasing from 3.8 million in 2001 to 6.1 million in 2023. However, the VHA found that 61% of veterans who died by suicide in 2023 were not receiving VHA care in the final year of their life.

The suicide rate among veterans in VHA care with mental health or substance use disorder diagnoses fell 34.7%, highlighting “the importance of both strengthening VA’s direct care system and expanding outreach and suicide prevention efforts for veterans who are not engaged in VHA health care,” Sen. Richard Blumenthal (D-CT), Ranking Member on the Senate Veterans’ Affairs Committee, said in a Feb. 5 statement about the report. 

Aligning with previous VA data, the report presented information suggesting VHA services such as the Veterans Crisis Line (VCL) may reduce veteran suicide rates. Twelve months after the first contact with the VCL, the suicide rate for veterans in VHA care in 2022 was 16.1% lower than for those in 2021. 

More than 2800 local and state coalitions are “actively working to meet community needs, expand available resources, and raise awareness” about suicide risks and prevention, the report says. The Staff Sergeant Parker Gordon Fox Suicide Prevention Grants Program, for example, provides community-based services for veterans, service members, and their families.

“Veteran suicide has been a scourge on our nation for far too long,” Collins said in a press release. “Most veterans who die by suicide were not in recent VA care, so making it easier for those who have worn the uniform to access the VA benefits they have earned is key.”



Fibromyalgia-PTSD Link Shows Bidirectional Relationship With Exposure to Combat Environments


Spending time in a war zone can lead to chronic mental and physical pain. Now, research points to a link between two common disorders that can leave service members struggling.

Published in the journal Arthritis Care & Research, a longitudinal cohort study of 1761 US military service members found that those who had posttraumatic stress disorder (PTSD) before deployment were nearly 3 times more likely to develop fibromyalgia after returning home (odds ratio, 2.96; 95% CI, 2.08-4.22). Those with fibromyalgia before deployment were more than 3 times as likely to develop PTSD after deployment (odds ratio, 3.12; 95% CI, 1.63-5.95).

This is the largest prospective study to date linking the stress of combat deployment to the onset of fibromyalgia.

“We had the advantage of observing a large population before and after exposure to an environment that often involves significant stress,” said lead study author Jay Higgs, MD, a retired rheumatologist with Brooke Army Medical Center and the University of Texas Health Science Center at San Antonio.

Here’s what the team found and why it matters.

Significant Increase in Fibromyalgia After Deployment

Service members were checked for fibromyalgia using the 2011 questionnaire modification of the 2010 American College of Rheumatology preliminary diagnostic criteria for fibromyalgia. They were assessed for PTSD using the PTSD Checklist Stressor-Specific Version.

Before deployment, service members had similar rates of fibromyalgia as the general population: 2.2% in men and 2.0% in women. After deployment, fibromyalgia rates increased significantly to 8.0% in men and 11.1% in women.

While fibromyalgia tends to be underreported in men, the findings suggest it should not be overlooked in this population. “Our results are consistent with the notion that there should be no gender bias when considering the possibility of fibromyalgia in an individual patient,” Higgs said.

Before deployment, 20.7% of men and 18.3% of women had PTSD symptoms. After deployment, those rates rose to 22.7% in men and 25.5% in women.

The Link Between Fibromyalgia and PTSD

The researchers said the results suggest that PTSD and fibromyalgia might be linked through central nervous system mechanisms such as central sensitization, elevated hypothalamic-pituitary-adrenal axis activity, elevated cortisol, and proinflammatory cytokines. However, shared causation, associated risk factors, selection bias, or alternative mechanisms within the central and peripheral neuroendocrine and cytokine systems could also be part of the story.

“What we do not know is how much of what we see clinically represents central nervous system pathology, peripheral problems, or a combination of the 2,” Higgs said. “Neurotransmission in the central nervous system is highly complex, and may not only involve specific structures, but a web of communications between them.”

Loci in the midbrain appear especially important, he said.

Elizabeth Hoge, MD, professor and director of the Anxiety Disorders Research Program at Georgetown University School of Medicine, Washington, DC, said that patients with PTSD often have pain, headaches, sleep disturbances, and other symptoms that are part of the picture of fibromyalgia. It’s plausible that pain syndromes could be manifestations of PTSD or groupings of symptoms that suggest a subtype.

“Pain is one way that people experience distress, and we know that in PTSD, sometimes the trauma memories are encoded too strongly, more stressful and more alarming to the body system,” she said.

When patients have symptoms such as chronic pain, headaches, fatigue, or brain fog, clinicians should remember to ask about trauma exposure, Hoge said. You might be the first to broach the subject.

“I’ve certainly seen patients in clinic who never get asked about the exposure to trauma, including sexual trauma, so sometimes that can be the first pathway to helping people feel better is just to have their trauma recognized,” Hoge said.

If a patient has experienced or witnessed violence, consider a referral to a psychiatrist or psychologist to evaluate them for PTSD. Higgs said he collaborated closely with a psychologist to complement his treatment plans for active duty and retired military service members and families.

The US Department of Veterans Affairs and the Department of Defense (DoD) recommend trauma-focused psychotherapy as the first line of treatment for PTSD. This form of therapy deliberately focuses on bringing trauma memories into the open, Hoge said.

“When a person talks about their trauma, and it comes into direct consciousness, somehow it’s malleable, and so when it goes back down into the memory banks, it’s changed somewhat,” she said.

This study was supported by the DoD through awards from the US Army Medical Research and Materiel Command, Congressionally Directed Medical Research Programs, and Psychological Health and Traumatic Brain Injury Research Program. The funding organizations played no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Higgs’s comments are his own and do not necessarily reflect the official policy or position of the Defense Health Agency, Brooke Army Medical Center, Carl R. Darnall Army Medical Center, the DoD, the Department of Veterans Affairs, or any agencies under the US government. Hoge had no relevant disclosures.

A version of this article first appeared on Medscape.com.


Adding Protein EpiScores May Better Predict CRC Survival


DNA methylation-derived biomarkers called Protein EpiScores may improve the accuracy of disease-free and overall survival prediction in patients with colorectal cancer (CRC) compared with traditional clinical risk factors alone, results of a prospective study suggest.

Although Protein EpiScores require further validation before they are ready for clinical use, the present data offer insights into the underlying processes shaping CRC outcomes, lead author Alicia R. Richards, PhD, of Moffitt Cancer Center, Tampa, Florida, and colleagues wrote in Clinical Epigenetics.

“The immediate value of our findings is highlighting biological pathways like immune suppression and coagulation as drivers of poor outcomes,” senior author Jacob K. Kresovich, PhD, of Moffitt Cancer Center, told Medscape Medical News.

What Are Protein EpiScores?

Previous studies have evaluated epigenetic clocks, which are derived from DNA methylation profiles, as markers for CRC risk. However, these clocks cannot pinpoint specific biological drivers of cancer progression, the investigators wrote.

Protein EpiScores may fill this gap; they were developed based on previous work suggesting that DNA methylation profiles may improve disease prediction based on circulating proteins (eg, C-reactive protein) and physiologic traits (eg, smoking status) beyond directly measuring those same variables.

“Protein EpiScores may therefore represent a complementary class of biomarker to direct measurements,” the investigators wrote.

Although Protein EpiScores have helped uncover biological processes driving various conditions such as cardiovascular disease and cancer, this is the first study to evaluate them specifically in the context of cancer survival.

How Did This Study Evaluate Protein EpiScores in Patients With CRC?

The present study involved 136 patients with newly diagnosed CRC from the prospective ColoCare Study.

For each patient, the investigators recorded 107 Protein EpiScores from pretreatment whole blood samples. Disease-free and overall survival were monitored over a median follow-up of 7.3 years and as long as 13.8 years. During follow-up, 26% of patients experienced disease recurrence, and 35% died.

With these data, the investigators compared the predictive power of the Protein EpiScores vs traditional clinical risk factors for disease-free and overall survival. “We used the standard factors doctors routinely collect before treatment starts to assess prognosis, including tumor stage, age at cancer diagnosis, sex, body mass index, race, and tumor location,” Kresovich said. “These are well-established predictors readily available from medical records.”

What Were the Key Findings?

Adding specific Protein EpiScores to the standard clinical risk factors significantly improved prognostic accuracy for survival.

After adjusting for confounding variables, the HCII, VEGFA, CCL17, and LGALS3BP Protein EpiScores were each independently associated with worse disease-free survival, with hazard ratios ranging from 1.62 to 1.71. Adding these scores to the clinical model improved the concordance index (C-index) from 0.64 to 0.70.

The LGALS3BP Protein EpiScore was also independently linked to overall survival, with a hazard ratio of 1.80. Adding this score to the model raised the C-index from 0.70 to 0.75.

Finally, the HCII, LGALS3BP, MMP12, and VEGFA Protein EpiScores were tied to both disease-free and overall survival with hazard ratios above 1.50.

Are These Findings Practice-Changing?

“The improvements [in prognostic accuracy] are modest but potentially meaningful and comparable to gains from other established biomarkers,” Kresovich said. “The 6-point improvement for recurrence (C-index 0.64 to 0.70) resulted in 34% of patients being reclassified into more accurate risk categories.”

In theory, this could have a meaningful clinical impact.

“In cancer care, even incremental gains matter if they prevent undertreating high-risk patients or overtreating low-risk ones,” Kresovich said.

Despite this potential, he was clear that more work is needed.

“If our findings are validated in other epidemiologic settings, these Protein EpiScores could eventually complement existing risk tools, but we’re realistically several years from clinical implementation,” Kresovich said. “We see these current findings more as a research tool that requires validation in larger cohorts before clinical use.”

How Might These Findings Shape Future Research?

Although more studies are needed before clinical rollout, the present findings point to key biological pathways, such as those involving immune suppression and coagulation, which may be driving worse outcomes in patients with CRC.

“This information can guide basic scientists and mechanistic studies to identify potential therapeutic targets,” Kresovich said.

Beyond evaluating Protein EpiScores in larger patient populations, future studies may also need to recruit a more diverse patient population, given the present cohort was 93% White.

Although the investigators noted that “the racial homogeneity reduced potential confounding by ancestry,” they also explained that “Protein EpiScores were developed in European populations, and their translation to individuals with different ancestries has not been closely examined.”

The study was supported by the Miles for Moffitt Team Science Mechanism. The investigators reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections


Display Headline

Adding Protein EpiScores May Better Predict CRC Survival

Mailed Tests Boost Colorectal Screening in Veterans


TOPLINE:

Mailed fecal immunochemical test (FIT) kits with reminder phone calls substantially increased colorectal cancer (CRC) screening among veterans without recent primary care visits. Among 782 veterans in a randomized controlled trial (RCT), mailed FITs resulted in a 26.1% screening completion rate within 6 months, compared with 5.8% for usual care and 7.7% for mailed invitations with reminders. Improving screening in this population may help reduce CRC morbidity and mortality among veterans.

METHODOLOGY:

  • Researchers conducted a 3-arm pragmatic RCT at the US Department of Veterans Affairs (VA) Corporal Michael J. Crescenz VA Medical Center (CMC-VAMC), enrolling veterans aged 50 to 75 years without a primary care visit within 18 months.
  • Participants were randomized 1:1:1 to usual care (n = 260), mailed clinic-based screening invitations with reminder calls (n = 261), or mailed home FIT outreach plus prenotification letter and reminder phone calls (n = 261).
  • Outcome measures included documented completion of CRC screening within 6 months after randomization in the electronic health record (EHR); a secondary outcome was FIT return within 6 months among those mailed FIT.
  • Eligibility and exclusions were based on chart review and EHR criteria (eg, excluding symptoms, family history, inflammatory bowel disease, prior resection, or being current by having undergone a colonoscopy within 10 years, sigmoidoscopy or barium enema within 5 years, or fecal occult blood testing within 1 year).

TAKEAWAY:

  • CRC screening completion within 6 months was 26.1% with mailed FIT vs 5.8% with usual care (RD, 20.3%; 95% CI, 14.3%-26.3%; RR, 4.5; 95% CI, 2.7-7.7; P < .001).
  • CRC screening completion within 6 months was 26.1% with mailed FIT vs 7.7% with mailed invitation plus reminders (RD, 18.4%; 95% CI, 12.2%-24.6%; RR, 3.4; 95% CI, 2.1-5.4; P < .001).
  • Screening completion did not differ significantly between mailed invitation plus reminders (7.7%) and usual care (5.8%; RR, 1.3; P = .39).
  • No statistically significant differences in screening completion were found by age or race/ethnicity, and FIT return in the secondary analysis likewise did not differ by age or race/ethnicity.
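As a sanity check, the reported effect sizes can be reproduced from the arm sizes. The event counts below are back-calculated from the published rates rather than reported directly (26.1% of 261 ≈ 68 mailed-FIT completions; 5.8% of 260 ≈ 15 usual-care completions), and the Wald-style confidence intervals are a standard approximation, not necessarily the method the authors used:

```python
import math

def risk_stats(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference (RD) and risk ratio (RR) with Wald 95% CIs."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se_rd = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    rr = p_a / p_b
    # CI for RR is computed on the log scale, then exponentiated
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return {
        "RD": rd,
        "RD_CI": (rd - z * se_rd, rd + z * se_rd),
        "RR": rr,
        "RR_CI": (math.exp(math.log(rr) - z * se_log_rr),
                  math.exp(math.log(rr) + z * se_log_rr)),
    }

# Assumed counts, back-calculated from the reported percentages
stats = risk_stats(68, 261, 15, 260)
# → RD ≈ 0.203 (95% CI, 0.143-0.263); RR ≈ 4.5 (95% CI, 2.7-7.7)
```

Under these assumed counts, the output matches the published mailed-FIT vs usual-care comparison, which suggests the back-calculation is consistent with the reported data.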

IN PRACTICE:

“This research represents the first pragmatic RCT of mailed FIT outreach screening among veterans who have not recently (18 months) used primary care services offered by the VA. In this work, there were large relative and absolute differences in CRC screening participation rate between veterans offered home FIT screening and those who received usual care (RR = 4.52, RD = 20.2%) or a mailed invitation plus reminders (RR = 3.40, RD = 18.4%),” the authors wrote.

SOURCE:

The study was led by Matthew A. Goldshore, MD, PhD, MPH, of the CMC-VAMC. It was published online in Am J Prev Med.

LIMITATIONS:

The study was not able to identify differences in screening completion or FIT return by patient demographic characteristics such as age and race. The sample was drawn from predominantly male veterans cared for at a single VA medical center, limiting generalizability. Timely evaluation of FIT-positive participants is essential to the success of a mailed FIT intervention; of the 3 FIT-positive participants who required follow-up evaluation, only 1 underwent colonoscopy, highlighting the challenge of completing the FIT-to-colonoscopy pathway among participants who do not regularly use care at the CMC-VAMC.

DISCLOSURES:

This trial received funding from a VA Health Services Research and Development Service award, with E. Carter Paulson, MD, MSCE, and Chyke A. Doubeni, MD, MPH, serving as principal investigators. Chyke A. Doubeni received support from grant number R01CA213645, and Shivan J. Mehta received support from grant number K08CA234326, both from the National Cancer Institute of the National Institutes of Health. The authors reported no financial disclosures.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

