Housing Support May Boost CRC Screening in Vets Experiencing Homelessness
TOPLINE: Among Veterans Health Administration (VHA) patients experiencing homelessness, gaining housing is linked to higher 24-month colorectal cancer (CRC) and breast cancer screening completion. In cohorts of 117,619 veterans eligible for CRC screening and 6517 veterans eligible for breast cancer screening, screening occurred in 36.1% and 47.9%, respectively, after gaining housing vs 18.8% and 23.7% when homelessness persisted.
METHODOLOGY
A retrospective cohort study examined all veterans experiencing homelessness who received care at the VHA from 2011 to 2021 and were eligible for but not up to date on CRC or breast cancer screening.
A total of 117,619 veterans experiencing homelessness who were eligible for but not up to date on CRC screening (aged 50-75 years without prior cancer diagnosis, inflammatory bowel disease, or colectomy) and 6517 veterans experiencing homelessness who were eligible for but not up to date on breast cancer screening (women aged 50-75 years without prior cancer diagnosis, lumpectomy, or mastectomy) were included at their index clinic visit.
Exposure was defined as gaining housing within 24 months following the index clinic visit, identified through the Homeless Screening Clinical Reminder, US Department of Veterans Affairs (VA) Homeless Operations, Management, and Evaluation System assessments, or US Department of Housing and Urban Development-VA Supportive Housing program move-in dates.
The primary outcome was completion of screening for CRC (colonoscopy, flexible sigmoidoscopy, CT colonography, barium enema, or stool-based study) or breast cancer (mammography), performed at a VHA facility or paid for by the VA, within 24 months following the index clinic visit.
TAKEAWAY
Among veterans who gained housing, 36.1% underwent CRC screening and 47.9% underwent breast cancer screening during the 24-month observation period, compared with 18.8% and 23.7%, respectively, among those who remained homeless.
Veterans who gained housing had 2.3 times the adjusted hazard of undergoing CRC screening compared with those who remained homeless (adjusted hazard ratio [aHR], 2.3; 95% CI, 2.2-2.3; P < .001).
Veterans who gained housing had 2.4 times the adjusted hazard of undergoing breast cancer screening compared with those who remained homeless (aHR, 2.4; 95% CI, 2.2-2.7; P < .001).
Median (interquartile range [IQR]) time from index visit to cancer screening was 8 months (4-15) for CRC screening and 8 months (3-14) for breast cancer screening; median (IQR) time from gaining housing to screening was 4 months (1-9) and 3 months (1-8), respectively.
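The absolute gap between groups follows directly from the rates above; a minimal arithmetic check (variable names are illustrative, not from the study):

```python
# Screening completion rates reported above, as percentages.
crc_housed, crc_homeless = 36.1, 18.8          # CRC screening
breast_housed, breast_homeless = 47.9, 23.7    # breast cancer screening

# Absolute between-group differences, in percentage points.
crc_diff = round(crc_housed - crc_homeless, 1)            # 17.3
breast_diff = round(breast_housed - breast_homeless, 1)   # 24.2
```

In absolute terms, gaining housing was associated with roughly 17 and 24 percentage-point higher screening completion for CRC and breast cancer, respectively.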
IN PRACTICE: Veterans experiencing homelessness who gain housing have higher rates of cancer screening. “This finding supports promotion of housing to improve health outcomes for homeless individuals,” the study authors wrote.
SOURCE: The study was led by researchers at the University of California, San Francisco. It was published online in Annals of Family Medicine.
LIMITATIONS: Residual unmeasured confounding was likely due to the observational design of this study, because veterans able to navigate services to obtain housing may also be more likely to complete preventive care. Housing transitions may be misclassified because the Homeless Screening Clinical Reminder was not designed to track changes and may not be administered to veterans already identified as experiencing homelessness. The study did not capture data for screening completed outside VHA or that was not paid for by it. The study cohort only includes veterans with VHA contact, which may limit generalizability.
DISCLOSURES: Benioff Homelessness and Housing Initiative provided grant support for the work; Project Grant K24AG046372 was also awarded to Kushel for the study. Decker is a National Clinician Scholar with salary support from the US Department of Veterans Affairs and reported receiving personal fees from Moon Surgical. Kanzaria and Kushel are faculty members of the Benioff Homelessness and Housing Initiative; Kanzaria also reported advisory work for Amae Health. Kushel is listed as serving on boards including Housing California, National Homelessness Law Center, and Steinberg Institute; other authors reported no conflicts.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
Advanced CTE Associated With Dementia in Veterans Study
A study in veterans has found a link between dementia and severe chronic traumatic encephalopathy (CTE), a degenerative brain disorder diagnosed after death that typically affects contact sports athletes and military personnel. Brain donors with advanced CTE (stage 4) were nearly 4.5 times more likely to have developed dementia than those without CTE. Individuals with stage 3 CTE had more than double the risk of dementia. The study was published in January in Alzheimer's & Dementia.
CTE stages 1 and 2 were not associated with dementia, cognitive impairment, or functional decline. Researchers also did not observe mood or behavioral symptoms at any stage of the disease. Researchers from the Boston University CTE Center and Veterans Affairs Boston Healthcare System (VABHS) led the study, which was funded by grants from the National Institutes of Health (NIH).
“This study proves that CTE is not a benign brain disease and that it has a significant impact on people’s lives,” coauthor Ann C. McKee, MD, chief of neuropathology at VABHS and director of the Boston University CTE Center, told Federal Practitioner.
McKee added that this research “provides evidence of a robust association between CTE and dementia, as well as cognitive symptoms, supporting our suspicions of CTE being a possible cause of dementia.”
Because CTE can only be diagnosed after death, researchers analyzed 614 donated brains from individuals with known exposure to repetitive head impacts. Among these donors, 366 (59.6%) had CTE and 248 (40.4%) did not. Most donors were male (97%), and most had played American football (80.3%); 20 donors (3.3%) were female. The mean age at death was 52 years (range, 13-98 years).
None of the donors had any of the 3 most common neurodegenerative causes of dementia: Alzheimer disease, dementia with Lewy bodies, or frontotemporal lobar degeneration.
Researchers also collected clinical information from individuals close to the donors, typically family members or close contacts, through retrospective evaluations that combined online surveys, telephone interviews, and medical records.
Data collected included demographics; educational attainment; athletic history (including sport, level of play, position, age at first exposure, and duration); military history; traumatic brain injury history; substance use; and medical, social, and family histories.
CTE is often misdiagnosed as Alzheimer disease. In this study, among those diagnosed with dementia, 40% were informed they had Alzheimer disease, yet autopsy findings later showed no evidence of the disease. Another 38% were told the cause of dementia was unknown or could not be specified.
“In cases of dementia, when there is a history of repetitive head impacts from contact sports, military activities, or other exposures, CTE should be considered in the differential diagnosis,” McKee said. “Efforts should be made to distinguish CTE from Alzheimer disease and other causes of dementia during life.”
CTE shares features with Alzheimer disease, specifically the accumulation of abnormal tau protein. In healthy brains, tau helps maintain the stability and proper function of nerve cells. In CTE, however, tau accumulates in small clumps inside nerve cells that eventually form larger tangles.
Normally, the body clears excess tau protein, but in neurodegenerative diseases this process fails. The ensuing buildup damages brain cells, leading to cell death and the progressive symptoms of dementia.
Understanding how brain changes, including those related to CTE, relate to symptoms is of “paramount importance,” said Heather M. Snyder, PhD, senior vice president of medical and scientific relations at the Alzheimer’s Association in Chicago, who was not involved in the study.
Snyder described the research as “the first study to definitely demonstrate that brain changes caused by CTE are associated with the presence of dementia symptoms.” She also noted that the findings suggest a dose-response relationship, with more severe brain changes linked to worse cognitive symptoms.
The findings “open up new paths of research,” Snyder told Federal Practitioner, but also emphasized that improved tools are needed to detect these CTE-related brain changes in living individuals.
“While we have made significant progress in understanding the diseases that cause dementia, we have much to learn,” Snyder said. “Continued and steadfast investment in research remains a priority to improve early detection during life and develop personalized approaches.”
Ann McKee reported that she is a member of the Mackey-White Committee of the National Football League Players Association and received funding from the National Institutes of Health, the US Department of Veterans Affairs, the Buoniconti Foundation, and the MacParkman Foundation during the conduct of the study. She also reported receiving honoraria for speaking engagements.
Heather Snyder is a full-time employee of the Alzheimer’s Association, Chicago, Illinois, and her spouse is employed by Abbott in an unrelated area. She has no financial conflicts to disclose.
Stereotactic Radiation Linked to Better Brain Mets Outcomes
TOPLINE:
In patients with 5-20 brain metastases, stereotactic radiation improved symptoms and reduced interference with daily functioning compared with hippocampal-avoidance whole brain radiation. The weighted composite MD Anderson Symptom Inventory-Brain Tumor score changed from 2.69 to 2.37 with stereotactic radiation vs from 2.29 to 3.03 with hippocampal-avoidance whole brain radiation.
METHODOLOGY:
- Randomized trials have shown that stereotactic radiation preserves neurocognitive function and patient-reported outcomes compared with whole brain radiation in patients with four or fewer brain metastases. For patients with more than four brain metastases, published randomized comparisons of stereotactic radiation vs whole brain radiation were lacking prior to this study.
- Researchers conducted a phase 3, open-label, randomized clinical trial at four US-based centers, enrolling 196 patients between April 2017 and May 2024, with final follow-up in March 2025.
- Participants included patients with 5-20 brain metastases and no prior brain-directed radiation, with a median of 14 brain metastases per patient and 25% having undergone prior neurosurgical resection.
- The primary outcome was the mean weighted patient-reported symptom severity and interference score change over 6 months. The researchers used the MD Anderson Symptom Inventory-Brain Tumor instrument, with scores ranging from 0-10 and change range of -10 to 10, to measure outcomes.
- Stereotactic radiation was delivered in either 1 day (20 Gy) or five daily fractions (30 Gy, or 25 Gy for surgically removed tumors), while hippocampal-avoidance whole brain radiation was administered as 30 Gy in 10 daily fractions with memantine.
TAKEAWAY:
- Primary outcome analysis showed that the weighted composite MD Anderson Symptom Inventory-Brain Tumor score changed from 2.69 to 2.37 (mean change, -0.32) with stereotactic radiation compared with 2.29 to 3.03 (mean change, 0.74) with hippocampal-avoidance whole brain radiation (mean difference, -1.06; 95% CI, -1.54 to -0.58; P < .001).
- Functional independence via the Barthel Index was better in the stereotactic radiation group at 4 months (mean difference, 6.79; 95% CI, 1.19-12.38; P = .02) and 12 months (mean difference, 7.92; 95% CI, 1.34-14.49; P = .02).
- New brain metastases were more frequent with stereotactic radiation (1-year cumulative incidence, 45.4% vs 24.2%; P = .003), while local recurrence was lower (3.2% vs 39.5%; P < .001).
- Grade 3-5 adverse events occurred in 12% of stereotactic radiation patients vs 13% in the hippocampal-avoidance whole brain radiation group, with fatigue being most common (28% vs 44%).
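The reported between-group difference in the primary outcome follows from the start and end scores (lower MD Anderson Symptom Inventory-Brain Tumor scores indicate less symptom burden); a minimal arithmetic sketch with illustrative variable names:

```python
# Group-level composite MDASI-BT scores reported in the primary outcome.
srs_start, srs_end = 2.69, 2.37      # stereotactic radiation arm
wbrt_start, wbrt_end = 2.29, 3.03    # hippocampal-avoidance whole brain radiation arm

srs_change = round(srs_end - srs_start, 2)          # -0.32 (symptoms improved)
wbrt_change = round(wbrt_end - wbrt_start, 2)       # 0.74 (symptoms worsened)

# Difference between arms, matching the reported mean difference of -1.06.
between_group = round(srs_change - wbrt_change, 2)  # -1.06
```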
IN PRACTICE:
“While [the trial] clearly demonstrates that patients with 5-20 brain metastases have improved symptom burden and lowered interference with daily functioning, there are questions that remain for stereotactic radiosurgery in this population. Patients receiving stereotactic radiosurgery for brain metastases have a higher need for future salvage procedures, and this rate of salvage procedures is higher for patients with an increased number of brain metastases at diagnosis… Moreover, it has been shown that the upfront decision between stereotactic radiosurgery and whole brain radiotherapy is the single decision that contributes most to the cost of care of a patient with brain metastases,” said Michael Chan, MD, in an accompanying editorial published in JAMA.
SOURCE:
The study was led by Ayal A. Aizer, MD, MHS, Brigham and Women’s Hospital/Dana-Farber Cancer Institute, Boston. It was published online on February 19 in JAMA.
LIMITATIONS:
According to the authors, the study was not blinded, and the primary outcome was subjective. High mortality limited long-term data collection, reducing precision and biasing outcomes toward survivors. Additionally, randomization was not stratified by treating center, allowing possible unmeasured imbalances. The minimal clinically important difference had not been defined for many study outcome measures.
DISCLOSURES:
The trial was supported by Varian, a Siemens Healthineers Company. Aizer disclosed receiving grants from NH TherAguix Research outside the submitted work. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
TOPLINE:
In patients with 5-20 brain metastases, stereotactic radiation improved symptoms and reduced interference with daily functioning compared with hippocampal-avoidance whole brain radiation. The weighted composite MD Anderson Symptom Inventory-Brain Tumor score changed from 2.69 to 2.37 with stereotactic radiation vs from 2.29 to 3.03 with hippocampal-avoidance whole brain radiation.
METHODOLOGY:
- Randomized trials have shown stereotactic radiation preserves neurocognitive function and patient-reported outcomes compared with whole brain radiation in patients with four or fewer brain metastases. For patients with more than four brain metastases, published randomized comparisons of stereotactic radiation vs whole brain radiation were lacking prior to this study.
- Researchers conducted a phase 3, open-label, randomized clinical trial at four US-based centers, enrolling 196 patients between April 2017 and May 2024, with final follow-up in March 2025.
- Participants included patients with 5-20 brain metastases and no prior brain-directed radiation, with a median of 14 brain metastases per patient and 25% having undergone prior neurosurgical resection.
- The primary outcome was the mean weighted change in patient-reported symptom severity and interference score over 6 months, measured with the MD Anderson Symptom Inventory-Brain Tumor instrument (items scored 0-10, so change scores range from -10 to 10).
- Stereotactic radiation was delivered in either 1 day (20 Gy) or five daily fractions (30 Gy, or 25 Gy for surgically removed tumors), while hippocampal-avoidance whole brain radiation was administered as 30 Gy in 10 daily fractions with memantine.
TAKEAWAY:
- Primary outcome analysis showed that stereotactic radiation was linked to a change in the weighted composite MD Anderson Symptom Inventory-Brain Tumor score from 2.69 to 2.37 (mean change, -0.32) compared with 2.29 to 3.03 (mean change, 0.74) with hippocampal-avoidance whole brain radiation (mean difference, -1.06; 95% CI, -1.54 to -0.58; P < .001).
- Functional independence via the Barthel Index was better in the stereotactic radiation group at 4 months (mean difference, 6.79; 95% CI, 1.19-12.38; P = .02) and 12 months (mean difference, 7.92; 95% CI, 1.34-14.49; P = .02).
- New brain metastases were more frequent with stereotactic radiation (1-year cumulative incidence, 45.4% vs 24.2%; P = .003), while local recurrence was lower (3.2% vs 39.5%; P < .001).
- Grade 3-5 adverse events occurred in 12% of stereotactic radiation patients vs 13% in the hippocampal-avoidance whole brain radiation group, with fatigue being most common (28% vs 44%).
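The primary-endpoint arithmetic in the first bullet can be checked directly from the reported group means. This is only an illustrative check of the unadjusted figures; the trial's published mean difference may come from an adjusted model:

```python
# Weighted MD Anderson Symptom Inventory-Brain Tumor scores reported in the trial
sbrt_change = round(2.37 - 2.69, 2)   # stereotactic radiation: -0.32
wbrt_change = round(3.03 - 2.29, 2)   # hippocampal-avoidance WBRT: +0.74
mean_difference = round(sbrt_change - wbrt_change, 2)
print(sbrt_change, wbrt_change, mean_difference)  # -0.32 0.74 -1.06
```

The difference of the two group mean changes reproduces the reported -1.06 exactly.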
IN PRACTICE:
“While [the trial] clearly demonstrates that patients with 5-20 brain metastases have improved symptom burden and lowered interference with daily functioning, there are questions that remain for stereotactic radiosurgery in this population. Patients receiving stereotactic radiosurgery for brain metastases have a higher need for future salvage procedures, and this rate of salvage procedures is higher for patients with an increased number of brain metastases at diagnosis… Moreover, it has been shown that the upfront decision between stereotactic radiosurgery and whole brain radiotherapy is the single decision that contributes most to the cost of care of a patient with brain metastases,” said Michael Chan, MD, in an accompanying editorial published in JAMA.
SOURCE:
The study was led by Ayal A. Aizer, MD, MHS, Brigham and Women’s Hospital/Dana-Farber Cancer Institute, Boston. It was published online on February 19 in JAMA.
LIMITATIONS:
According to the authors, the study was not blinded, and the primary outcome was subjective. High mortality limited long-term data collection, reducing precision and biasing outcomes toward survivors. Additionally, randomization was not stratified by treating center, allowing possible unmeasured imbalances. The minimal clinically important difference had not been defined for many study outcome measures.
DISCLOSURES:
The trial was supported by Varian, a Siemens Healthineers Company. Aizer disclosed receiving grants from NH TherAguix Research outside the submitted work. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Stereotactic Radiation Linked to Better Brain Mets Outcomes
Unexpected Survival Signal: Aprepitant Use During Chemotherapy Linked to Improved Breast Cancer Outcomes
Transcript generated from video captions.
Hello. I'm Dr Maurie Markman, from City of Hope. I'd like to discuss over the next few minutes an absolutely provocative — and I don't use that term loosely — report that I would humbly suggest may, or perhaps even should, change standard of practice in the care of patients with breast cancer. The paper was published in the Journal of the National Cancer Institute, entitled, “Aprepitant Use During Chemotherapy and Association With Survival in Women With Early Breast Cancer.”
This is a very complex, important, and provocative topic, and I'm only going to have a short time to summarize these results, but again, I would suggest this is a topic worthy of very serious consideration in terms of the implications.
Aprepitant, as many of you know, is a standard antiemetic that has been used for many years. It’s very effective and very well tolerated. There’s not any question about that. It’s a supportive-care medication that may be used or not used; a variety of drugs might be used in its place.
However, there are preclinical data (I cannot go into any kind of detail here) showing that, in these preclinical settings, aprepitant slows breast cancer growth and progression.
What we're looking at in this report is retrospective data linking a nationwide registry of 13,811 women diagnosed with early breast cancer between 2008 and 2020 in Norway. These are population-based data that were very well documented because that's how things work in Scandinavian countries in general, but in Norway in particular. They know what patients receive nationally, over time, and there's follow-up.
The point is that they had knowledge of the diagnoses and the therapy. These women that I'm referring to had received chemotherapy and antiemetics, which, of course, is standard of care and has been for decades. These women were followed for the development of metastatic disease and death from 1 year after diagnosis to the end of 2021, which was the duration of this particular report.
During this period of time, of these 13,811 women, 7047 were given aprepitant, which is, interestingly, 51% or about half of the population. Here's the bottom line: Aprepitant use resulted in superior distant disease-free survival, with a hazard ratio of 0.89, and breast cancer-specific survival, with a hazard ratio of 0.83.
Even more interesting, only nonluminal breast cancer showed this benefit, with a hazard ratio of 0.69. Again, that's a hazard ratio for metastatic disease or death of 0.69 if aprepitant was used. The effect was strongest in triple-negative breast cancer, with a hazard ratio of 0.66. Let me repeat that: a hazard ratio of 0.66 for distant disease or death, corresponding to a 34% relative reduction in risk. This difference could be documented according to whether or not aprepitant was used.
Finally, in this analysis, survival outcomes were not observed with any other class of antiemetics, only aprepitant. In the nonluminal breast cancer population, the longer duration of aprepitant use — presumably multiple cycles over time — was associated with increasingly favorable survival outcomes. This was a trend analysis, so the longer it was used, the more superior the outcomes.
I’m not surprised. To get this paper published in a high-impact journal, the authors had to conclude that clinical trials are required to confirm these findings. Really?
If you're a patient, a family member, or an oncologist caring for a woman with triple-negative breast cancer, you are going to wait for a phase 3, randomized trial to be conducted and reported maybe in 5 or 10 years? When you're talking about a drug that is widely used and is safe, you're going to make a decision to wait for the clinical trial before you conclude that aprepitant should be used in this setting, based upon these excellent data?
I would challenge that and argue that today, certainly in the patients I'm seeing or counseling, aprepitant should become a component of the standard of care unless there's a contraindication to the drug, based upon these excellent registry and population-based data.
We don't have to wait for randomized phase 3 trials to answer every question if what we see here makes sense, based on a plausible biological explanation and well-analyzed data. Obviously, other databases can look at this and see if they come up with different answers, but we do not need to wait for a phase 3, randomized trial before we incorporate something that we believe the data support as having a favorable impact on the outcome of patients we are seeing today.
I thank you for your attention.
A version of this article first appeared on Medscape.com.
Veteran Suicide Rate Declines Slightly, VA Report Shows
Fewer veterans died by suicide in 2023 than 2022, according to the recently released 2025 National Veteran Suicide Prevention Annual Report from the US Department of Veterans Affairs (VA).
More than half of suicides, the Veterans Health Administration (VHA) found, were driven by pain (52.3%) or sleep problems (51.5%). Increased health problems were factors in 43.1% of cases, particularly traumatic brain injury (TBI) and cancer diagnosis. The suicide rate was 77.6 per 100,000 for veterans with a recent diagnosis of TBI, 94.3% higher than the rate of individuals without such a diagnosis. The suicide rate following a cancer diagnosis was 10.3% higher than for other veterans in VHA care—emphasizing the need, according to the VHA, to continue to expand efforts to integrate suicide prevention resources across all areas serving high-risk veteran groups.
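As an illustrative back-calculation (the implied figure below is not quoted in the report), a rate 94.3% above that of the comparison group puts the rate among veterans without a recent TBI diagnosis at roughly 77.6 / 1.943, or about 40 per 100,000:

```python
tbi_rate = 77.6    # per 100,000, veterans with a recent TBI diagnosis
excess = 0.943     # reported as 94.3% higher than veterans without one
implied_comparison_rate = tbi_rate / (1 + excess)
print(round(implied_comparison_rate, 1))  # ≈ 39.9 per 100,000
```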
VA has published the National Veteran Suicide Prevention report annually since 2016, typically releasing it in December. Release of the 2025 report was delayed until February 2026; the VA attributed the delay to the federal government shutdown from October 1 to November 12, 2025. At a January 2026 Senate Veterans’ Affairs Committee hearing, VA Secretary Doug Collins denied that there was an effort to halt its release.
Veteran deaths by suicide have often been called an epidemic, with the suicide rate having risen faster for veterans than it has for nonveterans since 2005. Veterans are 1.5 times more likely to die by suicide, a statistic that led Collins, veteran advocates, and members of Congress to identify veteran suicide prevention as a top priority.
The report indicates that the number of veteran suicides per year has remained relatively constant in the 6 most recent years of available data: 6738 in 2018, 6510 in 2019, 6347 in 2020, 6429 in 2021, 6442 in 2022, and 6398 in 2023. The fewest veteran suicides in the last 25 years occurred in both 2001 and 2004 (6021 each), while the most (6738) came in 2018.
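A quick tabulation of the yearly counts above shows how narrow the recent range is (an illustration from the reported numbers, not an analysis in the report itself):

```python
# Veteran suicide counts by year, as reported
deaths = {2018: 6738, 2019: 6510, 2020: 6347, 2021: 6429, 2022: 6442, 2023: 6398}
spread = max(deaths.values()) - min(deaths.values())          # 391 deaths
pct_spread = round(spread / min(deaths.values()) * 100, 1)    # ≈ 6.2% of the 2020 low
change_2023 = deaths[2023] - deaths[2022]                     # -44 vs 2022
print(spread, pct_spread, change_2023)  # 391 6.2 -44
```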
Although the overall veteran population has declined over time, more veterans are enrolling in VHA care, increasing from 3.8 million in 2001 to 6.1 million in 2023. However, the VHA found that 61% of veterans who died by suicide in 2023 were not receiving VHA care in the final year of life.
The suicide rate among veterans in VHA care with mental health or substance use disorder diagnoses fell 34.7%, highlighting “the importance of both strengthening VA’s direct care system and expanding outreach and suicide prevention efforts for veterans who are not engaged in VHA health care,” Sen. Richard Blumenthal (D-CT), Ranking Member on the Senate Veterans’ Affairs Committee, said in a Feb. 5 statement about the report.
Aligning with previous VA data, the report presented information suggesting VHA services such as the Veterans Crisis Line (VCL) may reduce veteran suicide rates. Twelve months after the first contact with the VCL, the suicide rate for veterans in VHA care in 2022 was 16.1% lower than for those in 2021.
More than 2800 local and state coalitions are “actively working to meet community needs, expand available resources, and raise awareness” about suicide risks and prevention, the report says. The Staff Sergeant Parker Gordon Fox Suicide Prevention Grants Program, for example, provides community-based services for veterans, service members, and their families.
“Veteran suicide has been a scourge on our nation for far too long,” Collins said in a press release. “Most veterans who die by suicide were not in recent VA care, so making it easier for those who have worn the uniform to access the VA benefits they have earned is key.”
Fewer veterans died by suicide in 2023 than 2022, according to the recently released 2025 National Veteran Suicide Prevention Annual Report from the US Department of Veterans Affairs (VA).
More than half of suicides, the Veterans Health Administration (VHA) found, were driven by pain (52.3%) or sleep problems (51.5%). Increased health problems were factors in 43.1% of cases, particularly traumatic brain injury (TBI) and cancer diagnosis. The suicide rate was 77.6 per 100,000 for veterans with a recent diagnosis of TBI, 94.3% higher than the rate of individuals without such a diagnosis. The suicide rate following a cancer diagnosis was 10.3% higher than for other veterans in VHA care—emphasizing the need, according to the VHA, to continue to expand efforts to integrate suicide prevention resources across all areas serving high-risk veteran groups.
VA has published the National Veteran Suicide Prevention report annually since 2016, with its release typically occurring in December. Release of the 2025 report was delayed until February 2026. The VA attributed the delay, however, due to the federal government shutdown from October 1 to November 12, 2025. At a January 2026 Senate Veterans’ Affairs Committee hearing, VA Secretary Doug Collins denied that there was an effort to halt its release.
Veteran deaths by suicide have often been called an epidemic, with the suicide rate having risen faster for veterans than it has for nonveterans since 2005. Veterans are 1.5 times more likely to die by suicide, a statistic that led Collins, veteran advocates, and members of Congress to identify veteran suicide prevention as a top priority.
The report indicates that the number of veteran suicides per year has remained relatively constant in the 6 most recent years of available data: 6738 in 2018, 6510 in 2019, 6347 in 2020, 6429 in 2021, 6442 in 2022, and 6398 in 2023. The fewest veteran suicides in the last 25 years happened both in 2001 and 2004 (6021), while the most (6738) came in 2018.
Although the overall veteran population has declined over time, more veterans are enrolling in VHA care, increasing from 3.8 million in 2001 to 6.1 million in 2023. However, the VHA found that 61% of veterans who died by suicide in 2023 were not receiving VHA care in the final year of their life.
The suicide rate among veterans in VHA care with mental health or substance use disorder diagnoses fell 34.7%, highlighting “the importance of both strengthening VA’s direct care system and expanding outreach and suicide prevention efforts for veterans who are not engaged in VHA health care,” Sen. Richard Blumenthal (D-CT), Ranking Member on the Senate Veterans’ Affairs Committee, said in a Feb. 5 statement about the report.
Aligning with previous VA data, the report presented information suggesting VHA services such as the Veterans Crisis Line (VCL) may reduce veteran suicide rates. Twelve months after the first contact with the VCL, the suicide rate for veterans in VHA care in 2022 was 16.1% lower than for those in 2021.
More than 2800 local and state coalitions are “actively working to meet community needs, expand available resources, and raise awareness” about suicide risks and prevention, the report says. The Staff Sergeant Parker Gordon Fox Suicide Prevention Grants Program, for example, provides community-based services for veterans, service members, and their families.
“Veteran suicide has been a scourge on our nation for far too long,” Collins said in a press release. “Most veterans who die by suicide were not in recent VA care, so making it easier for those who have worn the uniform to access the VA benefits they have earned is key.”
Fewer veterans died by suicide in 2023 than 2022, according to the recently released 2025 National Veteran Suicide Prevention Annual Report from the US Department of Veterans Affairs (VA).
More than half of suicides, the Veterans Health Administration (VHA) found, involved pain (52.3%) or sleep problems (51.5%) as contributing factors. Increased health problems were factors in 43.1% of cases, particularly traumatic brain injury (TBI) and cancer diagnosis. The suicide rate was 77.6 per 100,000 for veterans with a recent diagnosis of TBI, 94.3% higher than the rate among veterans without such a diagnosis. The suicide rate following a cancer diagnosis was 10.3% higher than for other veterans in VHA care—emphasizing the need, according to the VHA, to continue to expand efforts to integrate suicide prevention resources across all areas serving high-risk veteran groups.
VA has published the National Veteran Suicide Prevention report annually since 2016, with its release typically occurring in December. Release of the 2025 report was delayed until February 2026. The VA attributed the delay to the federal government shutdown from October 1 to November 12, 2025. At a January 2026 Senate Veterans’ Affairs Committee hearing, VA Secretary Doug Collins denied that there was an effort to halt its release.
Veteran deaths by suicide have often been called an epidemic, with the suicide rate having risen faster for veterans than it has for nonveterans since 2005. Veterans are 1.5 times more likely to die by suicide, a statistic that led Collins, veteran advocates, and members of Congress to identify veteran suicide prevention as a top priority.
The report indicates that the number of veteran suicides per year has remained relatively constant in the 6 most recent years of available data: 6738 in 2018, 6510 in 2019, 6347 in 2020, 6429 in 2021, 6442 in 2022, and 6398 in 2023. The fewest veteran suicides in the past 25 years occurred in 2001 and 2004 (6021 each), while the most (6738) came in 2018.
Although the overall veteran population has declined over time, more veterans are enrolling in VHA care, increasing from 3.8 million in 2001 to 6.1 million in 2023. However, the VHA found that 61% of veterans who died by suicide in 2023 were not receiving VHA care in the final year of their life.
The suicide rate among veterans in VHA care with mental health or substance use disorder diagnoses fell 34.7%, highlighting “the importance of both strengthening VA’s direct care system and expanding outreach and suicide prevention efforts for veterans who are not engaged in VHA health care,” Sen. Richard Blumenthal (D-CT), Ranking Member on the Senate Veterans’ Affairs Committee, said in a Feb. 5 statement about the report.
Aligning with previous VA data, the report presented information suggesting VHA services such as the Veterans Crisis Line (VCL) may reduce veteran suicide rates. Among veterans in VHA care, the suicide rate in the 12 months following first contact with the VCL was 16.1% lower in 2022 than in 2021.
More than 2800 local and state coalitions are “actively working to meet community needs, expand available resources, and raise awareness” about suicide risks and prevention, the report says. The Staff Sergeant Parker Gordon Fox Suicide Prevention Grants Program, for example, provides community-based services for veterans, service members, and their families.
“Veteran suicide has been a scourge on our nation for far too long,” Collins said in a press release. “Most veterans who die by suicide were not in recent VA care, so making it easier for those who have worn the uniform to access the VA benefits they have earned is key.”
Veteran Suicide Rate Declines Slightly, VA Report Shows
Fibromyalgia-PTSD Link Shows Bidirectional Relationship With Exposure to Combat Environments
Spending time in a war zone can lead to chronic mental and physical pain. Now, research points to a link between two common disorders that can leave service members struggling.
Published in the journal Arthritis Care & Research, a longitudinal cohort study of 1761 US military service members found that those who had posttraumatic stress disorder (PTSD) before deployment were nearly 3 times more likely to develop fibromyalgia after returning home (odds ratio, 2.96; 95% CI, 2.08-4.22). Those with fibromyalgia before deployment had a more than threefold greater likelihood of developing PTSD after deployment (odds ratio, 3.12; 95% CI, 1.63-5.95).
This is the largest prospective study to date linking the stress of combat deployment to the onset of fibromyalgia.
“We had the advantage of observing a large population before and after exposure to an environment that often involves significant stress,” said lead study author Jay Higgs, MD, a retired rheumatologist with Brooke Army Medical Center and the University of Texas Health Science Center at San Antonio.
Here’s what the team found and why it matters.
Significant Increase in Fibromyalgia After Deployment
Service members were screened for fibromyalgia using the 2011 questionnaire modification of the 2010 American College of Rheumatology preliminary diagnostic criteria for fibromyalgia. They were assessed for PTSD using the PTSD Checklist Stressor-Specific Version.
Before deployment, service members had similar rates of fibromyalgia as the general population: 2.2% in men and 2.0% in women. After deployment, fibromyalgia rates increased significantly to 8.0% in men and 11.1% in women.
While fibromyalgia tends to be underreported in men, the findings suggest it should not be overlooked in this population. “Our results are consistent with the notion that there should be no gender bias when considering the possibility of fibromyalgia in an individual patient,” Higgs said.
Before deployment, 20.7% of men and 18.3% of women had PTSD symptoms. After deployment, PTSD rates increased to 22.7% in men and 25.5% in women.
The Link Between Fibromyalgia and PTSD
The researchers said the results suggest that PTSD and fibromyalgia might be linked through central nervous system mechanisms such as central sensitization, elevated hypothalamic-pituitary-adrenal axis activity, elevated cortisol, and proinflammatory cytokines. However, shared causation, associated risk factors, selection bias, or alternative mechanisms within the central and peripheral neuroendocrine and cytokine systems could also be part of the story.
“What we do not know is how much of what we see clinically represents central nervous system pathology, peripheral problems, or a combination of the 2,” Higgs said. “Neurotransmission in the central nervous system is highly complex, and may not only involve specific structures, but a web of communications between them.”
Loci in the midbrain appear especially important, he said.
Elizabeth Hoge, MD, professor and director of the Anxiety Disorders Research Program at Georgetown University School of Medicine, Washington, DC, said that patients with PTSD often have pain, headaches, sleep disturbances, and other symptoms that are part of the picture of fibromyalgia. It’s plausible that pain syndromes could be manifestations of PTSD or groupings of symptoms that suggest a subtype.
“Pain is one way that people experience distress, and we know that in PTSD, sometimes the trauma memories are encoded too strongly, more stressful and more alarming to the body system,” she said.
When patients have symptoms such as chronic pain, headaches, fatigue, or cognitive brain fog, clinicians should remember to ask about trauma exposure, Hoge said. You might be the first to broach the subject.
“I’ve certainly seen patients in clinic who never get asked about the exposure to trauma, including sexual trauma, so sometimes that can be the first pathway to helping people feel better is just to have their trauma recognized,” Hoge said.
If a patient has experienced or witnessed violence, consider a referral to a psychiatrist or psychologist to evaluate them for PTSD. Higgs said he collaborated closely with a psychologist to complement his treatment plans for active duty and retired military service members and families.
The US Department of Veterans Affairs and the Department of Defense (DoD) recommend trauma-focused psychotherapy as the first line of treatment for PTSD. This form of therapy deliberately focuses on bringing trauma memories into the open, Hoge said.
“When a person talks about their trauma, and it comes into direct consciousness, somehow it’s malleable, and so when it goes back down into the memory banks, it’s changed somewhat,” she said.
This study was supported by the DoD through awards from the US Army Medical Research and Materiel Command, Congressionally Directed Medical Research Programs, and Psychological Health and Traumatic Brain Injury Research Program. The funding organizations played no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Higgs’s comments are his own and do not necessarily reflect the official policy or position of the Defense Health Agency, Brooke Army Medical Center, Carl R. Darnall Army Medical Center, the DoD, the Department of Veterans Affairs, or any agencies under the US government. Hoge had no relevant disclosures.
A version of this article first appeared on Medscape.com.
Adding Protein EpiScores May Better Predict CRC Survival
DNA methylation-derived biomarkers called Protein EpiScores may improve the accuracy of disease-free and overall survival prediction in patients with colorectal cancer (CRC), compared with traditional clinical risk factors alone, results of a prospective study suggest.
Although Protein EpiScores require further validation before they are ready for clinical use, the present data offer insights into the underlying processes shaping CRC outcomes, lead author Alicia R. Richards, PhD, of Moffitt Cancer Center, Tampa, Florida, and colleagues wrote in Clinical Epigenetics.
“The immediate value of our findings is highlighting biological pathways like immune suppression and coagulation as drivers of poor outcomes,” senior author Jacob K. Kresovich, PhD, of Moffitt Cancer Center, told Medscape Medical News.
What Are Protein EpiScores?
Previous studies have evaluated epigenetic clocks, which are derived from DNA methylation profiles, as markers for CRC risk. However, these clocks cannot pinpoint specific biological drivers of cancer progression, the investigators wrote.
Protein EpiScores may fill this gap. They were developed from previous work suggesting that DNA methylation profiles can improve prediction of disease tied to circulating proteins (eg, C-reactive protein) and physiologic traits (eg, smoking status) beyond what direct measurement of those same variables provides.
“Protein EpiScores may therefore represent a complementary class of biomarker to direct measurements,” the investigators wrote.
Although Protein EpiScores have helped uncover biological processes driving various conditions such as cardiovascular disease and cancer, this is the first study to evaluate them specifically in the context of cancer survival.
How Did This Study Evaluate Protein EpiScores in Patients With CRC?
The present study involved 136 patients with newly diagnosed CRC from the prospective ColoCare Study.
For each patient, the investigators recorded 107 Protein EpiScores from pretreatment whole blood samples. Disease-free and overall survival were monitored over a median follow-up of 7.3 years and as long as 13.8 years. During follow-up, 26% of patients experienced disease recurrence, and 35% died.
With these data, the investigators compared the predictive power of the Protein EpiScores vs traditional clinical risk factors for disease-free and overall survival. “We used the standard factors doctors routinely collect before treatment starts to assess prognosis, including tumor stage, age at cancer diagnosis, sex, body mass index, race, and tumor location,” Kresovich said. “These are well-established predictors readily available from medical records.”
What Were the Key Findings?
Adding specific Protein EpiScores to the standard clinical risk factors significantly improved prognostic accuracy for survival.
After adjusting for confounding variables, the HCII, VEGFA, CCL17, and LGALS3BP Protein EpiScores were each independently associated with worse disease-free survival, with hazard ratios ranging from 1.62 to 1.71. Adding these scores to the clinical model improved the concordance index (C-index) from 0.64 to 0.70.
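For readers unfamiliar with the metric, the concordance index (C-index) is the fraction of comparable patient pairs in which the model assigns the higher risk score to the patient whose event occurs first; 0.5 is chance, 1.0 is perfect discrimination. The following is a minimal illustrative sketch with made-up toy numbers, not study data:

```python
# Illustrative only: Harrell's C-index, the discrimination metric
# the study uses to compare prognostic models. Toy data, not study data.

def c_index(times, events, risk):
    """Fraction of comparable pairs in which the higher-risk patient
    has the earlier observed event (risk ties count as half)."""
    conc = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable when patient i has an observed event
            # strictly before patient j's follow-up time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / comparable

# toy cohort: follow-up time (years), recurrence observed (1/0), risk score
times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1,   1,   0,   1,   0]
risk   = [0.9, 0.4, 0.7, 0.5, 0.2]

print(c_index(times, events, risk))  # → 0.75
```

A model whose added biomarkers raise the C-index from 0.64 to 0.70, as reported here, correctly orders a larger share of such pairs.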
The LGALS3BP Protein EpiScore was also independently linked to overall survival, with a hazard ratio of 1.80. Adding this score to the model raised the C-index from 0.70 to 0.75.
Finally, the HCII, LGALS3BP, MMP12, and VEGFA Protein EpiScores were tied to both disease-free and overall survival with hazard ratios above 1.50.
Are These Findings Practice-Changing?
“The improvements [in prognostic accuracy] are modest but potentially meaningful and comparable to gains from other established biomarkers,” Kresovich said. “The 6-point improvement for recurrence (C-index 0.64 to 0.70) resulted in 34% of patients being reclassified into more accurate risk categories.”
In theory, this could have a meaningful clinical impact.
“In cancer care, even incremental gains matter if they prevent undertreating high-risk patients or overtreating low-risk ones,” Kresovich said.
Despite this potential, he was clear that more work is needed.
“If our findings are validated in other epidemiologic settings, these Protein EpiScores could eventually complement existing risk tools, but we’re realistically several years from clinical implementation,” Kresovich said. “We see these current findings more as a research tool that requires validation in larger cohorts before clinical use.”
How Might These Findings Shape Future Research?
Although more studies are needed before clinical rollout, the present findings point to key biological pathways, such as those involving immune suppression and coagulation, which may be driving worse outcomes in patients with CRC.
“This information can guide basic scientists and mechanistic studies to identify potential therapeutic targets,” Kresovich said.
Beyond evaluating Protein EpiScores in larger patient populations, future studies may also need to recruit more diverse cohorts, given that the present cohort was 93% White.
Although the investigators noted that “the racial homogeneity reduced potential confounding by ancestry,” they also explained that “Protein EpiScores were developed in European populations, and their translation to individuals with different ancestries has not been closely examined.”
The study was supported by the Miles for Moffitt Team Science Mechanism. The investigators reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
With these data, the investigators compared the predictive power of the Protein EpiScores vs traditional clinical risk factors for disease-free and overall survival. “We used the standard factors doctors routinely collect before treatment starts to assess prognosis, including tumor stage, age at cancer diagnosis, sex, body mass index, race, and tumor location,” Kresovich said. “These are well-established predictors readily available from medical records.”
What Were the Key Findings?
Adding specific Protein EpiScores to the standard clinical risk factors significantly improved prognostic accuracy for survival.
After adjusting for confounding variables, the HCII, VEGFA, CCL17, and LGALS3BP Protein EpiScores were each independently associated with worse disease-free survival, with hazard ratios ranging from 1.62 to 1.71. Adding these scores to the clinical model improved the concordance index (C-index) from 0.64 to 0.70.
The LGALS3BP Protein EpiScore was also independently linked to overall survival, with a hazard ratio of 1.80. Adding this score to the model raised the C-index from 0.70 to 0.75.
Finally, the HCII, LGALS3BP, MMP12, and VEGFA Protein EpiScores were tied to both disease-free and overall survival with hazard ratios above 1.50.
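The concordance index (C-index) cited above is the probability that, for a random comparable pair of patients, the model assigns the higher risk score to the patient whose event occurs first. A minimal sketch of Harrell's C on toy, uncensored data (the times and risk scores below are hypothetical, not study data):

```python
from itertools import combinations

def harrell_c(times, risk_scores):
    """Harrell's concordance index for uncensored survival data:
    the fraction of comparable patient pairs in which the patient
    with the higher predicted risk actually failed earlier."""
    concordant = 0
    tied = 0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # tied event times are not comparable here
        comparable += 1
        # identify which member of the pair failed earlier
        early, late = (i, j) if times[i] < times[j] else (j, i)
        if risk_scores[early] > risk_scores[late]:
            concordant += 1
        elif risk_scores[early] == risk_scores[late]:
            tied += 1  # tied scores count half
    return (concordant + 0.5 * tied) / comparable

# Hypothetical years to recurrence and pretreatment model risk scores
times = [1.0, 2.5, 3.0, 4.2, 7.3]
scores = [0.9, 0.8, 0.4, 0.5, 0.1]
print(round(harrell_c(times, scores), 2))  # → 0.9
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which is why moving from 0.64 to 0.70 is a meaningful, if modest, gain.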
Are These Findings Practice-Changing?
“The improvements [in prognostic accuracy] are modest but potentially meaningful and comparable to gains from other established biomarkers,” Kresovich said. “The 6-point improvement for recurrence (C-index 0.64 to 0.70) resulted in 34% of patients being reclassified into more accurate risk categories.”
In theory, this could have a meaningful clinical impact.
“In cancer care, even incremental gains matter if they prevent undertreating high-risk patients or overtreating low-risk ones,” Kresovich said.
Despite this potential, he was clear that more work is needed.
“If our findings are validated in other epidemiologic settings, these Protein EpiScores could eventually complement existing risk tools, but we’re realistically several years from clinical implementation,” Kresovich said. “We see these current findings more as a research tool that requires validation in larger cohorts before clinical use.”
How Might These Findings Shape Future Research?
Although more studies are needed before clinical rollout, the present findings point to key biological pathways, such as those involving immune suppression and coagulation, which may be driving worse outcomes in patients with CRC.
“This information can guide basic scientists and mechanistic studies to identify potential therapeutic targets,” Kresovich said.
Beyond evaluating Protein EpiScores in larger patient populations, future studies may also need to recruit more diverse cohorts, given that the present cohort was 93% White.
Although the investigators noted that “the racial homogeneity reduced potential confounding by ancestry,” they also explained that “Protein EpiScores were developed in European populations, and their translation to individuals with different ancestries has not been closely examined.”
The study was supported by the Miles for Moffitt Team Science Mechanism. The investigators reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Adding Protein EpiScores May Better Predict CRC Survival
Mailed Tests Boost Colorectal Screening in Veterans
TOPLINE:
Mailed fecal immunochemical test (FIT) kits with reminder phone calls promote colorectal cancer (CRC) screening among veterans without recent primary care visits. Among 782 veterans in a randomized controlled trial (RCT), mailed FITs resulted in a 26.1% screening completion rate within 6 months, compared with 5.8% for usual care and 7.7% for mailed invitations with reminders. Improving screening in this population may help reduce CRC morbidity and mortality among veterans.
METHODOLOGY:
- Researchers conducted a 3-arm pragmatic RCT at the US Department of Veterans Affairs (VA) Corporal Michael J. Crescenz VA Medical Center (CMC-VAMC), enrolling veterans aged 50 to 75 years without a primary care visit within 18 months.
- Participants were randomized 1:1:1 to usual care (n = 260), mailed clinic-based screening invitations with reminder calls (n = 261), or mailed home FIT outreach plus prenotification letter and reminder phone calls (n = 261).
- Outcome measures included documented completion of CRC screening within 6 months after randomization in the electronic health record (EHR); a secondary outcome was FIT return within 6 months among those mailed FIT.
- Eligibility and exclusions were based on chart review and EHR criteria (eg, excluding symptoms, family history, inflammatory bowel disease, prior resection, or being current by having undergone a colonoscopy within 10 years, sigmoidoscopy or barium enema within 5 years, or fecal occult blood testing within 1 year).
TAKEAWAY:
- CRC screening completion within 6 months was 26.1% with mailed FIT vs 5.8% with usual care (RD, 20.3%; 95% CI, 14.3%-26.3%; RR, 4.5; 95% CI, 2.7-7.7; P < .001).
- CRC screening completion within 6 months was 26.1% with mailed FIT vs 7.7% with mailed invitation plus reminders (RD, 18.4%; 95% CI, 12.2%-24.6%; RR, 3.4; 95% CI, 2.1-5.4; P < .001).
- Screening completion did not differ significantly between mailed invitation plus reminders (7.7%) and usual care (5.8%) (RR, 1.3; P = .39).
- No statistically significant differences in screening completion were reported by age or race/ethnicity, and investigators also reported no significant differences in FIT return by age or race/ethnicity in the secondary analysis.
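The risk difference and relative risk above can be reproduced approximately from the arm sizes. A minimal sketch, assuming event counts back-calculated from the reported rates (about 68 of 261 in the mailed-FIT arm and 15 of 260 under usual care) and the standard log-RR confidence interval:

```python
import math

def risk_comparison(events1, n1, events0, n0, z=1.96):
    """Risk difference and relative risk for two independent
    proportions, with a 95% CI from the log-RR standard error."""
    p1, p0 = events1 / n1, events0 / n0
    rd = p1 - p0            # absolute risk difference
    rr = p1 / p0            # relative risk
    se_log_rr = math.sqrt((1 - p1) / events1 + (1 - p0) / events0)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rd, rr, (lo, hi)

# Event counts are approximations inferred from the published rates
rd, rr, ci = risk_comparison(68, 261, 15, 260)
print(f"RD {rd:.1%}, RR {rr:.1f}, 95% CI {ci[0]:.1f}-{ci[1]:.1f}")
# → RD 20.3%, RR 4.5, 95% CI 2.7-7.7
```

The recovered interval matches the published one, which supports the inferred counts, though the trial's own analysis may have used a different CI method.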
IN PRACTICE:
“This research represents the first pragmatic RCT of mailed FIT outreach screening among veterans who have not recently (18 months) used primary care services offered by the VA. In this work, there were large relative, and absolute differences in CRC screening participation rate between veterans offered home FIT screening and those who received usual care (RR = 4.52, RD = 20.2%) or a mailed invitation plus reminders (RR = 3.40, RD = 18.4%),” the authors wrote.
SOURCE:
The study was led by Matthew A. Goldshore, MD, PhD, MPH, of the CMC-VAMC. It was published online in Am J Prev Med.
LIMITATIONS:
The study was not able to identify differences in screening completion or FIT return by patient demographic characteristics such as age and race. The sample was drawn from predominantly male veterans cared for at a single VA medical center, limiting generalizability. Follow-up evaluation of FIT-positive participants is needed for a mailed FIT intervention to succeed; of the 3 FIT-positive participants who should have received follow-up evaluation, only 1 underwent colonoscopy, highlighting the challenge of FIT-to-colonoscopy follow-up among participants who do not regularly use care at the CMC-VAMC.
DISCLOSURES:
This trial received funding from a VA Health Services Research and Development Service award, with E. Carter Paulson, MD, MSCE, and Chyke A. Doubeni, MD, MPH, serving as principal investigators. Doubeni received support from grant R01CA213645, and Shivan J. Mehta received support from grant K08CA234326, both from the National Cancer Institute of the National Institutes of Health. The authors reported no financial disclosures.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
Social Challenges Linked to More Suicidality in Vets
Veterans experiencing unstable housing, financial strain, and poor access to health care have a higher risk of suicidal thoughts and behaviors, according to a new study, leading researchers to call for additional screening to identify those in jeopardy.
Each incremental increase in social disadvantage was tied to greater likelihood of recent suicidal thoughts (odds ratio [OR], 2.14), future suicidal intent (OR, 2.21), and lifetime suicide attempt (OR, 1.78) in a weighted analysis. The self-reported data were published as a cross-sectional study by Pietrzak et al in the December 2025 issue of JAMA Psychiatry.
Veterans whose social disadvantage ranked in the worst 5% were > 20 times more likely to report suicidal thoughts and behaviors than those in the top 5%. Especially striking were the magnitudes of the associations and their persistence after adjustment for psychiatric conditions and other suicide risk factors, lead author Robert H. Pietrzak, PhD, MPH, said in an interview with Federal Practitioner.
“This finding highlights how extreme cumulative disadvantage can be overwhelming,” Pietrzak said. “It suggests that suicide risk among veterans increases dramatically when multiple social stressors cluster together. Rather than any single hardship driving risk, it is the cumulative impact of social disadvantage that appears most strongly linked to elevated suicide risk.”
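The clustering effect can be illustrated arithmetically: under a logistic model, log-odds add, so k unit increases in a cumulative disadvantage score multiply the odds by OR to the power k. This is a hedged sketch of that compounding, not the study's actual percentile-group comparison:

```python
def compounded_or(per_increment_or, increments):
    """Illustrative only: in a logistic model, each unit increase in
    a risk score adds log(OR) to the log-odds, so k increments
    multiply the odds by OR ** k."""
    return per_increment_or ** increments

# Using the reported per-increment OR of 2.14 for recent suicidal thoughts
for k in range(1, 5):
    print(k, round(compounded_or(2.14, k), 1))
```

Four increments already push the odds past 20-fold (2.14 ** 4 is roughly 21), consistent with the order of magnitude of the worst-5% vs top-5% contrast, though the study estimated that contrast directly rather than by extrapolation.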
As Pietrzak explained, veterans account for < 7% of total US adults but about 14% of suicide deaths. “Several factors may contribute to this difference, including higher exposure to trauma, elevated rates of psychiatric conditions, challenges with reintegration into civilian life, and structural barriers to care,” Pietrzak said. “Increasingly, social and economic stressors are also recognized by experts and researchers as critical contributors to suicide risk.”
Social determinants of health (SDOH) such as unemployment and lack of access to health care have also been linked to suicide risk, he said.
“Less well understood is how multiple adverse social conditions interact and accumulate to compound suicide risk,” Pietrzak said.
The new study sought to determine the impact of SDOH as a whole, not just in isolation. The study analyzed SDOH in 5 areas—education access and quality, economic conditions, health care access and quality, neighborhood and built environment, and social and community context—via the National Health and Resilience in Veterans Study, which surveyed 4069 veterans. The participants had weighted demographics of mean age 62.2 years; 90.2% were male; and 78.1% White, 11.2% Black, 6.6% Hispanic, 4.2% other.
Past-year suicidal ideation was most strongly linked to psychosocial difficulties (OR, 1.58; 95% CI, 1.43-1.75). Future suicidal intent was most strongly linked to residing in a mobile home, recreational vehicle, or van (OR, 1.60; 95% CI, 1.24-2.07) in addition to psychosocial difficulties (OR, 1.45; 95% CI, 1.18-1.80). Lifetime suicide attempt was most strongly linked to a history of homelessness (OR, 1.37; 95% CI, 1.22-1.55; all P < .001).
“The results of our study underscore the importance of routine, standardized screening for cumulative social disadvantage within VA and community care settings that serve veterans,” Pietrzak said.
He added that findings make it clear that “suicide prevention extends beyond mental health care. Improving the social conditions in which veterans live, work, and age is not only good public policy. It may save lives.”
Mark S. Kaplan, DrPH, a research professor of social welfare at the University of California at Los Angeles Luskin School of Public Affairs, is familiar with the study findings and said they highlight the need to “approach the question of suicide in much wider terms as opposed to reducing it to psychiatric traits.”
J. John Mann, MD, a professor of translational neuroscience in psychiatry and radiology who studies suicide at Columbia University, New York City, said the study’s findings illustrate that clinicians must do more to understand the lives of patients outside the examination room. He predicted that more screening for social determinants of health will “enrich the amount of information that the clinician will have and lead to a more comprehensive clinical care plan.”
The US Department of Veterans Affairs supported the study. Pietrzak has no disclosures. Other study authors report various disclosures.
Indian Affairs Staffing Fell 11% in 2025
The US Department of the Interior Bureau of Indian Affairs (BIA) workforce shrank 11% through the first 6 months of 2025, a result of executive orders, hiring freezes, a voluntary deferred resignation program (DRP), and terminations of probationary employees, according to a recent US Government Accountability Office (GAO) report. Though these reductions are complete, GAO said it has not yet analyzed projected cost savings or operational impacts from the staff reductions, and the department has shown signs of growth so far in 2026.
The reduction in force (RIF) from 7470 to 6624 employees aligns with the February 2025 executive order aimed at “restoring accountability, eliminating waste, bloat, and insularity” and reforming the federal workforce to maximize efficiency and productivity. The directives also instructed agencies to develop plans for large-scale RIFs and reorganizations. GAO auditors reviewed workforce data from January 25, 2025, through July 31, 2025, interviewed BIA officials, and reviewed comments from Native American tribal representatives to compose the report.
All BIA regions experienced a reduction in staff: 10% in the Western and Rocky Mountain regions, 29% in the Pacific region, and > 20% each in the Alaska, Midwest, and Southern Plains regions. Positions within law enforcement and social work agencies were excluded from the May program due to job functions and responsibilities.
A small portion of separations were resignations and retirements outside of the DRP; among staff separating from BIA after January 25, 2025, 24% left for other reasons. Although the downsizing was not unexpected and some staff were already planning to retire, repercussions were felt immediately.
“Some remaining staff took on additional responsibilities to mitigate the effects of reductions,” the GAO report said. “Some Indian Affairs staff said the reductions would exacerbate preexisting staffing limitations in their offices and make it more difficult to carry out their responsibilities serving Tribes.”
Tribal leaders voiced concerns, claiming BIA already was understaffed to effectively carry out its responsibilities and that service delivery was impaired. Some BIA staff reported that departures forced them to take on duties beyond their main area of responsibility, compromising their primary work. Regional BIA staff also described confusion about which employees were leaving, which limited their ability to effectively plan for impending departures, and reported receiving limited guidance from superiors about how to cover the responsibilities of those departing, particularly those in leadership positions. As of June 2, 2025, 6 of 12 BIA regional directors were serving in an acting capacity, and 12 of the 24 deputy regional director positions were either vacant or acting.
BIA officials have said there are no plans to reorganize or enact additional RIFs, but existing functions “might need to be restructured or realigned to achieve administration priorities.”
As of 2024, the Indian Health Service (IHS) had a nearly 30% vacancy rate. In 2025, it awarded > 1800 scholarships and loan repayments under programs aimed at educating and training health professionals for careers at IHS facilities. And in January 2026, IHS announced it was launching the “largest hiring effort in agency history.”
“[O]ur top priority is filling vacancies for positions essential to keeping our health care facilities operating smoothly, especially in some of the more rural and remote locations,” said IHS Chief of Staff Clayton Fulton.