CRC Screening: Right Patient, Right Test, Right Time

It has been three and a half years since the US Preventive Services Task Force (USPSTF) lowered the age to start colorectal cancer (CRC) screening from 50 to 45. As I mentioned in a previous commentary, two major medical groups — the American Academy of Family Physicians and the American College of Physicians — felt that the evidence was insufficient to support this change. 

Did doctors adjust their screening practices? A recent study suggests that they have. Comparing CRC screening rates in more than 10 million adults aged 45-49 during the 20 months preceding and 20 months following the USPSTF recommendation, researchers found significant increases during the latter time period, with the greatest increases among persons of high socioeconomic status or living in metropolitan areas.

Another study addressed concerns that younger adults may be less likely to follow up on positive screening results or more likely to have false positives on a fecal immunochemical test (FIT). Patients aged 45-49 years were slightly less likely to have a positive FIT result than 50-year-olds, but they had similar rates of colonoscopy completion and similar percentages of abnormal findings on colonoscopy.

Although the sensitivity and specificity of FIT vary considerably across test brands, its overall effectiveness at reducing colorectal cancer deaths is well established. In 2024, the Food and Drug Administration approved three new screening options: a blood-based screening test (Shield), a next-generation multitarget stool DNA test (Cologuard Plus), and a multitarget stool RNA test (ColoSense) with performance characteristics similar to those of Cologuard Plus. The latter two tests will become available early next year.

This profusion of noninvasive options for CRC screening will challenge those tasked with developing the next iteration of the USPSTF recommendations. Not only must future guidelines establish what evidence threshold is sufficient to recommend a new screening strategy, but they will also need to consider the population-level consequences of how the tests are used relative to one another. For example, a cost-effectiveness analysis found that more CRC deaths would occur if people who would otherwise have accepted colonoscopy or fecal tests chose to be screened with Shield instead; however, this negative outcome could be offset if, for every three of these test substitutions, two other people chose Shield who would otherwise not have been screened at all.
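
To make that offset arithmetic concrete, here is a minimal back-of-envelope sketch. The per-person mortality-benefit values are invented placeholders, chosen only so that three substitutions break even against two newly screened people; they are not the parameters of the published cost-effectiveness analysis.

```python
# Hypothetical illustration of the Shield substitution trade-off described above.
# The deaths-averted-per-person values are made-up placeholders, not figures
# from the published cost-effectiveness analysis.
BENEFIT = {
    "established": 0.030,  # assumed deaths averted per person screened by colonoscopy/FIT
    "shield": 0.018,       # assumed smaller benefit per person screened by the blood test
    "unscreened": 0.0,
}

def net_deaths_averted(substitutions: int, newly_screened: int) -> float:
    """Net change in deaths averted when some people swap an established test
    for Shield while others move from no screening to Shield."""
    lost = substitutions * (BENEFIT["established"] - BENEFIT["shield"])
    gained = newly_screened * (BENEFIT["shield"] - BENEFIT["unscreened"])
    return gained - lost

# With these placeholders, 3 substitutions are offset by 2 new screeners:
print(net_deaths_averted(3, 2))  # ~0.0 (up to float rounding)
print(net_deaths_averted(3, 0))  # negative: substitution alone costs lives
```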

In the meantime, it is important for primary care clinicians to be familiar with evidence-based intervals for CRC screening tests and test eligibility criteria. A troubling study of patients who completed a multitarget stool DNA test in a Midwestern health system in 2021 found that more than one in five had the test ordered inappropriately, based on USPSTF guidelines. Reasons for inappropriate testing included having had a colonoscopy within the past 10 years, a family history of CRC, symptoms suggestive of possible CRC, age younger than 45, and a prior diagnosis of colonic adenomas. 

Just as a medication works best when the patient takes it as prescribed, a CRC screening test is most likely to yield more benefit than harm when it’s provided to the right patient at the right time.

Dr. Lin is Associate Director, Family Medicine Residency Program, at Lancaster General Hospital in Pennsylvania. He reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

CRC Screening Uptake Rises in Adults Aged 45-49 Years

TOPLINE:

After the US Preventive Services Task Force (USPSTF) lowered the recommended age to begin colorectal cancer (CRC) screening for average-risk adults from 50 to 45 years in May 2021, screening rates among individuals aged 45-49 years increased threefold, but disparities by socioeconomic status and locality emerged.

METHODOLOGY:

  • Researchers compared absolute and relative changes in screening uptake among average-risk adults aged 45-49 years between a 20-month period before and a 20-month period after the USPSTF recommendation was issued (May 1, 2018, to December 31, 2019, and May 1, 2021, to December 31, 2022). Data were evaluated bimonthly.
  • They analyzed claims data from more than 10.2 million people with private Blue Cross Blue Shield (BCBS) coverage, with about three million eligible for screening during each bimonthly period, both pre- and post-recommendation.
  • They used interrupted time-series analysis and autoregressive integrated moving average models to gauge changes in screening rates.
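
For readers unfamiliar with the method, here is a minimal sketch of a segmented interrupted time-series model with ARIMA errors, in the spirit of the study's design. The bimonthly rates are invented, and the AR(1) specification is an assumption for illustration, not the study's actual model.

```python
# Sketch of an interrupted time-series analysis with ARIMA errors.
# All numbers are invented; this illustrates the method, not the study's results.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# 10 bimonthly uptake rates (%) before and 10 after the May 2021 recommendation
rate = pd.Series([0.48, 0.51, 0.49, 0.50, 0.52, 0.50, 0.49, 0.51, 0.50, 0.50,
                  0.90, 1.10, 1.25, 1.38, 1.47, 1.55, 1.63, 1.71, 1.79, 1.86])

t = np.arange(len(rate), dtype=float)
exog = pd.DataFrame({
    "trend": t,                          # underlying secular trend
    "level": (t >= 10).astype(float),    # step change at the recommendation
    "slope": np.maximum(0.0, t - 9.0),   # slope change after the recommendation
})

# AR(1) errors absorb residual autocorrelation between bimonthly observations
fit = ARIMA(rate, exog=exog, order=(1, 0, 0)).fit()
print(fit.summary())  # "level" and "slope" estimate the jump and the trend change
```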

TAKEAWAY:

  • Mean CRC screening uptake in average-risk adults aged 45-49 years increased from 0.50% in the pre-recommendation period to 1.51% post-recommendation, reflecting a significant absolute change of 1.01 percentage points but no significant relative change (a short worked check follows this list).
  • Adults 45-49 years living in areas with the highest socioeconomic status (SES) had the largest absolute change in screening uptake compared with peers in the lowest SES areas (1.25 vs 0.75 percentage points). Relative changes were not significant.
  • The absolute change in screening uptake was higher among individuals in metropolitan areas than individuals in nonmetropolitan areas (1.06 vs 0.73 percentage points). Again, relative changes were not significant.
  • The screening uptake rate increased the fastest among those living in the highest SES and metropolitan areas (0.24 and 0.20 percentage points every 2 months, respectively).
  • By December 2022 (the end of the post-recommendation period), CRC screening uptake among adults aged 45-49 years was on par with that seen in adults aged 50-75 years (2.37% vs 2.4%). Nonetheless, only 11.5% of average-risk adults aged 45-49 years received CRC screening during the post-recommendation period.
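
As promised above, a trivial check reconciling the absolute and relative figures, using only the numbers quoted in this summary:

```python
pre, post = 0.50, 1.51  # mean bimonthly uptake (%) before and after the recommendation
print(post - pre)       # 1.01 percentage points: the absolute change
print(post / pre)       # 3.02: the "threefold" relative increase
```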

IN PRACTICE:

“The threefold increase in screening uptake among average-risk individuals aged 45-49 years reflects an accomplishment, yet evidence of widening disparities based on SDI [Social Deprivation Index] and locality indicate that population subgroups may not be benefiting equally from this change in CRC screening recommendation. Furthermore, given that only 11.5% of average-risk individuals aged 45-49 years during the post-recommendation period received CRC screening before the age of 50 years, targeted initiatives to improve screening in this age group are warranted to reach the national goal of screening 80% of the population in every community,” the researchers wrote.

SOURCE:

The study, with first author Sunny Siddique, MPH, of the Yale School of Public Health, New Haven, Connecticut, was published online in JAMA Network Open.

LIMITATIONS:

Data on race and ethnicity were incomplete, which may have affected the analysis of disparities. The study cohort may not be fully representative of the general US population because BCBS beneficiaries tend to be younger and more socioeconomically advantaged, with employer-based insurance. Specific information on the type of coverage provided by each beneficiary’s insurance plan was not available.

DISCLOSURES:

The study was funded by the National Cancer Institute. The authors declared no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

Diet Matters in Prostate Cancer, but It’s Complicated

Diet is increasingly seen as a modifiable risk factor in prostate cancer.

Recent studies have shown that ultralow-carbohydrate diets, weight loss diets, supplementation with omega-3 fatty acids, pro- and anti-inflammatory diets, fasting, and even tea drinking may affect prostate cancer risk or risk for progression.

In October, a cohort study involving about 900 men under active surveillance for early-stage prostate cancer found that those who reported eating a diet closely adhering to the US government’s recommendations, as indicated by the Healthy Eating Index (HEI), had a lower risk for progression at a median follow-up of 6.5 months.

These findings follow results from an observational study, published in May, that followed about 2000 men with locally advanced prostate tumors. Men consuming a primarily plant-based diet (one closely adhering to the plant-based diet index) were less likely to experience progression over a median of 6.5 years than those consuming diets low in plant-based foods.

“There is an increasing body of literature that says your diet matters,” said urologist Stephen J. Freedland, MD, of Cedars-Sinai Medical Center in Los Angeles, California, and director of its Center for Integrated Research in Cancer and Lifestyle. “At the same time, there are a lot of things that could explain these associations. People who can afford lots of plant-based foods tend to have higher socioeconomic status, for example.”

What’s needed, Freedland said, are more randomized trials to test the hypotheses emerging from the longitudinal cohort studies. “That’s where I’m going with my own research,” he said. “I’d like to look at a study like [one of these] and design a trial. Let’s say we get half of patients to eat according to the healthy eating index, while half eat whatever they want. Can dietary modification change which genes are turned on and off in a tumor, as a start?”

Urologist and Nutritionist Collaborate on Multiple Studies

Nutritionist Pao-Hwa Lin, PhD, of Duke University in Durham, North Carolina, has been working for several years with Freedland on trials of nutrition interventions. A longtime researcher of chronic disease and diet, she first collaborated with Freedland on a study, published in 2019, that looked at whether insulin could be driven down with diet and exercise in men treated with androgen deprivation therapy.

Not only are high levels of insulin a known contributor to prostate cancer growth, Lin said, but “insulin resistance is a very common side effect of hormone therapy. And we saw that the low carb diet was very helpful for that.” The finding led Freedland and Lin to design further trials investigating carbohydrate restriction in people with prostate cancer.

Lin said randomized trials tend to be smaller and shorter in duration than the observational cohort studies because “interventions like these can be hard to maintain, and recruitment can be hard to sustain. A very well controlled and intensive nutrition intervention is not going to be super long.” Short trial durations also mean that prostate cancer progression can be difficult to capture. Risk for progression has to be measured using surrogate markers, such as the doubling time for prostate-specific antigen (PSA).
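
As an aside on that surrogate marker: PSA doubling time is conventionally computed by assuming exponential growth between two measurements. A minimal sketch with invented values (this is not the trial's analysis code):

```python
import math

def psa_doubling_time(psa1: float, psa2: float, months_apart: float) -> float:
    """Doubling time in months, assuming exponential growth: Td = t * ln(2) / ln(PSA2/PSA1)."""
    return months_apart * math.log(2) / math.log(psa2 / psa1)

# Example: PSA rising from 2.0 to 3.0 ng/mL over 6 months
print(psa_doubling_time(2.0, 3.0, 6.0))  # ~10.3 months; a longer doubling time means slower growth
```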

In 2020, Freedland and Lin published results from a pilot study of 57 men who had been treated with surgery or radiation for localized prostate cancer but had a PSA recurrence and were randomized to an ultralow-carbohydrate diet or no restrictions for 6 months. The investigators saw that PSA doubling times, an intermediate measure of tumor growth rate, were slower among those consuming the low-carb diet.

Currently they are wrapping up a trial that randomizes men scheduled for radical prostatectomy to daily supplementation with walnuts, a natural source of polyphenols and omega-3 fatty acids. This time, the aim is to determine whether gene expression in tumors changes in response to supplementation.

The researchers are also recruiting for a study in men being treated for metastatic prostate cancer. This study randomizes patients to a fasting-mimicking diet, which is a type of intermittent fasting, or no dietary restrictions for 6 months.

Developed by biologist Valter Longo, PhD, of the University of Southern California, Los Angeles, the fasting-mimicking diet has been shown to boost treatment effects in women with hormone receptor–positive breast cancer. In 2023, Longo and his colleagues published results from a small pilot study of the same diet in men with prostate cancer, reporting some positive metabolic findings.

Longo, who is consulting on Lin and Freedland’s trial, “has proven that the diet is helpful in treatment outcomes for breast cancer,” Lin said. “So we connected and decided to test it and see if it’s helpful in prostate cancer as well.”

More Than One Approach Likely to Work

Though Lin and Freedland have focused most of their investigations on carbohydrate restriction, neither dismisses the potential for other dietary approaches to show benefit.

“There are two main schools of thought in terms of the relationship between diet and prostate cancer,” Lin said. “One is the insulin angle, and that’s what we hypothesized when we first tested the low-carb diet. The other is the inflammation angle.”

Studies have shown greater adherence to the HEI — a diet quality indicator that favors grains, fruits, dairy, vegetables, beans, and seafood — or the plant-based diet index to be associated with lower biomarkers of inflammation, she noted.

Insulin resistance, Lin explained, “is also highly related to inflammation.” (Several of the diets being investigated in prostate cancer were originally studied in diabetes.)

Moreover, weight loss caused by low-carb diets — or other healthy diets — can have a positive effect on insulin resistance independent of diet composition. “So it is a very complicated picture — and that doesn’t exclude other pathways that could also be contributing,” she said.

On the surface, a low-carb diet that is heavy in eggs, cheeses, and meats would seem to have little in common with the HEI or a plant-based diet. But Freedland noted that there are commonalities among the approaches being studied. “No one’s promoting eating a lot of simple sugars. No one’s saying eat a lot of processed foods. All of these diets emphasize whole, natural foods,” he said.

Lin hopes that she and Freedland will one day be able to test a diet that is both lower carb and anti-inflammatory in men with prostate cancer. “Why not combine the approaches, have all the good features together?” she asked.

But Freedland explained why most clinicians don’t make dietary recommendations to their newly diagnosed patients.

“A new prostate cancer patient already gets easily an hour discussion of treatment options, of pros and cons. Patients often become overwhelmed. And then to extend it further to talk about diet, they’ll end up even more overwhelmed.” Moreover, he said, current evidence offers doctors few take-home messages to deliver besides avoiding sugar and processed foods.

Multiple dietary approaches are likely to prove helpful in prostate cancer, and when the evidence for them is better established, patients and their doctors will want to consider lifestyle factors in choosing one. The best diet will depend on a patient’s philosophy, tastes, and willingness to follow it, he concluded.

“At the end of the day I’m not rooting for one diet or another. I just want to get the answers.”

Lin disclosed no financial conflicts of interest. Freedland disclosed serving as a speaker for AstraZeneca, Astellas, and Pfizer and as a consultant for Astellas, AstraZeneca, Bayer, Eli Lilly, Janssen, Merck, Novartis, Pfizer, Sanofi-Aventis, and Sumitomo.

A version of this article first appeared on Medscape.com.

How Much Water Should We Drink in a Day?

This transcript has been edited for clarity. 

It’s just about the easiest, safest medical advice you can give: “Drink more water.” You have a headache? Drink more water. Tired? Drink more water. Cold coming on? Drink more water. Tom Brady famously attributed his QB longevity to water drinking, among some other less ordinary practices.

I’m a nephrologist — a kidney doctor. I think about water all the time. I can tell you how your brain senses how much water is in your body and exactly how it communicates that information to your kidneys to control how dilute your urine is. I can explain the miraculous ability of the kidney to concentrate urine across a range from 50 mOsm/L to 1200 mOsm/L and the physiology that makes it all work.
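
That concentrating range is what bounds daily urine volume, because obligate urine output is simply the daily solute load divided by urine concentration. A minimal sketch, assuming a typical solute load of roughly 600 mOsm/day (an illustrative round number, not a measurement):

```python
# Urine volume bounds implied by the kidney's concentrating range.
# The 600 mOsm/day solute load is an illustrative assumption.
SOLUTE_LOAD = 600          # mOsm excreted per day
U_MAX, U_MIN = 1200, 50    # mOsm/L: maximally concentrated vs maximally dilute urine

print(SOLUTE_LOAD / U_MAX)  # ~0.5 L/day: minimum obligate urine output
print(SOLUTE_LOAD / U_MIN)  # ~12 L/day: roughly the most free water these kidneys can clear
```

A larger solute intake raises that ceiling (1000 mOsm/day at 50 mOsm/L allows about 20 L), which is consistent with the rough 20-liter danger threshold mentioned below.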

But I can’t really tell you how much water you’re supposed to drink. And believe me, I get asked all the time.

I’m sure of a couple of things when it comes to water: You need to drink some. Though some animals, such as kangaroo rats, can get virtually all the water they need from the food they eat, we are not such animals. Without water, we die. I’m also sure that you can die from drinking too much water. Drinking excessive amounts of water dilutes the sodium in your blood, which messes with the electrical system in your brain and heart. I actually had a patient who went on a “water cleanse” and gave herself a seizure. 

But, to be fair, assuming your kidneys are working reasonably well and you’re otherwise healthy, you’d need to drink around 20 liters of water a day to get into mortal trouble. The dose makes the poison, as they say.

So, somewhere between zero and 20 liters of water is the amount you should be drinking in a day. That much I’m sure of.

But the evidence on where in that range you should target is actually pretty skimpy. You wouldn’t think so if you look at the online wellness influencers, with their Stanleys and their strict water intake regimens. You’d think the evidence for the benefits of drinking extra water is overwhelming.

The venerated National Academy of Medicine suggests that men drink thirteen 8 oz cups a day (that’s about 3 liters) and women drink nine 8 oz cups a day (a bit more than 2 liters). From what I can tell, this recommendation — like the old “8 cups of water per day” recommendation — is pulled out of thin air.
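
The cup-to-liter arithmetic checks out (one US cup is 8 fl oz, about 237 mL):

```python
CUP_L = 8 * 0.02957  # liters per 8 oz cup (~0.237 L)
print(13 * CUP_L)    # ~3.08 L/day: the suggestion for men
print(9 * CUP_L)     # ~2.13 L/day: the suggestion for women
```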

I’m not arguing that we shouldn’t drink water. Of course, water is important. I’m just wondering what data there are to really prove that drinking more water is better. 

Fortunately, a team from UCSF has finally done the legwork for us. They break down the actual evidence in this paper, appearing in JAMA Network Open. 

The team scoured the medical literature for randomized controlled trials of water intake. This is critical; we don’t want anecdotes about how clear someone’s skin became after they increased their water intake. We want icy cold, clear data. Randomized trials take a group of people and, at random, assign some to the intervention — in this case, drinking more water — and others to keep doing what they would normally do.

The team reviewed nearly 1500 papers, but only 18 (!) met the rigorous criteria to be included in the analysis.

This is the first important finding: not many high-quality studies have investigated how much water we should drink. Of course, water isn’t a prescription product, so funding is likely hard to come by. Can we do a trial of Dasani?

In any case, these 18 trials all looked at different outcomes of interest. Four studies looked at the impact of drinking more water on weight loss, two on fasting blood glucose, two on headache, two on urinary tract infection, two on kidney stones, and six studies on various other outcomes. None of the studies looked at energy, skin tone, or overall wellness, though one did measure a quality-of-life score.

And if I could sum up all these studies in a word, that word would be “meh.”

One of four weight loss studies showed that increasing water intake had no effect on weight loss. Two studies showed an effect, but drinking extra water was combined with a low-calorie diet, so that feels a bit like cheating to me. One study randomized participants to drink half a liter of water before meals, and that group did lose more weight than the control group — about a kilogram more over 12 weeks. That’s not exactly Ozempic.

For fasting blood glucose, although one trial suggested that higher premeal water intake lowered glucose levels, the other study (which looked just at increasing water overall) didn’t.

For headache — and, cards on the table here, I’m a big believer in water for headaches — one study showed nothing. The other showed that increasing water intake by 1.5 liters per day improved migraine-related quality of life but didn’t change the number of headache days per month.

For urinary tract infections, one positive trial and one negative one.

The best evidence comes from the kidney stone trials. Increasing water intake to achieve more than two liters of urine a day was associated with a significant reduction in kidney stone recurrence. I consider this a positive finding, more or less. You would be hard-pressed to find a kidney doctor who doesn’t think that people with a history of kidney stones should drink more water.

What about that quality-of-life study? They randomized participants to either drink 1.5 liters of extra water per day (intervention group) or not (control group). Six months later, the scores on the quality-of-life survey were no different between those two groups.

Thirsty yet?

So, what’s going on here? There are a few possibilities.

First, I need to point out that clinical trials are really hard. All the studies in this review were relatively small, with most enrolling fewer than 100 people. The effect of extra water would need to be pretty potent to detect it with those small samples.

I can’t help but point out that our bodies are actually exquisitely tuned to manage how much water we carry. As we lose water throughout the day from sweat and exhalation, our blood becomes a tiny bit more concentrated — the sodium level goes up. Our brains detect that and create a sensation we call thirst. Thirst is one of the most powerful drives we have. Animals, including humans, when thirsty, will choose water over food, over drugs, and over sex. It is incredibly hard to resist, and assuming that we have ready access to water, there is no need to resist it. We drink when we are thirsty. And that may be enough.

Of course, pushing beyond thirst is possible. We are sapient beings who can drink more than we want to. But what we can’t do, assuming our kidneys work, is hold onto that water. It passes right through us. In the case of preventing kidney stones, this is a good thing. Putting more water into your body leads to more water coming out — more dilute urine — which means it’s harder for stones to form. 

But for all that other stuff? The wellness, the skin tone, and so on? It just doesn’t make much sense. If you drink an extra liter of water, you pee an extra liter of water. Net net? Zero.

Some folks will argue that the extra pee gets rid of extra toxins or something like that, but — sorry, kidney doctor Perry here again — that’s not how pee works. The clearance of toxins from the blood happens way upstream of where your urine is diluted or concentrated.
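
For the quantitatively inclined, the standard clearance formula makes the point: C = (U × V) / P, where U is the urine concentration of a substance, V the urine flow rate, and P its plasma concentration. Extra water raises V but dilutes U proportionally, so the amount of solute removed per minute is unchanged. A minimal sketch with invented creatinine numbers:

```python
def clearance(u: float, v: float, p: float) -> float:
    """Renal clearance in mL/min: C = (U * V) / P, with U and P in matching units."""
    return u * v / p

# Same plasma level, twice the urine flow at half the concentration:
print(clearance(u=100.0, v=1.0, p=1.0))  # 100.0 mL/min
print(clearance(u=50.0, v=2.0, p=1.0))   # 100.0 mL/min: more water, same toxin removal
```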

If you drink more, the same toxins come out, just with more water around them. In fact, one of the largest studies in this JAMA Network Open review assessed whether increasing water consumption in people with chronic kidney disease would improve kidney function. It didn’t.

I am left, then, with only a bit more confidence than when I began. I remain certain that you should drink more than zero liters and less than 20 liters every day (assuming you’re not losing a lot of water in some other way, like working in the heat). Beyond that, it seems reasonable to trust the millions of years of evolution that have made water homeostasis central to life itself. Give yourself access to water. Drink when you’re thirsty. Drink a bit more if you’d like. But no need to push it. Your kidneys won’t let you anyway.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Connecticut. He disclosed no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

New ‘Touchless’ Blood Pressure Screening Tech: How It Works

Article Type
Changed
Wed, 11/27/2024 - 01:38

When a patient signs on to a telehealth portal, there’s little more a provider can do than ask questions. But a new artificial intelligence (AI) technology could allow providers to get feedback about the patient’s blood pressure and diabetes risk just from a video call or a smartphone app.

Researchers at the University of Tokyo in Japan are using AI to determine whether people might have high blood pressure or diabetes based on video data collected with a special sensor. 

The technology relies on photoplethysmography (PPG), which measures changes in blood volume by detecting the amount of light absorbed by blood just below the skin. 

This technology is already used for things like finger pulse oximetry to determine oxygen saturation and heart rate. Wearable devices like Apple Watches and Fitbits also use PPG technologies to detect heart rate and atrial fibrillation.

“If we could detect and accurately measure your blood pressure, heart rate, and oxygen saturation non-invasively, that would be fantastic,” said Eugene Yang, MD, professor of medicine in the division of cardiology at the University of Washington School of Medicine in Seattle, who was not involved in the study.


How Does PPG Work — and Is This New Tech Accurate?

Using PPG, “you’re detecting these small, little blood vessels that sit underneath the surface of your skin,” explained Yang.

“Since both hypertension and diabetes are diseases that damage blood vessels, we thought these diseases might affect blood flow and pulse wave transit times,” said Ryoko Uchida, a project researcher in the cardiology department at the University of Tokyo and one of the leaders of the study.

PPG devices primarily use green light to detect blood flow, as hemoglobin, the oxygen-carrying molecule in blood, absorbs green light most effectively, Yang said. “So, if you extract and remove all the other channels of light and only focus on the green channel, then that’s when you’ll be able to potentially see blood flow and pulsatile blood flow activity,” he noted.
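
For the technically curious, here is a minimal sketch of that green-channel idea: average the green channel over a patch of skin in each frame, then read heart rate off the peaks of the resulting signal. This illustrates the general technique only; it is not the Tokyo group's algorithm, and the function name and array shapes are my own assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_frames(frames, fps):
    """frames: ndarray (n_frames, height, width, 3), RGB video of skin."""
    green = frames[..., 1].mean(axis=(1, 2))   # mean green intensity per frame
    green = green - green.mean()               # crude detrend
    # Require peaks at least 0.4 s apart (i.e., heart rate below 150 bpm)
    peaks, _ = find_peaks(green, distance=int(0.4 * fps))
    return 60.0 * len(peaks) * fps / len(green)  # beats per minute

# Synthetic check: 10 s of 30-fps "video" with a 1.2 Hz (72 bpm) flicker
t = np.arange(300) / 30.0
fake = np.full((300, 4, 4, 3), 100.0)
fake[..., 1] += np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(heart_rate_from_frames(fake, fps=30))   # ~72
```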

The University of Tokyo researchers used remote or contactless PPG, which requires a short video recording of someone’s face and palms, as the person holds as still as possible. A special sensor collects the video and detects only certain wavelengths of light. Then the researchers developed an AI algorithm to extract data from participants’ skin, such as changes in pulse transit time — the time it takes for the pulse to travel from the palm to the face.
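
The pulse transit time feature could, in principle, be estimated as the lag that best aligns the palm waveform with the face waveform. Again, this is a hypothetical sketch under my own assumptions, not the study's published method:

```python
import numpy as np

def pulse_transit_time(face_ppg, palm_ppg, fps):
    """Lag, in seconds, at which the palm waveform best matches the face's."""
    face = (face_ppg - face_ppg.mean()) / face_ppg.std()  # normalize
    palm = (palm_ppg - palm_ppg.mean()) / palm_ppg.std()
    xcorr = np.correlate(face, palm, mode="full")
    lag = int(np.argmax(xcorr)) - (len(palm) - 1)  # samples; > 0: face lags palm
    return lag / fps
```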

To correlate the video algorithm to blood pressure and diabetes risk, the researchers measured participants’ blood pressure with a continuous sphygmomanometer (an automatic blood pressure cuff) at the same time as they collected the video. They also did a blood A1c test to detect diabetes.

So far, they’ve tested their video algorithm on 215 people. The algorithm applied to a 30-second video was 86% accurate in detecting whether blood pressure was above normal, and a 5-second video was 81% accurate in detecting higher blood pressure.

Compared with using hemoglobin A1c blood test results to screen for diabetes, the video algorithm was 75% accurate in identifying people who had subtle blood changes that correlated to diabetes.

“Most of this focus has been on wearable devices, patches, rings, wrist devices,” Yang said. “The facial video stuff is great because you can imagine that there are other ways of applying it.”

Yang, who is also doing research on facial video processing, pointed out that it could be helpful not only in telehealth visits, but also for patients in the hospital with highly contagious diseases who need to be in isolation, or just for people using their smartphones.

“People are tied to their smartphones, so you could imagine that that would be great as a way for people to have awareness about their blood pressure or their diabetes status,” Yang noted.


More Work to Do

The study has a few caveats. The special sensor used in this study isn’t yet integrated into smartphone cameras or other common video recording devices. But Uchida is hopeful that it could someday be mass-produced inexpensively and added to them.

Also, the study was done in a Japanese population, and changes in blood flow may be easier to capture in lighter skin, Uchida noted. Pulse oximeters, which use the same technology, tend to overestimate blood oxygen in people with darker skin tones.

“It is necessary to test whether the same results are obtained in a variety of subjects other than Japanese and Asians,” Uchida said, in addition to validating the tool with more participants.

The study has also not yet undergone peer review.

And Yang pointed out that this new AI technology is more of a screening tool to predict who is at high risk for high blood pressure or diabetes than a precise measurement of either condition.

There are already some devices that claim to measure blood pressure using PPG technology, like blood pressure monitoring watches. But Yang warns that these kinds of devices aren’t validated, meaning we don’t really know how well they work.

One difficulty in getting any kind of PPG blood pressure monitoring device to market is that the organizations that set medical device standards (like the International Organization for Standardization) don’t yet have a validation standard for this technology, Yang said, so there’s really no way to consistently verify the technology’s accuracy.

“I am optimistic that we are capable of figuring out how to validate these things. I just think we have so many things we have to iron out before that happens,” Yang explained, noting that it will be at least 3 years before a remote blood pressure monitoring system is widely available.

A version of this article first appeared on Medscape.com.

Europe’s Lifeline: Science Weighs in on Suicide Prevention

Article Type
Changed
Wed, 11/27/2024 - 02:22

Suicide and self-harm continue to be serious concerns in Europe, despite decreasing rates over the past two decades. In 2021 alone, 47,346 people died by suicide in the European Union, close to 1% of all deaths reported that year. Measures have been taken at population, subpopulation, and individual levels to prevent suicide and suicide attempts. But can more be done? Yes, according to experts.

Researchers are investigating factors that contribute to suicide at the individual level, as well as environmental and societal pressures that may increase risk. New predictive tools show promise in identifying individuals at high risk, and ongoing programs offer hope for early and ongoing interventions. Successful preventive strategies are multimodal, emphasizing the need for trained primary care and mental health professionals to work together to identify and support individuals at risk at every age and in all settings.


‘Radical Change’ Needed

The medical community’s approach to suicide prevention is all wrong, according to Igor Galynker, MD, PhD, clinical professor of psychiatry and director of the Mount Sinai Suicide Prevention Research Lab in New York City. 

Galynker is collaborating with colleagues in various parts of the world, including Europe, to validate the use of suicide crisis syndrome (SCS) as a diagnosis that can aid the evaluation and treatment of imminent suicide risk.

SCS is a negative cognitive-affective state associated with imminent suicidal behavior in those who are already at high risk for suicide. Galynker and his colleagues want to see SCS recognized and accepted as a suicide-specific diagnosis in the Diagnostic and Statistical Manual of Mental Disorders and the World Health Organization’s International Classification of Diseases. 

Currently, he explained to this news organization, clinicians depend on a person at risk for suicide telling them that this is what they are feeling. This is “absurd,” he said, because people in this situation are in acute pain and distress and cannot answer accurately.

“It is the most lethal psychiatric condition, because people die from it ... yet we rely on people at the worst moment of their lives to tell us accurately when and how they are going to kill themselves. We don’t ask people with serious mental illness to diagnose their own mental illness and rely on that diagnosis.”

Data show that most people who attempt or die by suicide deny suicidal thoughts when assessed by healthcare providers using current questionnaires and scales. Thus, there needs to be “a radical change” in how patients at acute risk are assessed and treated to help “prevent suicides and avoid lost opportunities to intervene,” he said.

Galynker explained that SCS is the final and most acute stage of the “narrative crisis model” of suicide, which reflects the progression of suicidal risk from chronic risk factors to imminent suicidal risk. “The narrative crisis model has four distinct and successive stages, with specific guidance and applicable interventions that enable patients to receive a stage-specific treatment.”

“Suicide crisis syndrome is a very treatable syndrome that rapidly resolves” with appropriate interventions, he said. “Once it is treated, the patient can engage with psychotherapy and other treatments.”

Galynker said that he and his colleagues have had encouraging results so far from studies of the subjective and objective assessments clinicians make with the suicidal ideation risk tools they are developing. Further studies are ongoing.


Improving Prediction

There is definitely room for improvement in current approaches to suicide prevention, said Raffaella Calati, PhD, assistant professor of clinical psychology at the University of Milano-Bicocca, Italy, who has had research collaborations with Galynker.

Calati advocates for a more integrated approach across disciplines, institutions, and the community to provide an effective support network for those at risk. 

Accurately predicting suicide risk is challenging, she told this news organization. She and colleagues are working to develop more precise predictive tools for identifying individuals at risk, often by leveraging artificial intelligence and data analytics. They have designed and implemented app-based interventions for psychiatric patients at risk for suicide and university students with psychological distress. The interventions are personalized and based on multiple approaches, such as cognitive-behavioral therapy (CBT) and third-wave CBT. 

The results of current studies are preliminary, she acknowledged, “but even if apps are extremely complex, our projects received high interest from participants and the scientific community,” she said. The aim now is to integrate these tools into healthcare systems so that monitoring high-risk patients becomes part of regular care. 

Another area of focus is the identification of specific subtypes of individuals at risk for suicide, particularly by examining factors such as pain, dissociation, and interoception — the ability to sense and interpret internal signals from the body. 

“By understanding how these experiences intersect and contribute to suicide risk, I aim to identify distinct profiles within at-risk populations, which could ultimately enable more tailored and effective prevention efforts,” she said.

Her work also involves meta-research to build large, comprehensive datasets that increase statistical power for exploring suicide risk factors, such as physical health conditions and symptoms associated with borderline personality disorder. By creating these datasets, she aims to “improve understanding of how various factors contribute to suicide risk, ultimately supporting more effective prevention strategies.”


Country-Level Efforts

Preventive work is underway in other countries as well. In Nordic countries such as Denmark, Finland, and Sweden, large-scale national registries that track people’s medical histories, prescriptions, and demographic information are being used to develop predictive algorithms that identify those at high risk for suicide. The predictions are based on known risk factors like previous mental health diagnoses, substance abuse, and social determinants of health.
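
To make the general shape of such registry-based algorithms concrete, the sketch below fits a logistic regression over binary risk factors. Every feature, coefficient, and data point here is synthetic and invented purely for illustration; none of it reflects any real registry, population, or published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Invented binary features: [prior MH diagnosis, substance use disorder, unemployed]
X = rng.integers(0, 2, size=(n, 3))
true_logit = -4.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))  # simulated outcome

model = LogisticRegression().fit(X, y)
print("Recovered odds ratios:", np.exp(model.coef_[0]).round(2))  # ~[3.3, 2.2, 1.6]
# In practice, such a model outputs a risk score used to prioritize human
# follow-up and support, never an automated decision.
```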

A recent Norwegian study found that a novel assessment tool used at admission to an acute inpatient unit was a powerful predictor of suicide within 3 years post-discharge.

Researchers in the Netherlands have also recently co-designed a digital integrated suicide prevention program, which has led to a significant reduction in suicide mortality. 

SUPREMOCOL (suicide prevention by monitoring and collaborative care) was implemented in Noord-Brabant, a province in the Netherlands that historically had high suicide rates. It combines technology and personal care, allowing healthcare providers to track a person’s mental health, including by phone calls, text messages, and mobile apps that help people express their feelings and report any changes in their mental state. By staying connected, the program aims to identify warning signs early and provide timely interventions.

The results from the 5-year project showed that suicide rates dropped from 14.4 per 100,000 to 11.8 per 100,000 and remained low, reaching 11.3 per 100,000 by 2021, an overall reduction of 21.5%.

Finland used to have one of the highest suicide rates in the world. Now it is implementing its suicide prevention program for 2020-2030, with 36 proposed measures to prevent suicide mortality. 

The program includes measures such as increasing public awareness, early intervention, supporting at-risk groups, developing new treatment options, and enhancing research efforts. Earlier successful interventions included limiting access to firearms and poison, and increasing use of antidepressants and other targeted interventions.

“A key is to ensure that the individuals at risk of suicide have access to adequate, timely, and evidence-based care,” said Timo Partonen, MD, research professor at the Finnish Institute for Health and Welfare and associate professor of psychiatry at the University of Helsinki.

“Emergency and frontline professionals, as well as general practitioners and occupational health physicians, have a key role in identifying people at risk of suicide,” he noted. “High-quality competencies will be developed for healthcare professionals, including access to evidence-based suicide prevention models for addressing and assessing suicide risk.” 


Global Strategies

Policymakers across Europe are increasingly recognizing the importance of enhanced public health approaches to suicide prevention. 

The recently adopted EU Action Plan on Mental Health emphasizes the need for comprehensive suicide prevention strategies across Europe, including the promotion of mental health literacy and the provision of accessible mental health services.

The plan was informed by initiatives such as the European Alliance Against Depression (EAAD)-Best project, which ran from 2021 until March 2024. The collaborative project brought together researchers, healthcare providers, and community organizations to improve care for patients with depression and to prevent suicidal behavior in Europe. 

The multimodal approach included community engagement and training for healthcare professionals, as well as promoting the international uptake of the iFightDepression tool, an internet-based self-management approach for patients with depression. It has shown promise in reducing suicide rates in participating regions, including Europe, Australia, South America, and Africa.

“What we now know is that multiple interventions produce a synergistic effect with a tendency to reduce suicidal behavior,” said EAAD founding member Ricardo Gusmão, MD, PhD, professor of public mental health at the University of Porto, Portugal. Current approaches to suicide prevention globally vary widely, with “many, fragmentary, atomized interventions, and we know that none of them, in isolation, produces spectacular results.”

Gusmão explained that promising national suicide prevention strategies are based on multicomponent community interventions. On the clinical side, they encompass training for primary care and specialized mental health professionals, a guaranteed chain of care, and functioning pathways for access. They also involve educational programs in schools, universities, prisons, work settings, and geriatric care centers, along with well-developed standards for media communication and health marketing campaigns on well-being and mental health literacy.

Relevant and cohesive themes for successful strategies include the promotion of positive mental health, the identification and treatment of depression and common mental disorders, and the management of the stigma around suicidal crises.

“We are now focusing on workplace settings and vulnerable groups such as youth, the elderly, unemployed, migrants and, of course, people affected by mental disorders,” he said. “Suicide prevention is like a web that must be weaved by long-lasting efforts and intersectoral collaboration.”

“Even one suicide is one too many,” Brendan Kelly, MD, PhD, professor of psychiatry, Trinity College Dublin, and author of The Modern Psychiatrist’s Guide to Contemporary Practice, told this news organization. “Nobody is born wanting to die by suicide. And every suicide is an individual tragedy, not a statistic. We need to work ever more intensively to reduce rates of suicide. All contributions to research and fresh thinking are welcome.”

Galynker, Calati, Partonen, and Kelly have disclosed no relevant financial relationships. Gusmão has been involved in organizing Janssen-funded trainings for registrars on suicidal crisis management.


A version of this article first appeared on Medscape.com.


Data show that most people who attempt or die by suicide deny suicidal thoughts when assessed by healthcare providers using current questionnaires and scales. Thus, there needs to be “a radical change” in how patients at acute risk are assessed and treated to help “prevent suicides and avoid lost opportunities to intervene,” he said.

Galynker explained that SCS is the final and most acute stage of the “narrative crisis model” of suicide, which reflects the progression of suicidal risk from chronic risk factors to imminent suicidal risk. “The narrative crisis model has four distinct and successive stages, with specific guidance and applicable interventions that enable patients to receive a stage-specific treatment.”

“Suicide crisis syndrome is a very treatable syndrome that rapidly resolves” with appropriate interventions, he said. “Once it is treated, the patient can engage with psychotherapy and other treatments.”

Galynker said he and his colleagues have had encouraging results with their studies so far on the subjective and objective views of clinicians using the risk assessment tools they are developing to assess suicidal ideation. Further studies are ongoing. 

 

Improving Prediction

There is definitely room for improvement in current approaches to suicide prevention, said Raffaella Calati, PhD, assistant professor of clinical psychology at the University of Milano-Bicocca, Italy, who has had research collaborations with Galynker.

Calati advocates for a more integrated approach across disciplines, institutions, and the community to provide an effective support network for those at risk. 

Accurately predicting suicide risk is challenging, she told this news organization. She and colleagues are working to develop more precise predictive tools for identifying individuals at risk, often by leveraging artificial intelligence and data analytics. They have designed and implemented app-based interventions for psychiatric patients at risk for suicide and university students with psychological distress. The interventions are personalized and based on multiple approaches, such as cognitive-behavioral therapy (CBT) and third-wave CBT. 

The results of current studies are preliminary, she acknowledged, “but even if apps are extremely complex, our projects received high interest from participants and the scientific community.” The aim now is to integrate these tools into healthcare systems so that monitoring high-risk patients becomes part of regular care. 

Another area of focus is the identification of specific subtypes of individuals at risk for suicide, particularly by examining factors such as pain, dissociation, and interoception — the ability to sense and interpret internal signals from the body. 

“By understanding how these experiences intersect and contribute to suicide risk, I aim to identify distinct profiles within at-risk populations, which could ultimately enable more tailored and effective prevention efforts,” she said.

Her work also involves meta-research to build large, comprehensive datasets that increase statistical power for exploring suicide risk factors, such as physical health conditions and symptoms associated with borderline personality disorder. By creating these datasets, she aims to “improve understanding of how various factors contribute to suicide risk, ultimately supporting more effective prevention strategies.”

 

Country-Level Efforts

Preventive work is underway in other countries as well. In Nordic countries such as Denmark, Finland, and Sweden, large-scale national registries that track people’s medical histories, prescriptions, and demographic information are being used to develop predictive algorithms that identify those at high risk for suicide. The predictions are based on known risk factors like previous mental health diagnoses, substance abuse, and social determinants of health.
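
Technical aside: the registry-based algorithms described here are not published as code, so the sketch below is only a minimal illustration of the general approach, using the scikit-learn library, simulated data, and hypothetical registry features (prior psychiatric diagnosis, substance use disorder, recent hospital discharge). Real systems draw on far richer longitudinal data and require rigorous validation, calibration, and ethical oversight.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Simulated registry extract: one row per person, binary risk factors.
# All feature names are hypothetical, not taken from any national registry.
rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "prior_psych_dx": rng.integers(0, 2, n),
    "substance_use_disorder": rng.integers(0, 2, n),
    "recent_discharge": rng.integers(0, 2, n),
})
# Simulated outcome whose probability rises with each risk factor.
logit = (-4 + 1.2 * X["prior_psych_dx"]
         + 0.8 * X["substance_use_disorder"]
         + 1.0 * X["recent_discharge"])
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a simple classifier and report discrimination on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```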

A recent Norwegian study found that a novel assessment tool used at admission to an acute inpatient unit was a powerful predictor of suicide within 3 years post-discharge.

Researchers in the Netherlands have also recently co-designed a digital integrated suicide prevention program, which has led to a significant reduction in suicide mortality. 

SUPREMOCOL (suicide prevention by monitoring and collaborative care) was implemented in Noord-Brabant, a province in the Netherlands that historically had high suicide rates. It combines technology and personal care, allowing healthcare providers to track a person’s mental health through phone calls, text messages, and mobile apps that help people express their feelings and report any changes in their mental state. By staying connected, the program aims to identify warning signs early and provide timely interventions.

The results from the 5-year project showed that suicide rates dropped from 14.4 to 11.8 per 100,000 and remained low thereafter, reaching 11.3 per 100,000 by 2021, a cumulative decline of 21.5% from baseline.
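
For readers checking the arithmetic, the 21.5% figure is the cumulative decline from the baseline rate to the 2021 rate; the drop to 11.8 per 100,000 alone would be about 18%. A quick check:

```python
# Suicide deaths per 100,000 in Noord-Brabant (figures quoted above).
baseline, end_of_project, by_2021 = 14.4, 11.8, 11.3
print(f"decline during project: {(baseline - end_of_project) / baseline:.1%}")  # ~18.1%
print(f"cumulative decline by 2021: {(baseline - by_2021) / baseline:.1%}")     # ~21.5%
```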

Finland used to have one of the highest suicide rates in the world. Now it is implementing its suicide prevention program for 2020-2030, with 36 proposed measures to prevent suicide mortality. 

The program includes measures such as increasing public awareness, early intervention, supporting at-risk groups, developing new treatment options, and enhancing research efforts. Earlier successful interventions included limiting access to firearms and poison, and increasing use of antidepressants and other targeted interventions.

“A key is to ensure that the individuals at risk of suicide have access to adequate, timely, and evidence-based care,” said Timo Partonen, MD, research professor at the Finnish Institute for Health and Welfare and associate professor of psychiatry at the University of Helsinki.

“Emergency and frontline professionals, as well as general practitioners and occupational health physicians, have a key role in identifying people at risk of suicide,” he noted. “High-quality competencies will be developed for healthcare professionals, including access to evidence-based suicide prevention models for addressing and assessing suicide risk.” 

 

Global Strategies

Policymakers across Europe are increasingly recognizing the importance of enhanced public health approaches to suicide prevention. 

The recently adopted EU Action Plan on Mental Health emphasizes the need for comprehensive suicide prevention strategies across Europe, including the promotion of mental health literacy and the provision of accessible mental health services.

The plan was informed by initiatives such as the European Alliance Against Depression (EAAD)-Best project, which ran from 2021 until March 2024. The collaborative project brought together researchers, healthcare providers, and community organizations to improve care for patients with depression and to prevent suicidal behavior in Europe. 

The multimodal approach included community engagement and training for healthcare professionals, as well as promoting the international uptake of the iFightDepression tool, an internet-based self-management approach for patients with depression. It has shown promise in reducing suicide rates in participating regions, including Europe, Australia, South America, and Africa.

“What we now know is that multiple interventions produce a synergic effect with a tendency to reduce suicidal behavior,” said EAAD founding member Ricardo Gusmão, MD, PhD, professor of public mental health at the University of Porto, Portugal. Current approaches to suicide prevention globally vary widely, with “many, fragmentary, atomized interventions, and we know that none of them, in isolation, produces spectacular results.” 

Gusmão explained that promising national suicide prevention strategies are based on multicomponent community interventions. On the clinical side, they encompass training for primary care and specialized mental health professionals, with a guaranteed chain of care and functioning pathways for access. They also involve educational programs in schools, universities, prisons, workplaces, and geriatric care centers, as well as well-developed standards for media communication and health marketing campaigns on well-being and mental health literacy.

Relevant and cohesive themes for successful strategies include the promotion of positive mental health, the identification and treatment of depression and common mental disorders, and the management of suicidal crises and the stigma that surrounds them. 

“We are now focusing on workplace settings and vulnerable groups such as youth, the elderly, unemployed, migrants and, of course, people affected by mental disorders,” he said. “Suicide prevention is like a web that must be weaved by long-lasting efforts and intersectoral collaboration.”

“Even one suicide is one too many,” Brendan Kelly, MD, PhD, professor of psychiatry, Trinity College Dublin, and author of The Modern Psychiatrist’s Guide to Contemporary Practice, told this news organization. “Nobody is born wanting to die by suicide. And every suicide is an individual tragedy, not a statistic. We need to work ever more intensively to reduce rates of suicide. All contributions to research and fresh thinking are welcome.”

Galynker, Calati, Partonen, and Kelly have disclosed no relevant financial relationships.  Gusmão has been involved in organizing Janssen-funded trainings for registrars on suicidal crisis management. 

 

A version of this article first appeared on Medscape.com.

Vaping Linked to Higher Risk of Blurred Vision & Eye Pain

TOPLINE: 

Adults who use electronic cigarettes (e-cigarettes/vapes) had more than double the risk for developing uveitis compared with nonusers, with the elevated risk persisting for up to 4 years after initial use. This increased risk was observed across all age groups and affected both men and women as well as various ethnic groups.

METHODOLOGY: 

  • Researchers used the TriNetX global database, which contains data from over 100 million patients across the United States, Europe, the Middle East, and Africa, to examine the risk for developing uveitis among e-cigarette users.
  • A total of 419,325 e-cigarette users older than 18 years (mean age, 51.41 years; 48.65% women) were included, identified through diagnosis codes for vaping and unspecified nicotine dependence.
  • The e-cigarette users were propensity score–matched to nonusers of e-cigarettes (a minimal sketch of this matching step appears after this list).
  • People were excluded if they had comorbid conditions that might have influenced the risk for uveitis.
  • The primary outcome measure was the first-time encounter diagnosis of uveitis using diagnosis codes for iridocyclitis, unspecified choroidal inflammation, posterior cyclitis, choroidal degeneration, retinal vasculitis, and pan-uveitis.
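
The study does not publish its matching code or covariate list, so the following is a minimal sketch of 1:1 propensity score matching in Python with scikit-learn, on simulated data and hypothetical covariates (age, sex, smoking history). It shows the two essential steps: modeling the probability of exposure, then pairing each exposed patient with the nearest-scoring unexposed patient.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Simulated cohort; 'exposed' flags e-cigarette use. Covariates are illustrative.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(50, 12, n),
    "female": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "exposed": rng.integers(0, 2, n),
})

# Step 1: estimate each person's propensity score, P(exposed | covariates).
X = df[["age", "female", "smoker"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["exposed"]).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbor matching on the score (with replacement, for brevity).
treated = df[df["exposed"] == 1]
control = df[df["exposed"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# Covariates should now be balanced; the matched cohort would then feed a
# time-to-event model (e.g., Cox regression) to estimate hazard ratios.
print(matched.groupby("exposed")[["age", "female", "smoker"]].mean())
```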

TAKEAWAY:

  • E-cigarette users had a significantly higher risk for developing uveitis than nonusers (hazard ratio [HR], 2.53; 95% CI, 2.33-2.76), including higher risks for iridocyclitis (HR, 2.59), unspecified chorioretinal inflammation (HR, 2.34), and retinal vasculitis (HR, 1.95).
  • This increased risk for uveitis was observed across all age groups, affecting all genders and patients from Asian, Black or African American, and White ethnic backgrounds.
  • The risk for uveitis increased as early as 7 days after e-cigarette use (HR, 6.35) and was still present 4 years after initial use (HR, 2.58).
  • A higher risk for uveitis was observed among individuals with a history of both e-cigarette and traditional cigarette use than among those who used traditional cigarettes only (HR, 1.39).

IN PRACTICE:

“This study has real-world implications as clinicians caring for patients with e-cigarette history should be aware of the potentially increased risk of new-onset uveitis,” the authors wrote.

SOURCE:

The study was led by Alan Y. Hsu, MD, from the Department of Ophthalmology at China Medical University Hospital in Taichung, Taiwan, and was published online on November 12, 2024, in Ophthalmology.

LIMITATIONS: 

The retrospective nature of the study limited the determination of direct causality between e-cigarette use and the risk for uveitis. The study lacked information on the duration and quantity of e-cigarette exposure, which may have impacted the findings. Moreover, researchers could not isolate the effect of secondhand exposure to vaping or traditional cigarettes. 

DISCLOSURES:

Study authors reported no relevant financial disclosures. 

 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

NCCN Expands Cancer Genetic Risk Assessment Guidelines

The National Comprehensive Cancer Network (NCCN) has expanded two cancer genetic risk assessment guidelines to meet the growing understanding of hereditary cancer risk and use of genetic tests in cancer prevention, screening, and treatment. 

Additional cancer types were included in the title and content for both guidelines. Prostate cancer was added to Genetic/Familial High-Risk Assessment: Breast, Ovarian, Pancreatic, and Prostate, and endometrial and gastric cancer were added to Genetic/Familial High-Risk Assessment: Colorectal, Endometrial, and Gastric.

For these cancers, the expanded guidelines include information on when genetic testing is recommended and what type of testing may be best. These guidelines also detail the hereditary conditions and genetic mutations associated with elevated cancer risk and include appropriate “next steps” for individuals who have them, which may involve increased screening or prevention surgeries.

“These updates include the spectrum of genes associated with genetic syndromes, the range of risk associated with each pathogenic variant, the improvements in screening and prevention strategies, the role of genetic data to inform cancer treatment, and the expansion of the role of genetic counseling as this field moves forward,” Mary B. Daly, MD, PhD, with Fox Chase Cancer Center, Philadelphia, Pennsylvania, said in a news release. Daly chaired the panel that updated the breast, ovarian, pancreatic, and prostate cancer guidelines.

Oncologists should, for instance, ask patients about their family and personal history of cancer and any known germline variants at the time of initial diagnosis. With prostate cancer, if patients meet criteria for germline testing, multigene panel testing should include BRCA1, BRCA2, ATM, PALB2, CHEK2, HOXB13, MLH1, MSH2, MSH6, and PMS2.

The updated guidelines on genetic risk assessment of colorectal, endometrial, and gastric cancer include new recommendations to consider for hereditary cancer screening in patients with newly diagnosed endometrial cancer, for evaluating and managing CDH1-associated gastric cancer risk, and for managing gastric cancer risk in patients with APC pathogenic variants. 

For CDH1-associated gastric cancer, for instance, the guidelines recommend carriers be referred to institutions with expertise in managing risks for cancer associated with CDH1, “given the still limited understanding and rarity of this syndrome.” 

“These expanded guidelines reflect the recommendations from leading experts on genetic testing based on the latest scientific research across the cancer spectrum, consolidated into two convenient resources,” said NCCN CEO Crystal S. Denlinger, MD, with Fox Chase Cancer Center, in a news release.

“This information is critical for guiding shared decision-making between health care providers and their patients, enhancing screening practices as appropriate, and potentially choosing options for prevention and targeted treatment choices. Genetic testing guidelines enable us to better care for people with cancer and their family members,” Denlinger added.

A version of this article first appeared on Medscape.com.

Stages I-III Screen-Detected CRC Boosts Disease-Free Survival Rates

TOPLINE:

Patients with stages I-III screen-detected colorectal cancer (CRC) have better disease-free survival rates than those with non-screen–detected CRC, an effect independent of patient, tumor, and treatment characteristics.

METHODOLOGY:

  • Patients with screen-detected CRC have better stage-specific overall survival rates than those with non-screen–detected CRC, but the impact of screening on recurrence rates is unknown.
  • A retrospective study analyzed patients with CRC (age, 55-75 years) from the Netherlands Cancer Registry whose cancer was diagnosed either through screening or outside the screening program.
  • Screen-detected CRCs were those identified at colonoscopy after a positive fecal immunochemical test (FIT), whereas non-screen–detected CRCs were detected in symptomatic patients.

TAKEAWAY:

  • Researchers included 3725 patients with CRC (39.6% women), of whom 1652 (44.3%) had screen-detected and 2073 (55.7%) had non-screen–detected CRC; cases were distributed approximately evenly across stages I-III (35.3%, 27.1%, and 37.6%, respectively).
  • Screen-detected CRC had significantly higher 3-year disease-free survival rates than non-screen–detected CRC (87.8% vs 77.2%; P < .001); a sketch of how such rates are estimated appears after this list.
  • The improvement in disease-free survival rates for screen-detected CRC was particularly notable in stage III cases, with rates of 77.9% vs 66.7% for non-screen–detected CRC (P < .001).
  • Screen-detected CRC was more often detected at an earlier stage than non-screen–detected CRC (stage I or II: 72.4% vs 54.4%; P < .001).
  • Across all stages, detection of CRC by screening was associated with a 33% lower risk for recurrence (P < .001) independent of patient age, gender, tumor location, stage, and treatment.
  • Recurrence was the strongest predictor of overall survival across the study population (hazard ratio, 15.90; P < .001).
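
Three-year disease-free survival rates such as these are typically read off Kaplan-Meier curves, and a 33% lower recurrence risk corresponds to a hazard ratio of roughly 0.67 (risk reduction = 1 − HR). The study’s data are not public, so the sketch below uses simulated follow-up times and the lifelines library purely to illustrate how a 36-month estimate per detection group is obtained.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

# Simulated follow-up: months to recurrence/death or censoring, per group.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "months": rng.exponential(120, 2 * n),
    "event": rng.integers(0, 2, 2 * n),  # 1 = recurrence or death observed
    "screen_detected": [True] * n + [False] * n,
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("screen_detected"):
    kmf.fit(sub["months"], event_observed=sub["event"])
    # Survival probability at 36 months ~ the 3-year disease-free survival rate.
    p36 = float(kmf.survival_function_at_times(36).iloc[0])
    print(f"screen_detected={group}: 3-year DFS ~ {p36:.1%}")
```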

IN PRACTICE:

“Apart from CRC stage, mode of detection could be used to assess an individual’s risk for recurrence and survival, which may contribute to a more personalized treatment,” the authors wrote.

SOURCE:

The study, led by Sanne J.K.F. Pluimers, Department of Gastroenterology and Hepatology, Erasmus University Medical Center/Erasmus MC Cancer Institute, Rotterdam, the Netherlands, was published online in Clinical Gastroenterology and Hepatology.

LIMITATIONS:

The follow-up time was relatively short, restricting the ability to evaluate the long-term effects of screening on CRC recurrence. This study focused on recurrence solely within the FIT-based screening program, and the results were not generalizable to other screening methods. Due to Dutch privacy law, data on CRC-specific causes of death were unavailable, which may have affected the specificity of survival outcomes.

DISCLOSURES:

There was no funding source for this study. The authors declared no conflicts of interest.

 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Is Pancreatic Cancer Really Rising in Young People?

TOPLINE:

The increase in incidence of pancreatic cancer among young Americans is largely caused by improved detection of early-stage endocrine cancer, not an increase in pancreatic adenocarcinoma. Given the stable mortality rates in this population, the increase in incidence likely reflects previously undetected cases instead of a true rise in new cases, researchers say.

METHODOLOGY:

  • Data from several registries have indicated that the incidence of pancreatic cancer among younger individuals, particularly women, is on the rise in the United States and worldwide.
  • In a new analysis, researchers wanted to see if the observed increase in pancreatic cancer incidence among young Americans represented a true rise in cancer occurrence or indicated greater diagnostic scrutiny. If pancreatic cancer incidence is really increasing, “incidence and mortality would be expected to increase concurrently, as would early- and late-stage diagnoses,” the researchers explained.
  • The researchers collected data on pancreatic cancer incidence, histology, and stage distribution for individuals aged 15-39 years from US Cancer Statistics, a database covering almost the entire US population from 2001 to 2020. Pancreatic cancer mortality data from the same timeframe came from the National Vital Statistics System.
  • The researchers looked at four histologic categories: adenocarcinoma, the dominant pancreatic cancer histology; the rarer endocrine and solid pseudopapillary subtypes; and an “other” category. Researchers also categorized stage-specific incidence as early stage (in situ or localized) or late stage (regional or distant).

TAKEAWAY:

  • The incidence of pancreatic cancer increased 2.1-fold in young women (from 3.3 to 6.9 per million) and 1.6-fold in young men (from 3.9 to 6.2 per million) between 2001 and 2019 (a quick check of these fold changes appears after this list). However, mortality rates remained stable for women (1.5 deaths per million; average annual percent change [AAPC], −0.5%; 95% CI, –1.4% to 0.5%) and men (2.5 deaths per million; AAPC, –0.1%; 95% CI, –0.8% to 0.6%) over this period.
  • Looking at cancer subtypes, the increase in incidence was largely caused by early-stage endocrine cancer and solid pseudopapillary neoplasms in women, not adenocarcinoma (which remained stable over the study period).
  • Looking at cancer stage, most of the increase in incidence came from detection of smaller tumors (< 2 cm) and early-stage cancer, which rose from 0.6 to 3.7 per million in women and from 0.4 to 2.2 per million in men. The authors also found no statistically significant change in the incidence of late-stage cancer in women or men.
  • Rates of surgical treatment for pancreatic cancer increased, more than tripling among women (from 1.5 to 4.7 per million) and more than doubling among men (from 1.1 to 2.3 per million).
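
The fold changes are simple ratios of the incidence figures quoted above; a one-line check:

```python
# Incidence per million, 2001 vs 2019 (figures from the bullets above).
print(f"women: {6.9 / 3.3:.2f}-fold, men: {6.2 / 3.9:.2f}-fold")  # ~2.09 and ~1.59
```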

IN PRACTICE:

“Pancreatic cancer now can be another cancer subject to overdiagnosis: The detection of disease not destined to cause symptoms or death,” the authors concluded. “Although the observed changes in incidence are small, overdiagnosis is especially concerning for pancreatic cancer, as pancreatic surgery has substantial risk for morbidity (in particular, pancreatic fistulas) and mortality.”

SOURCE:

The study, with first author Vishal R. Patel, MD, MPH, and corresponding author H. Gilbert Welch, MD, MPH, from Brigham and Women’s Hospital, Boston, was published online on November 19 in Annals of Internal Medicine.

LIMITATIONS:

The study was limited by the lack of data on the method of cancer detection, which may have affected the interpretation of the findings.

DISCLOSURES:

Disclosure forms are available with the article online.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

TOPLINE:

The increase in incidence of pancreatic cancer among young Americans is largely caused by improved detection of early-stage endocrine cancer, not an increase in pancreatic adenocarcinoma. Given the stable mortality rates in this population, the increase in incidence likely reflects previously undetected cases instead of a true rise in new cases, researchers say.

METHODOLOGY:

  • Data from several registries have indicated that the incidence of pancreatic cancer among younger individuals, particularly women, is on the rise in the United States and worldwide.
  • In a new analysis, researchers wanted to see if the observed increase in pancreatic cancer incidence among young Americans represented a true rise in cancer occurrence or indicated greater diagnostic scrutiny. If pancreatic cancer incidence is really increasing, “incidence and mortality would be expected to increase concurrently, as would early- and late-stage diagnoses,” the researchers explained.
  • The researchers collected data on pancreatic cancer incidence, histology, and stage distribution for individuals aged 15-39 years from US Cancer Statistics, a database covering almost the entire US population from 2001 to 2020. Pancreatic cancer mortality data from the same timeframe came from the National Vital Statistics System.
  • The researchers looked at four histologic categories: Adenocarcinoma, the dominant pancreatic cancer histology, as well as more rare subtypes — endocrine and solid pseudopapillary — and “other” category. Researchers also categorized stage-specific incidence as early stage (in situ or localized) or late stage (regional or distant).

TAKEAWAY:

  • The incidence of pancreatic cancer increased 2.1-fold in young women (incidence, 3.3-6.9 per million) and 1.6-fold in young men (incidence, 3.9-6.2 per million) between 2001 and 2019. However, mortality rates remained stable for women (1.5 deaths per million; annual percent change [AAPC], −0.5%; 95% CI, –1.4% to 0.5%) and men (2.5 deaths per million; AAPC, –0.1%; 95% CI, –0.8% to 0.6%) over this period.
  • Looking at cancer subtypes, the increase in incidence was largely caused by early-stage endocrine cancer and solid pseudopapillary neoplasms in women, not adenocarcinoma (which remained stable over the study period).
  • Looking at cancer stage, most of the increase in incidence came from detection of smaller tumors (< 2 cm) and early-stage cancer, which rose from 0.6 to 3.7 per million in women and from 0.4 to 2.2 per million in men. The authors also found no statistically significant change in the incidence of late-stage cancer in women or men.
  • Rates of surgical treatment for pancreatic cancer increased, more than tripling among women (from 1.5 to 4.7 per million) and more than doubling among men (from 1.1 to 2.3 per million).

IN PRACTICE:

“Pancreatic cancer now can be another cancer subject to overdiagnosis: The detection of disease not destined to cause symptoms or death,” the authors concluded. “Although the observed changes in incidence are small, overdiagnosis is especially concerning for pancreatic cancer, as pancreatic surgery has substantial risk for morbidity (in particular, pancreatic fistulas) and mortality.”

SOURCE:

The study, with first author Vishal R. Patel, MD, MPH, and corresponding author H. Gilbert Welch, MD, MPH, from Brigham and Women’s Hospital, Boston, was published online on November 19 in Annals of Internal Medicine.

LIMITATIONS:

The study was limited by the lack of data on the method of cancer detection, which may have affected the interpretation of the findings.

DISCLOSURES:

Disclosure forms are available with the article online.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

TOPLINE:

The increase in incidence of pancreatic cancer among young Americans is largely caused by improved detection of early-stage endocrine cancer, not an increase in pancreatic adenocarcinoma. Given the stable mortality rates in this population, the increase in incidence likely reflects previously undetected cases instead of a true rise in new cases, researchers say.

METHODOLOGY:

  • Data from several registries have indicated that the incidence of pancreatic cancer among younger individuals, particularly women, is on the rise in the United States and worldwide.
  • In a new analysis, researchers wanted to see if the observed increase in pancreatic cancer incidence among young Americans represented a true rise in cancer occurrence or indicated greater diagnostic scrutiny. If pancreatic cancer incidence is really increasing, “incidence and mortality would be expected to increase concurrently, as would early- and late-stage diagnoses,” the researchers explained.
  • The researchers collected data on pancreatic cancer incidence, histology, and stage distribution for individuals aged 15-39 years from US Cancer Statistics, a database covering almost the entire US population from 2001 to 2020. Pancreatic cancer mortality data from the same timeframe came from the National Vital Statistics System.
  • The researchers looked at four histologic categories: Adenocarcinoma, the dominant pancreatic cancer histology, as well as more rare subtypes — endocrine and solid pseudopapillary — and “other” category. Researchers also categorized stage-specific incidence as early stage (in situ or localized) or late stage (regional or distant).

TAKEAWAY:

  • The incidence of pancreatic cancer increased 2.1-fold in young women (incidence, 3.3-6.9 per million) and 1.6-fold in young men (incidence, 3.9-6.2 per million) between 2001 and 2019. However, mortality rates remained stable for women (1.5 deaths per million; annual percent change [AAPC], −0.5%; 95% CI, –1.4% to 0.5%) and men (2.5 deaths per million; AAPC, –0.1%; 95% CI, –0.8% to 0.6%) over this period.
  • Looking at cancer subtypes, the increase in incidence was largely caused by early-stage endocrine cancer and solid pseudopapillary neoplasms in women, not adenocarcinoma (which remained stable over the study period).
  • Looking at cancer stage, most of the increase in incidence came from detection of smaller tumors (< 2 cm) and early-stage cancer, which rose from 0.6 to 3.7 per million in women and from 0.4 to 2.2 per million in men. The authors also found no statistically significant change in the incidence of late-stage cancer in women or men.
  • Rates of surgical treatment for pancreatic cancer increased, more than tripling among women (from 1.5 to 4.7 per million) and more than doubling among men (from 1.1 to 2.3 per million).

IN PRACTICE:

“Pancreatic cancer now can be another cancer subject to overdiagnosis: The detection of disease not destined to cause symptoms or death,” the authors concluded. “Although the observed changes in incidence are small, overdiagnosis is especially concerning for pancreatic cancer, as pancreatic surgery has substantial risk for morbidity (in particular, pancreatic fistulas) and mortality.”

SOURCE:

The study, with first author Vishal R. Patel, MD, MPH, and corresponding author H. Gilbert Welch, MD, MPH, from Brigham and Women’s Hospital, Boston, was published online on November 19 in Annals of Internal Medicine.

LIMITATIONS:

The study was limited by the lack of data on the method of cancer detection, which may have affected the interpretation of the findings.

DISCLOSURES:

Disclosure forms are available with the article online.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
