Bimekizumab effective for axSpA with or without prior TNFi treatment
GHENT, BELGIUM – Patients with nonradiographic or radiographic axial spondyloarthritis (axSpA) experienced clinically relevant treatment responses to bimekizumab (Bimzelx) at similar rates that significantly exceeded placebo, regardless of prior experience with a tumor necrosis factor (TNF) inhibitor, according to results from two phase 3 trials presented at the 13th International Congress on Spondyloarthritides.
In addition, around half of patients with either nonradiographic or radiographic disease achieved complete remission of enthesitis by week 16 of treatment with bimekizumab. The drug, a humanized, monoclonal antibody dually inhibiting interleukins (IL) 17A and 17F, is approved in the European Union for treating adults with moderate to severe plaque psoriasis.
“Bimekizumab blockade works independently of axial spondyloarthritis pretreatment, which means this drug specifically blocks something that other drugs do not reach,” said Xenofon Baraliakos, MD, professor of internal medicine and rheumatology at Ruhr University Bochum (Germany). He presented 24-week data on the use of bimekizumab.
The BE MOBILE 1 trial involved 256 patients with nonradiographic axSpA, whereas BE MOBILE 2 involved 232 patients with radiographic axSpA. In both trials, bimekizumab 160 mg was administered subcutaneously every 4 weeks, and at week 16, all patients, including those who had received placebo, received open-label bimekizumab for another 8 weeks. This news organization previously reported results from BE MOBILE 2 that were presented at the European Alliance of Associations for Rheumatology (EULAR) 2022 annual meeting.
In Ghent, referring to the nonradiographic patients, Dr. Baraliakos said in an interview, “We saw a very clear response to the active drug even after 2 weeks. The curves separated out from placebo. The week 16 primary analysis showed patients on bimekizumab did significantly better, [and there was] a similar response in those who switched to [open-label] bimekizumab after placebo” at week 16.
At week 24, 52.3% of patients with nonradiographic disease on continuous bimekizumab achieved the trial’s primary outcome of 40% improvement in Assessment in Spondyloarthritis International Society response criteria (ASAS 40), compared with 46.8% of patients who received placebo and then switched to open-label bimekizumab at week 16; in the latter group, the rate rose from 21.4% at week 16. For comparison, 47.7% of those on bimekizumab had achieved ASAS 40 at week 16.
At week 24 in BE MOBILE 2, 53.8% of patients with radiographic disease on continuous bimekizumab met ASAS 40 criteria, as did 56.8% of patients who switched from placebo to open-label bimekizumab, rising from 22.5% with placebo and 44.8% with bimekizumab at week 16.
Audience member Fabian Proft, MD, of Charité Medical University, Berlin, commented on the latest results as well as wider bimekizumab findings, including those relating to psoriasis. “When we compare this to drugs that are already approved and available, we can assume that bimekizumab is equally effective to existing ones,” he said, noting that “there is the additional option in patients with psoriasis, where it seems to be the most effective drug for this indication. If I had a patient with radiographic or nonradiographic axial SpA and who also had significant psoriasis, then bimekizumab would be my choice of treatment.”
Targeting IL-17A and IL-17F in one drug
In the BE MOBILE 1 study, Dr. Baraliakos and coinvestigators looked at whether inhibiting IL-17F as well as IL-17A “makes sense” in terms of clinical benefits in patients with axSpA.
“Previous experience with IL-17A inhibitors shows they work well but still miss some patients,” Dr. Baraliakos said, adding that, “the hope is that by blocking both IL-17A and IL-17F, the response will be a bit better in terms of both greater response and longevity of response than [with an] IL-17A [inhibitor] alone.”
Patients in BE MOBILE 1 were typical adult patients with nonradiographic axSpA who fulfilled ASAS classification criteria and had elevated C-reactive protein (CRP) and/or sacroiliitis on MRI. All patients were older than 18 years and had a mean age of 39 years. In each arm, 51%-57% were men. Overall, patients had a mean of 9 years of symptoms and a mean Ankylosing Spondylitis Disease Activity Score of 3.7 in both patient groups (placebo and bimekizumab).
All had active disease (Bath Ankylosing Spondylitis Disease Activity Index ≥ 4 and spinal pain ≥ 4) at baseline and demonstrated failure to respond to two different NSAIDs or had a history of intolerance to or contraindication to NSAIDs. Patients had previously received up to one TNF inhibitor (13.5% in the placebo group and 7.8% in the bimekizumab group).
The primary outcome compared rates of response according to ASAS 40 criteria, which comprise patient global assessment of disease, spinal pain, function (as assessed by the Bath Ankylosing Spondylitis Functional Index [BASFI]), and inflammation (stiffness).
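For readers unfamiliar with how such a response is scored, the ASAS 40 rule can be sketched in code. This is an illustrative helper only, not the trial's analysis code; it assumes the standard ASAS 40 definition (improvement of at least 40% and at least 2 units on a 0–10 scale in three or more of the four domains, with no worsening at all in the remaining domain).

```python
# Illustrative sketch of the ASAS 40 response rule (assumed standard
# definition; not the trial's actual analysis code).
# Each of the four domains (patient global assessment, spinal pain,
# function [BASFI], inflammation/stiffness) is scored 0-10, higher = worse.

def asas40_response(baseline, follow_up):
    """Return True if the change from baseline meets ASAS 40:
    >=40% relative and >=2-unit absolute improvement in at least
    three of four domains, with no worsening in the remaining domain."""
    improved = 0
    worsened = 0
    for before, after in zip(baseline, follow_up):
        change = before - after  # positive = improvement
        if change >= 2 and before > 0 and change / before >= 0.4:
            improved += 1
        elif change < 0:
            worsened += 1
    return improved >= 3 and worsened == 0

# Example: substantial improvement in three domains, mild improvement
# (below threshold) in the fourth -> qualifies as an ASAS 40 responder.
baseline = [7.0, 8.0, 6.0, 7.0]
week16 = [3.0, 4.0, 5.5, 3.0]
print(asas40_response(baseline, week16))  # prints True
```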
Early response seen regardless of previous TNF inhibitor experience
“We saw response to bimekizumab very early in our patients at 16 weeks. The amount of response was higher than that observed with IL-17A alone,” Dr. Baraliakos said in an interview. “It’s understood that IL-17A and IL-17F do not work together on the inflammatory cascade, but work separately, and this might explain the findings whereby this drug captures more inflammation.”
Dr. Baraliakos highlighted the unique response rates seen with bimekizumab regardless of past TNF inhibitor use. “The TNF inhibitor–experienced patients responded as well as the TNF inhibitor–naive ones. This is unusual because nonresponders to other drugs are usually more severely affected and have a lower chance of showing response to any drug. Also, we did not see this response in patients treated with IL-17A only.”
At 16 weeks, patients with nonradiographic disease without a past history of using a TNF inhibitor had ASAS 40 responses at rates of 46.6% with bimekizumab and 22.9% with placebo. These rates in patients with past TNF inhibitor use were 60% with bimekizumab and 11.8% with placebo.
Statistically significant differences between bimekizumab and placebo occurred for all primary and secondary outcomes. “This includes the MRI inflammation findings in bimekizumab-treated patients,” Dr. Baraliakos reported.
Complete resolution of enthesitis was also observed. By week 24, enthesitis completely resolved in 47.9% of patients with nonradiographic disease on continuous bimekizumab and 43.5% of those patients who switched from placebo to bimekizumab. In patients with radiographic disease, complete resolution occurred in 53% of those on continuous bimekizumab and 49.3% of patients who switched at week 16. “This was an excellent outcome,” Dr. Baraliakos said.
The safety profile at 24 weeks confirmed prior findings at 16 weeks in which the most common treatment-emergent adverse events with bimekizumab were nasopharyngitis (9.4%), upper respiratory tract infection (7%), and oral candidiasis (3.1%); fungal infections overall occurred in 7% taking bimekizumab.
“We saw slightly higher fungal infections, but this is because we block IL-17A and IL-17F, and [the risk for these infections] is linked to the mechanism of action. But we can deal with this,” Dr. Baraliakos said.
The trials were sponsored by UCB. Dr. Baraliakos disclosed serving on the speakers bureau and as a paid instructor and consultant for AbbVie, Bristol-Myers Squibb, Chugai, Eli Lilly, Galapagos, Gilead, Merck Sharp & Dohme, Novartis, Pfizer, and UCB. Dr. Proft disclosed serving on speakers bureaus for Amgen, AbbVie, Bristol-Myers Squibb, Celgene, Janssen, Merck Sharp & Dohme, Novartis, Pfizer, Roche, and UCB; being a consultant to Novartis; and receiving grant or research support from Novartis, UCB, and Lilly.
AT THE 2022 SPA CONGRESS
Largest-ever study into the effects of cannabis on the brain
The largest-ever independent study into the effects of cannabis on the brain is being carried out in the United Kingdom.
Even though cannabis is the most commonly used illegal drug in the United Kingdom and medicinal cannabis has been legal there since 2018, little is known about why some people react badly to it while others seem to benefit from it.
According to Home Office figures on drug use from 2019, 7.6% of adults aged 16-59 used cannabis in the previous year.
Medicinal cannabis in the United Kingdom can only be prescribed if no other licensed medicine could help the patient. At the moment, GPs can’t prescribe it, only specialist hospital doctors can. The National Health Service says it can only be used in three circumstances: in rare, severe epilepsy; to deal with chemotherapy side effects such as nausea; or to help with multiple sclerosis.
As part of the Cannabis&Me study, King’s College London (KCL) needs to get 3,000 current cannabis users and 3,000 non–cannabis users to take part in an online survey, with a third of those survey respondents then taking part in a face-to-face assessment that includes virtual reality (VR) and psychological analysis. The study also aims to determine how the DNA of cannabis users and their endocannabinoid system impacts their experiences, both negative and positive, with the drug.
The study is spearheaded by Marta Di Forti, MD, PhD, and has been allocated over £2.5 million in funding by the Medical Research Council.
This news organization asked Dr. Di Forti about the study.
Question: How do you describe the study?
Answer: “It’s a really unique study. We are aiming to see what’s happening to people using cannabis in the privacy of their homes for medicinal, recreational reasons, or whatever other reason.
“The debate on cannabis has always been quite polarized. There have been people who experience adversities with cannabis use, especially psychosis, whose families may perhaps like cannabis to be abolished if possible. Then there are other people who are saying they get positive benefits from using cannabis.”
Q: So where does the study come in?
A: “The study wants to bring the two sides of the argument together and understand what’s really happening. The group I see as a clinician comes to severe harm when they use cannabis regularly. We want to find out who they are and whether we can identify them. While we need to make sure they never come to harm when using cannabis, we need to consider others who won’t come to harm from using cannabis and give them a chance to use it in a way that’s beneficial.”
Q: How does the study work?
A: “The first step of the study is to use an online questionnaire that can be filled in by anyone aged 18-45 who lives in the London area or can travel here if selected. The first set of questions gives a general idea of their cannabis use: ‘Why do they use it?’ ‘What are its benefits?’ Then, general questions on what their life has been like up to that point: ‘Did they have any adversities in childhood?’ ‘How is their mood and anxiety levels?’ ‘Do they experience any paranoid responses in everyday life?’ It probably takes between 30 and 40 minutes to fill out the questionnaire.”
Q: Can you explain about paranoid responses?
A: “We go through the questionnaires looking at people’s paranoid response to everyday life, not in a clinical disorder term, just in terms of the differences in how we respond to certain circumstances. For example: ‘How do you feel if someone’s staring at you on the Tube?’ Some people are afraid, some feel uncomfortable, some people don’t notice, and others think a person is staring at them as they look good or another such positive feeling. So, we give people a paranoia score and will invite some at the top and some at the bottom of that score for a face-to-face assessment. We want to select those people who are using cannabis daily and [who] are getting either no paranoia or high paranoia.”
Q: What happens at the face-to-face assessments?
A: “We do two things which are very novel. We ask them to take part in a virtual reality experience. They are in a lovely shop and within this experience they come across challenges, which may or may not induce a benign paranoia response. We will ask them to donate a sample of blood before they go into the VR set. We will test for tetrahydrocannabinol (THC) and cannabidiol (CBD). We will also look at the metabolites of the two. People don’t take into account how differently individuals metabolize cannabis, which could be one of the reasons why some people can tolerate it and others can’t.”
Q: There’s also a genetic aspect of the study?
A: “From the same sample, we will extract DNA to look at the genetics across the genome and compare genetic variations between high and low paranoia in the context of cannabis use. Also, we will look at the epigenetics, as we have learned from neuroscience, and also cancer, that sometimes a substance we ingest has an effect on our health. It’s perhaps an interaction with the way our DNA is written but also with the changes to the way our DNA is read and translated into biology if exposed to that substance. We know that smoking tobacco does have an impact at an epigenetic level on the DNA. We do know that in people who stop smoking, these impacts on the epigenetics are partially reversed. This work hasn’t been done properly for cannabis.
“There have been four published studies that have looked at the effect of cannabis use on epigenetics, but they have been quite inconclusive, and they haven’t looked at large numbers of current users taking into account how much they are using. Moreover, we do know that when THC and CBD get into our bodies, they interact with something that is already embedded in our biology, which is the endocannabinoid system. Therefore, in the blood samples we also aim to measure the levels of the endocannabinoids we naturally produce.
“All of this data will then be analyzed to see if we can get close to understanding what makes some cannabis users susceptible to paranoia while others who are using cannabis get some benefits, even in the domain of mental health.”
Q: Who are you looking for to take part in your study?
A: “What we don’t want is to get only people who are the classic friends and family of academics to do the study. We want a representative sample of people out there who are using cannabis. My ideal candidate would be someone who hates me and usually sends me abusive emails saying I’m against cannabis, which is wrong. All I want to find out is who is susceptible to harm which will keep everybody else safe. We are not trying to demonize cannabis; it’s exactly the opposite. We would like people from all ethnic and socioeconomic backgrounds to join to give voice to everyone out there using cannabis, the reasons why, and the effects they experience.”
Q: Will this study perhaps give more information on when it’s appropriate to prescribe medicinal cannabis, as it’s still quite unusual for it to be prescribed in the United Kingdom, isn’t it?
A: “Absolutely spot on. That’s exactly the point. We want to hear from people who are receiving medicinal cannabis as a prescription, as they are likely to take it on a daily basis, and daily use is what epidemiological studies have linked to the highest risk of psychosis. There will be people taking THC every day for pain, nausea, for Crohn’s disease, and more.
“Normally when you receive a prescription for a medication the physician in charge will tell you the potential side effects which will be monitored to make sure it’s safe, and you may have to swap to a different medication. Now this isn’t really happening with medicinal cannabis, which is one of the reasons clinicians are anxious about prescribing it, and they have been criticized for not prescribing it very much. There’s much less structure and guidance about ‘psychosis-related’ side effects monitoring. If we can really identify those people who are likely to develop psychosis or disabling paranoia when they use cannabis, physicians might be more prepared to prescribe more widely when indicated.
“You could even have a virtual reality scenario available as a screening tool when you get prescribed medicinal cannabis, to see if there are changes in your perception of the world, which is ultimately what psychosis is about. Could this be a way of implementing safe prescribing which will encourage physicians to use safe cannabis compounds and make some people less anxious about it?
“This study is not here to highlight the negativity of cannabis, on the contrary it’s to understand how it can be used recreationally, but even more important, medicinally in a safe way so people that are coming to no harm can continue to do so and people who are at risk can be kept safe, or at least monitored adequately.”
A version of this article first appeared on Medscape UK.
The largest-ever independent study into the effects of cannabis on the brain is being carried out in the United Kingdom.
Even though cannabis is the most commonly used illegal drug in the United Kingdom and medicinal cannabis has been legal there since 2018, little is known about why some people react badly to it while others seem to benefit from it.
According to Home Office figures on drug use from 2019, 7.6% of adults aged 16-59 used cannabis in the previous year.
Medicinal cannabis in the United Kingdom can only be prescribed if no other licensed medicine could help the patient. At the moment, GPs can’t prescribe it, only specialist hospital doctors can. The National Health Service says it can only be used in three circumstances: in rare, severe epilepsy; to deal with chemotherapy side effects such as nausea; or to help with multiple sclerosis.
As part of the Cannabis&Me study, King’s College London (KCL) needs to recruit 3,000 current cannabis users and 3,000 non–cannabis users to take part in an online survey, with a third of those survey respondents then taking part in a face-to-face assessment that includes virtual reality (VR) and psychological analysis. The study also aims to determine how the DNA of cannabis users and their endocannabinoid system impact their experiences, both negative and positive, with the drug.
The study is spearheaded by Marta Di Forti, MD, PhD, and has been allocated over £2.5 million in funding by the Medical Research Council.
This news organization asked Dr. Di Forti about the study.
Question: How do you describe the study?
Answer: “It’s a really unique study. We are aiming to see what’s happening to people using cannabis in the privacy of their homes for medicinal, recreational reasons, or whatever other reason.
“The debate on cannabis has always been quite polarized. There have been people who experience adversities with cannabis use, especially psychosis, whose families may perhaps like cannabis to be abolished if possible. Then there are other people who are saying they get positive benefits from using cannabis.”
Q: So where does the study come in?
A: “The study wants to bring the two sides of the argument together and understand what’s really happening. The group I see as a clinician comes to severe harm when they use cannabis regularly. We want to find out who they are and whether we can identify them. While we need to make sure they never come to harm when using cannabis, we need to consider others who won’t come to harm from using cannabis and give them a chance to use it in a way that’s beneficial.”
Q: How does the study work?
A: “The first step of the study is to use an online questionnaire that can be filled in by anyone aged 18-45 who lives in the London area or can travel here if selected. The first set of questions are a general idea of their cannabis use: ‘Why do they use it?’ ‘What are its benefits?’ Then, general questions on what their life has been like up to that point: ‘Did they have any adversities in childhood?’ ‘How is their mood and anxiety levels?’ ‘Do they experience any paranoid responses in everyday life?’ It probably takes between 30 and 40 minutes to fill out the questionnaire.”
Q: Can you explain about paranoid responses?
A: “We go through the questionnaires looking at people’s paranoid response to everyday life, not in a clinical disorder term, just in terms of the differences in how we respond to certain circumstances. For example: ‘How do you feel if someone’s staring at you on the Tube?’ Some people are afraid, some feel uncomfortable, some people don’t notice, and others think a person is staring at them as they look good or another such positive feeling. So, we give people a paranoia score and will invite some at the top and some at the bottom of that score for a face-to-face assessment. We want to select those people who are using cannabis daily and getting either no paranoia or high paranoia.”
Q: What happens at the face-to-face assessments?
A: “We do two things which are very novel. We ask them to take part in a virtual reality experience. They are in a lovely shop and within this experience they come across challenges, which may or may not induce a benign paranoia response. We will ask them to donate a sample of blood before they go into the VR set. We will test for tetrahydrocannabinol (THC) and cannabidiol (CBD). We will also look at the metabolites of the two. People don’t take into account how differently individuals metabolize cannabis, which could be one of the reasons why some people can tolerate it and others can’t.”
Q: There’s also a genetic aspect of the study?
A: “From the same sample, we will extract DNA to look at the genetics across the genome and compare genetic variations between high and low paranoia in the context of cannabis use. Also, we will look at the epigenetics, as we have learned from neuroscience, and also cancer, that sometimes a substance we ingest has an effect on our health. It’s perhaps an interaction with the way our DNA is written but also with the changes to the way our DNA is read and translated into biology if exposed to that substance. We know that smoking tobacco does have an impact at an epigenetic level on the DNA. We do know that in people who stop smoking, these impacts on the epigenetics are partially reversed. This work hasn’t been done properly for cannabis.
“There have been four published studies that have looked at the effect of cannabis use on epigenetics, but they have been quite inconclusive, and they haven’t looked at large numbers of current users taking into account how much they are using. Moreover, we do know that when THC and CBD get into our bodies, they interact with something that is already embedded in our biology, which is the endocannabinoid system. Therefore, in the blood samples we also aim to measure the levels of the endocannabinoids we naturally produce.
“All of this data will then be analyzed to see if we can get close to understanding what makes some cannabis users susceptible to paranoia while others who are using cannabis get some benefits, even in the domain of mental health.”
Q: Who are you looking for to take part in your study?
A: “What we don’t want is to get only people who are the classic friends and family of academics to do the study. We want a representative sample of people out there who are using cannabis. My ideal candidate would be someone who hates me and usually sends me abusive emails saying I’m against cannabis, which is wrong. All I want to find out is who is susceptible to harm which will keep everybody else safe. We are not trying to demonize cannabis; it’s exactly the opposite. We would like people from all ethnic and socioeconomic backgrounds to join to give voice to everyone out there using cannabis, the reasons why, and the effects they experience.”
Q: Will this study perhaps give more information on when it’s appropriate to prescribe medicinal cannabis, as it’s still quite unusual for it to be prescribed in the United Kingdom, isn’t it?
A: “Absolutely spot on. That’s exactly the point. We want to hear from people who are receiving medicinal cannabis as a prescription, as they are likely to take it on a daily basis, and daily use is what epidemiological studies have linked to the highest risk of psychosis. There will be people taking THC every day for pain, nausea, for Crohn’s disease, and more.
“Normally when you receive a prescription for a medication the physician in charge will tell you the potential side effects which will be monitored to make sure it’s safe, and you may have to swap to a different medication. Now this isn’t really happening with medicinal cannabis, which is one of the reasons clinicians are anxious about prescribing it, and they have been criticized for not prescribing it very much. There’s much less structure and guidance about ‘psychosis-related’ side effects monitoring. If we can really identify those people who are likely to develop psychosis or disabling paranoia when they use cannabis, physicians might be more prepared to prescribe more widely when indicated.
“You could even have a virtual reality scenario available as a screening tool when you get prescribed medicinal cannabis, to see if there are changes in your perception of the world, which is ultimately what psychosis is about. Could this be a way of implementing safe prescribing which will encourage physicians to use safe cannabis compounds and make some people less anxious about it?
“This study is not here to highlight the negativity of cannabis, on the contrary it’s to understand how it can be used recreationally, but even more important, medicinally in a safe way so people that are coming to no harm can continue to do so and people who are at risk can be kept safe, or at least monitored adequately.”
A version of this article first appeared on Medscape UK.
Vitamin D supplementation shows no COVID-19 prevention
Two large studies out of the United Kingdom and Norway show vitamin D supplementation has no benefit – as low dose, high dose, or in the form of cod liver oil supplementation – in preventing COVID-19 or acute respiratory tract infections, regardless of whether individuals are deficient or not.
The studies, published in the BMJ, underscore that “vaccination is still the most effective way to protect people from COVID-19, and vitamin D and cod liver oil supplementation should not be offered to healthy people with normal vitamin D levels,” writes Peter Bergman, MD, of the Karolinska Institute, Stockholm, in an editorial published alongside the studies.
Suboptimal levels of vitamin D are known to be associated with an increased risk of acute respiratory infections, and some observational studies have linked low 25-hydroxyvitamin D (25[OH]D) with more severe COVID-19; however, data on a possible protective effect of vitamin D supplementation in preventing infection have been inconsistent.
U.K. study compares doses
To further investigate the relationship with infections, including COVID-19, in a large cohort, the authors of the first of the two BMJ studies, a phase 3 open-label trial, enrolled 6,200 people in the United Kingdom aged 16 and older between December 2020 and June 2021 who were not taking vitamin D supplements at baseline.
Half of participants were offered a finger-prick blood test, and of the 2,674 who accepted, 86.3% were found to have low concentrations of 25(OH)D (< 75 nmol/L). These participants were provided with vitamin D supplementation at a lower (800 IU/day; n = 1,328) or higher dose (3,200 IU/day; n = 1,346) for 6 months. The other half of the group received no tests or supplements.
The results showed minimal differences between groups in terms of rates of developing at least one acute respiratory infection, which occurred in 5% of those in the lower-dose group, 5.7% in the higher-dose group, and 4.6% of participants not offered supplementation.
Similarly, there were no significant differences in the development of real-time PCR-confirmed COVID-19, with rates of 3.6% in the lower-dose group, 3.0% in the higher-dose group, and 2.6% in the group not offered supplementation.
The study is “the first phase 3 randomized controlled trial to evaluate the effectiveness of a test-and-treat approach for correction of suboptimal vitamin D status to prevent acute respiratory tract infections,” report the authors, led by Adrian R. Martineau, MD, PhD, of Barts and The London School of Medicine and Dentistry, Queen Mary University of London.
While uptake and supplementation in the study were favorable, “no statistically significant effect of either dose was seen on the primary outcome of swab test, doctor-confirmed acute respiratory tract infection, or on the major secondary outcome of swab test-confirmed COVID-19,” they conclude.
Traditional use of cod liver oil of benefit?
In the second study, researchers in Norway, led by Arne Soraas, MD, PhD, of the department of microbiology, Oslo University Hospital, evaluated whether that country’s long-held tradition of consuming cod liver oil during the winter to prevent vitamin D deficiency could affect the development of COVID-19 or outcomes.
For the Cod Liver Oil for COVID-19 Prevention Study (CLOC), a large cohort of 34,601 adults with a mean age of 44.9 years who were not taking daily vitamin D supplements were randomized to receive 5 mL/day of cod liver oil, representing a surrogate dose of 400 IU/day of vitamin D (n = 17,278), or placebo (n = 17,323) for up to 6 months.
In contrast with the first study, the vast majority of patients in the CLOC study (86%) had adequate vitamin D levels, defined as greater than 50 nmol/L, at baseline.
Again, however, the results showed no association between increased vitamin D supplementation with cod liver oil and PCR-confirmed COVID-19 or acute respiratory infections, with approximately 1.3% in each group testing positive for COVID-19 over a median of 164 days.
Supplementation with cod liver oil was also not associated with a reduced risk of any of the coprimary endpoints, including other acute respiratory infections.
“Daily supplementation with cod liver oil, a low-dose vitamin D, eicosapentaenoic acid, and docosahexaenoic acid supplement, for 6 months during the SARS-CoV-2 pandemic among Norwegian adults did not reduce the incidence of SARS-CoV-2 infection, serious COVID-19, or other acute respiratory infections,” the authors report.
Key study limitations
In his editorial, Dr. Bergman underscores the limitations of the two studies – also acknowledged by the authors – including the key confounding role of vaccines that emerged during the studies.
“The null findings of the studies should be interpreted in the context of a highly effective vaccine rolled out during both studies,” Dr. Bergman writes.
In the U.K. study, for instance, whereas only 1.2% of participants were vaccinated at baseline, the rate soared to 89.1% having received at least one dose by study end, potentially masking any effect of vitamin D, he says.
Additionally, for the Norway study, Dr. Bergman notes that cod liver oil also contains a substantial amount of vitamin A, which can be a potent immunomodulator.
“Excessive intake of vitamin A can cause adverse effects and may also interfere with vitamin D-mediated effects on the immune system,” he writes.
With two recent large meta-analyses showing benefits of vitamin D supplementation to be specifically among people who are vitamin D deficient, “a pragmatic approach for the clinician could be to focus on risk groups” for supplementation, Dr. Bergman writes.
“[These include] those who could be tested before supplementation, including people with dark skin, or skin that is rarely exposed to the sun, pregnant women, and elderly people with chronic diseases.”
The U.K. trial was supported by Barts Charity, Pharma Nord, the Fischer Family Foundation, DSM Nutritional Products, the Exilarch’s Foundation, the Karl R. Pfleger Foundation, the AIM Foundation, Synergy Biologics, Cytoplan, the Clinical Research Network of the U.K. National Institute for Health and Care Research, the HDR UK BREATHE Hub, the U.K. Research and Innovation Industrial Strategy Challenge Fund, Thornton & Ross, Warburtons, Hyphens Pharma, and philanthropist Matthew Isaacs.
The CLOC trial was funded by Orkla Health, the manufacturer of the cod liver oil used in the trial. Dr. Bergman has reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Two large studies out of the United Kingdom and Norway show vitamin D supplementation has no benefit – as low dose, high dose, or in the form of cod liver oil supplementation – in preventing COVID-19 or acute respiratory tract infections, regardless of whether individuals are deficient or not.
The studies, published in the BMJ, underscore that “vaccination is still the most effective way to protect people from COVID-19, and vitamin D and cod liver oil supplementation should not be offered to healthy people with normal vitamin D levels,” writes Peter Bergman, MD, of the Karolinska Institute, Stockholm, in an editorial published alongside the studies.
Suboptimal levels of vitamin D are known to be associated with an increased risk of acute respiratory infections, and some observational studies have linked low 25-hydroxyvitamin D (25[OH]D) with more severe COVID-19; however, data on a possible protective effect of vitamin D supplementation in preventing infection have been inconsistent.
U.K. study compares doses
To further investigate the relationship with infections, including COVID-19, in a large cohort, the authors of the first of the two BMJ studies, a phase 3 open-label trial, enrolled 6,200 people in the United Kingdom aged 16 and older between December 2020 and June 2021 who were not taking vitamin D supplements at baseline.
Half of participants were offered a finger-prick blood test, and of the 2,674 who accepted, 86.3% were found to have low concentrations of 25(OH)D (< 75 nmol/L). These participants were provided with vitamin D supplementation at a lower (800 IU/day; n = 1,328) or higher dose (3,200 IU/day; n = 1,346) for 6 months. The other half of the group received no tests or supplements.
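The 75 nmol/L cutoff used in the trial is often reported in ng/mL in other literature. A small, hypothetical helper (not part of the study) sketches the standard molar conversion for 25(OH)D, under which 75 nmol/L corresponds to roughly 30 ng/mL:

```python
# Hypothetical helper, not from the trial: convert 25(OH)D concentrations.
# The factor 2.496 is the standard molar conversion for 25-hydroxyvitamin D.

CONVERSION_FACTOR = 2.496  # nmol/L per ng/mL for 25(OH)D


def nmol_per_l_to_ng_per_ml(nmol_l: float) -> float:
    """Convert a 25(OH)D concentration from nmol/L to ng/mL."""
    return nmol_l / CONVERSION_FACTOR


def is_low_25ohd(nmol_l: float, cutoff_nmol_l: float = 75.0) -> bool:
    """Return True if the level falls below the trial's 75 nmol/L cutoff."""
    return nmol_l < cutoff_nmol_l


print(round(nmol_per_l_to_ng_per_ml(75.0), 1))  # ~30.0 ng/mL
print(is_low_25ohd(60.0))  # True
```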
The results showed minimal differences between groups in terms of rates of developing at least one acute respiratory infection, which occurred in 5% of those in the lower-dose group, 5.7% in the higher-dose group, and 4.6% of participants not offered supplementation.
Similarly, there were no significant differences in the development of real-time PCR-confirmed COVID-19, with rates of 3.6% in the lower-dose group, 3.0% in the higher-dose group, and 2.6% in the group not offered supplementation.
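A rough back-of-envelope check illustrates why differences of this size fall short of significance. This is not the trial's own analysis; the event counts below are approximations reconstructed from the reported percentages, and the size of the group not offered supplements (~3,100, half of 6,200) is an assumption:

```python
import math


def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z statistic, p value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail
    return z, p_value


# Approximate counts: 5% of 1,328 in the lower-dose group had an acute
# respiratory infection vs. 4.6% of ~3,100 not offered supplementation.
z, p = two_proportion_z(66, 1328, 143, 3100)
print(f"z = {z:.2f}, p = {p:.2f}")  # well short of the 1.96 significance threshold
```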
The study is “the first phase 3 randomized controlled trial to evaluate the effectiveness of a test-and-treat approach for correction of suboptimal vitamin D status to prevent acute respiratory tract infections,” report the authors, led by Adrian R. Martineau, MD, PhD, of Barts and The London School of Medicine and Dentistry, Queen Mary University of London.
While uptake and supplementation in the study were favorable, “no statistically significant effect of either dose was seen on the primary outcome of swab test- or doctor-confirmed acute respiratory tract infection, or on the major secondary outcome of swab test-confirmed COVID-19,” they conclude.
Traditional use of cod liver oil of benefit?
In the second study, researchers in Norway, led by Arne Soraas, MD, PhD, of the department of microbiology, Oslo University Hospital, evaluated whether that country’s long-held tradition of consuming cod liver oil during the winter to prevent vitamin D deficiency could affect the development of COVID-19 or outcomes.
For the Cod Liver Oil for COVID-19 Prevention Study (CLOC), a large cohort of 34,601 adults with a mean age of 44.9 years who were not taking daily vitamin D supplements were randomized to receive 5 mL/day of cod liver oil, representing a surrogate dose of 400 IU/day of vitamin D (n = 17,278), or placebo (n = 17,323) for up to 6 months.
In contrast with the first study, the vast majority of patients in the CLOC study (86%) had adequate vitamin D levels, defined as greater than 50 nmol/L, at baseline.
Again, however, the results showed no association between increased vitamin D supplementation with cod liver oil and PCR-confirmed COVID-19 or acute respiratory infections, with approximately 1.3% in each group testing positive for COVID-19 over a median of 164 days.
Supplementation with cod liver oil was also not associated with a reduced risk of any of the coprimary endpoints, including other acute respiratory infections.
“Daily supplementation with cod liver oil, a low-dose vitamin D, eicosapentaenoic acid, and docosahexaenoic acid supplement, for 6 months during the SARS-CoV-2 pandemic among Norwegian adults did not reduce the incidence of SARS-CoV-2 infection, serious COVID-19, or other acute respiratory infections,” the authors report.
Key study limitations
In his editorial, Dr. Bergman underscores the limitations of the two studies – also acknowledged by the authors – including the key confounding role of vaccines that emerged during the studies.
“The null findings of the studies should be interpreted in the context of a highly effective vaccine rolled out during both studies,” Dr. Bergman writes.
In the U.K. study, for instance, only 1.2% of participants were vaccinated at baseline, but 89.1% had received at least one dose by study end, potentially masking any effect of vitamin D, he says.
Additionally, for the Norway study, Dr. Bergman notes that cod liver oil also contains a substantial amount of vitamin A, which can be a potent immunomodulator.
“Excessive intake of vitamin A can cause adverse effects and may also interfere with vitamin D-mediated effects on the immune system,” he writes.
With two recent large meta-analyses showing benefits of vitamin D supplementation to be specifically among people who are vitamin D deficient, “a pragmatic approach for the clinician could be to focus on risk groups” for supplementation, Dr. Bergman writes.
“[These include] those who could be tested before supplementation, including people with dark skin, or skin that is rarely exposed to the sun, pregnant women, and elderly people with chronic diseases.”
The U.K. trial was supported by Barts Charity, Pharma Nord, the Fischer Family Foundation, DSM Nutritional Products, the Exilarch’s Foundation, the Karl R. Pfleger Foundation, the AIM Foundation, Synergy Biologics, Cytoplan, the Clinical Research Network of the U.K. National Institute for Health and Care Research, the HDR UK BREATHE Hub, the U.K. Research and Innovation Industrial Strategy Challenge Fund, Thornton & Ross, Warburtons, Hyphens Pharma, and philanthropist Matthew Isaacs.
The CLOC trial was funded by Orkla Health, the manufacturer of the cod liver oil used in the trial. Dr. Bergman has reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM BMJ
Fish in pregnancy not dangerous after all, says new study
A new study has called into question the decades-long official guidance advising pregnant women to limit consumption of certain fish because of their potentially high mercury content. That advice was based particularly on one 1997 study suggesting a correlation between fetal exposure to methylmercury and cognitive dysfunction at age 7.
The U.K.’s National Health Service currently advises not only pregnant women but also all those who are potentially fertile (those “who are planning a pregnancy or may have a child one day”) to limit oily fish consumption to no more than two portions per week. During pregnancy and while trying to get pregnant, women are advised to avoid shark, swordfish, and marlin altogether.
Suspicions arose from study involving consumption of pilot whale
However, researchers from the University of Bristol (England) now suggest that assumptions generated by the original 1997 study – of a cohort of women in the Faroe Islands – were unwarranted. “It was clearly stated that the methylmercury levels were associated with consumption of pilot whale (a sea mammal, not a fish),” they said.
The pilot whale is a species known to concentrate cadmium and mercury, and indeed in 1989 Faroe Islanders themselves had been advised to limit consumption of both whale meat and blubber, and to abstain completely from liver and kidneys.
Yet, as the authors pointed out, following the 1997 study, “the subsequent assumptions were that seafood in general was responsible for increased mercury levels in the mother.”
New study shows ‘no evidence of harm’
Their new research, published in NeuroToxicology, has now shown that “there is no evidence of harm from these fish,” they said. They recommend that advice for pregnant women should now be revised.
The study drew together analyses of 4,131 pregnant women from the Avon Longitudinal Study of Parents and Children (ALSPAC), also known as the ‘Children of the 90s’ study, with similar detailed studies conducted in the Seychelles. The two populations differ considerably in their frequency of fish consumption: fish is a major component of the diet in the Seychelles, but eaten less frequently in the Avon study area, centered on Bristol.
The team looked for studies using the data from these two contrasting cohorts where mercury levels had been measured during pregnancy and the children followed up at frequent intervals during their childhood. Longitudinal studies in the Seychelles “have not demonstrated harmful cognitive effects in children with increasing maternal mercury levels,” they reported.
The same proved true in the United Kingdom, a more-developed country where fish is eaten less frequently, they found. They summarized the results from various papers that used ALSPAC data and found no adverse associations between total mercury levels measured in maternal whole blood and umbilical cord tissue with children’s cognitive development, in terms of either IQ or scholastic abilities.
In addition, extensive dietary questionnaires during pregnancy had allowed estimates of total fish intake to be calculated, as well as variations in the amount of each type of seafood consumed. “Although seafood is a source of dietary mercury, it appeared to explain a relatively small proportion (9%) of the variation in total blood mercury in our U.K. study population,” they said – actually less than the variance attributable to socio-demographic characteristics of the mother (10.4%).
Positive benefits of eating fish irrespective of type
What mattered was not which types of fish were eaten but whether the woman ate fish or not, which emerged as the most important factor. The mother’s prenatal mercury level was positively associated with her child’s IQ if she had eaten fish in pregnancy, but not if she had not.
“Significantly beneficial associations with prenatal mercury levels were shown for total and performance IQ, mathematical/scientific reasoning, and birth weight, in fish-consuming versus non–fish-consuming mothers,” the authors said. “These beneficial findings are similar to those observed in the Seychelles, where fish consumption is high and prenatal mercury levels are 10 times higher than U.S. levels.”
Caroline Taylor, PhD, senior research fellow and coauthor of the study, said: “We found that the mother’s mercury level during pregnancy is likely to have no adverse effect on the development of the child provided that the mother eats fish. If she did not eat fish, then there was some evidence that her mercury level could have a harmful effect on the child.”
The team said that this was because the essential nutrients in the fish could be protective against the mercury content of the fish. “This could be because of the benefits from the mix of essential nutrients that fish provides, including long-chain fatty acids, iodine, vitamin D and selenium,” said Dr. Taylor.
Women stopped eating any fish ‘to be on the safe side’
The authors called for a change in official guidance. “Health advice to pregnant women concerning consumption of mercury-containing foods has resulted in anxiety, with subsequent avoidance of fish consumption during pregnancy.” Seafood contains many nutrients crucial for children’s growth and development, but “there is the possibility that some women will stop eating any fish ‘to be on the safe side.’ ”
The authors said: “Although advice to pregnant women was generally that fish was good, the accompanying caveat was to avoid fish with high levels of mercury. Psychologically, the latter was the message that women remembered, and the general reaction has been for women to reduce their intake of all seafood.”
Coauthor Jean Golding, emeritus professor of pediatric and perinatal epidemiology at the University of Bristol, said: “It is important that advisories from health professionals revise their advice warning against eating certain species of fish. There is no evidence of harm from these fish, but there is evidence from different countries that such advice can cause confusion in pregnant women. The guidance for pregnancy should highlight ‘Eat at least two portions of fish a week, one of which should be oily’ – and omit all warnings that certain fish should not be eaten.”
The study was funded via core support for ALSPAC by the UK Medical Research Council and the UK Wellcome Trust.
A version of this article first appeared on Medscape UK.
FROM NEUROTOXICOLOGY
Low testosterone may raise risk of COVID hospitalization
Men with low testosterone levels may be at increased risk of hospitalization from COVID-19, researchers have found.
Low testosterone has long been linked to multiple chronic conditions, including obesity, heart disease, and type 2 diabetes, as well as acute conditions, such as heart attack and stroke. A study published earlier in the pandemic suggested that suppressing the sex hormone might protect against COVID-19. The new study, published in JAMA Network Open, is among the first to suggest a link between low testosterone and the risk for severe COVID.
Researchers at Washington University in St. Louis evaluated data from 723 unvaccinated men who had been infected with SARS-CoV-2. Of those, 116 had been diagnosed with hypogonadism, and 180 were receiving testosterone supplementation.
The study found that men whose testosterone levels were less than 200 ng/dL were 2.4 times more likely to experience a severe case of COVID-19 that required hospitalization than were those with normal levels of the hormone. The study accounted for the fact that participants with low testosterone were also more likely to have comorbidities such as diabetes and obesity.
Paresh Dandona, MD, PhD, distinguished professor of medicine and endocrinology at the State University of New York at Buffalo, called the findings “very exciting” and “fundamental.”
“In the world of hypogonadism, this is the first to show that low testosterone makes you vulnerable” to COVID, added Dr. Dandona, who was not involved with the research.
Men who were receiving hormone replacement therapy were at lower risk of hospitalization, compared with those who were not receiving treatment, the study found.
“Testosterone therapy seemed to negate the harmful effects of COVID,” said Sandeep Dhindsa, MD, an endocrinologist at Saint Louis University and lead author of the study.
Approximately 50% more men have died from confirmed COVID-19 than women since the start of the pandemic, according to the Sex, Gender and COVID-19 Project. Previous findings suggesting that sex may be a risk factor for death from COVID prompted researchers to consider whether hormones may play a role in the increased risk among men and whether treatments that suppress androgen levels could cut hospitalizations, but researchers consistently found that androgen suppression was not effective.
“There are other reasons women might be doing better – they may have followed public health guidelines a lot better,” according to Abhinav Diwan, MD, professor of medicine at Washington University in St. Louis, who helped conduct the new study. “It may be chromosomal and not necessarily just hormonal. The differences between men and women go beyond one factor.”
According to the researchers, the findings do not suggest that hormone therapy be used as a preventive measure against COVID.
“We don’t want patients to get excited and start to ask their doctors for testosterone,” Dr. Dhindsa said.
However, viewing low testosterone as a risk factor for COVID could be considered a shift in thinking for some clinicians, according to Dr. Dandona.
“All obese and all [men with] type 2 diabetes should be tested for testosterone, which is the practice in my clinic right now, even if they have no symptoms,” Dr. Dandona said. “Certainly, those with symptoms [of low testosterone] but no diagnosis, they should be tested, too.”
Participants in the study were infected with SARS-CoV-2 early in 2020, before vaccines were available. The researchers did not assess whether the rate of hospitalizations among participants with low testosterone would be different had they been vaccinated.
“Whatever benefits we saw with testosterone might be minor compared to getting the vaccine,” Dr. Dhindsa said.
Dr. Diwan agreed. “COVID hospitalization continues to be a problem, the strains are evolving, and new vaccines are coming in,” he said. “The bottom line is to get vaccinated.”
Dr. Dhindsa has received personal fees from Bayer and Acerus Pharmaceuticals and grants from Clarus Therapeutics outside the submitted work. Dr. Diwan has served as a consultant for the interpretation of echocardiograms for clinical trials for Clario (previously ERT) and has received nonfinancial support from Dewpoint Therapeutics outside the submitted work. Dr. Dandona has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Men with low testosterone may be at greater risk of severe COVID-19 requiring hospitalization, researchers have found.
Low testosterone has long been linked to multiple chronic conditions, including obesity, heart disease, and type 2 diabetes, as well as acute conditions, such as heart attack and stroke. A study published earlier in the pandemic suggested that suppressing the sex hormone might protect against COVID-19. The new study, published in JAMA Network Open, is among the first to suggest a link between low testosterone and the risk for severe COVID.
Researchers at Washington University in St. Louis evaluated data from 723 unvaccinated men who had been infected with SARS-CoV-2. Of those, 116 had been diagnosed with hypogonadism, and 180 were receiving testosterone supplementation.
The study found that men whose testosterone levels were less than 200 ng/dL were 2.4 times more likely to experience a severe case of COVID-19 that required hospitalization than were those with normal levels of the hormone. The study accounted for the fact that participants with low testosterone were also more likely to have comorbidities such as diabetes and obesity.
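The kind of comorbidity adjustment described above can be illustrated with a stratified (Mantel-Haenszel) pooled odds ratio, one standard way to account for a confounder such as diabetes. This is a minimal sketch, not the study's actual method; the counts and strata below are hypothetical.

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across confounder strata (Mantel-Haenszel).

    Each stratum is a 2x2 table (a, b, c, d):
      a = exposed (low testosterone), hospitalized
      b = exposed, not hospitalized
      c = unexposed, hospitalized
      d = unexposed, not hospitalized
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts, stratified by diabetes status
strata = [
    (10, 5, 4, 20),  # without diabetes
    (8, 4, 6, 24),   # with diabetes
]
print(round(mantel_haenszel_or(strata), 2))  # pooled OR for hospitalization
```

Pooling within strata keeps the low-testosterone vs. hospitalization comparison from being distorted by the fact that low-testosterone patients were also more likely to have diabetes or obesity.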
Paresh Dandona, MD, PhD, distinguished professor of medicine and endocrinology at the State University of New York at Buffalo, called the findings “very exciting” and “fundamental.”
“In the world of hypogonadism, this is the first to show that low testosterone makes you vulnerable” to COVID, added Dr. Dandona, who was not involved with the research.
Men who were receiving hormone replacement therapy were at lower risk of hospitalization, compared with those who were not receiving treatment, the study found.
“Testosterone therapy seemed to negate the harmful effects of COVID,” said Sandeep Dhindsa, MD, an endocrinologist at Saint Louis University and lead author of the study.
Approximately 50% more men than women have died from confirmed COVID-19 since the start of the pandemic, according to the Sex, Gender and COVID-19 Project. Earlier findings that male sex may be a risk factor for death from COVID prompted researchers to ask whether hormones play a role in the increased risk among men and whether treatments that suppress androgen levels could cut hospitalizations; studies consistently found, however, that androgen suppression was not effective.
“There are other reasons women might be doing better – they may have followed public health guidelines a lot better,” according to Abhinav Diwan, MD, professor of medicine at Washington University in St. Louis, who helped conduct the new study. “It may be chromosomal and not necessarily just hormonal. The differences between men and women go beyond one factor.”
According to the researchers, the findings do not suggest that hormone therapy be used as a preventive measure against COVID.
“We don’t want patients to get excited and start to ask their doctors for testosterone,” Dr. Dhindsa said.
However, viewing low testosterone as a risk factor for COVID could represent a shift in thinking for some clinicians, according to Dr. Dandona.
“All obese [men] and all [men with] type 2 diabetes should be tested for testosterone, which is the practice in my clinic right now, even if they have no symptoms,” Dr. Dandona said. “Certainly, those with symptoms [of low testosterone] but no diagnosis, they should be tested, too.”
Participants in the study were infected with SARS-CoV-2 early in 2020, before vaccines were available. The researchers did not assess whether the rate of hospitalizations among participants with low testosterone would be different had they been vaccinated.
“Whatever benefits we saw with testosterone might be minor compared to getting the vaccine,” Dr. Dhindsa said.
Dr. Diwan agreed. “COVID hospitalization continues to be a problem, the strains are evolving, and new vaccines are coming in,” he said. “The bottom line is to get vaccinated.”
Dr. Dhindsa has received personal fees from Bayer and Acerus Pharmaceuticals and grants from Clarus Therapeutics outside the submitted work. Dr. Diwan has served as a consultant for the interpretation of echocardiograms for clinical trials for Clario (previously ERT) and has received nonfinancial support from Dewpoint Therapeutics outside the submitted work. Dr. Dandona has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Baseline neuromotor abnormalities persist in schizophrenia
Neuromotor abnormalities in psychotic disorders have long been dismissed as side effects of antipsychotic drugs, but they are gaining new attention as a component of the disease process, with implications for outcomes and management, wrote Victor Peralta, MD, PhD, of Servicio Navarro de Salud, Pamplona, Spain, and colleagues.
Previous research has suggested links between increased levels of parkinsonism, dyskinesia, and neurological soft signs (NSS) and poor symptomatic and functional outcomes, but “the impact of primary neuromotor dysfunction on the long-term course and outcome of psychotic disorders remains largely unknown,” they said.
In a study published in Schizophrenia Research, the investigators identified 243 consecutive patients with schizophrenia admitted to a psychiatric ward at a single center.
Patients were assessed at baseline for variables including parkinsonism, dyskinesia, NSS, and catatonia, and were reassessed 21 years later for the same variables, along with psychopathology, functioning, personal recovery, cognitive performance, and comorbidity.
Overall, baseline dyskinesia and NSS measures were stable over time, with intraclass correlation coefficients (ICCs) of 0.92 and 0.86, respectively, whereas rating stability was low for parkinsonism and catatonia (ICCs of 0.42 and 0.31, respectively).
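The stability figures above are intraclass correlation coefficients comparing baseline and 21-year ratings. Below is a minimal sketch of the common two-way random-effects, absolute-agreement form, ICC(2,1); the ratings matrix is hypothetical, and the study's exact ICC variant is not specified here.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rating.

    `ratings` is an (n_subjects x k_timepoints) matrix, e.g. a baseline
    and a follow-up score for each patient.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-timepoint means

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between timepoints
    ss_err = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical example: perfectly stable ratings give an ICC of 1.0
stable = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
print(icc2_1(stable))  # -> 1.0
```

An ICC near 1 (as for dyskinesia and NSS here) means the rank ordering and level of patients' scores barely changed over the follow-up interval; values near 0.3-0.4 indicate substantial drift.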
Baseline dyskinesia and NSS each were independent predictors of more positive and negative symptoms, poor functioning, and less personal recovery at 21 years. In a multivariate model, neuromotor dysfunction at follow-up was significantly associated with family history of schizophrenia, obstetric complications, neurodevelopmental delay, and premorbid IQ, as well as baseline dyskinesia and NSS; “these variables explained 51% of the variance in the neuromotor outcome, 35% of which corresponded to baseline dyskinesia and NSS,” the researchers said. As for other outcomes, baseline neuromotor ratings predicted a range from 4% for medical comorbidity to 15% for cognitive impairment.
“The distinction between primary and drug-induced neuromotor dysfunction is a very complex issue, mainly because antipsychotic drugs may cause de novo motor dysfunction, as well as improve or worsen the disease-based motor dysfunction,” the researchers explained in their discussion.
Baseline parkinsonism, dyskinesia, and NSS were significantly related to increased risk of antipsychotic exposure over the illness course, possibly because primary neuromotor dysfunction was predictive of greater severity of illness in general, which confounds differentiation between primary and drug-induced motor symptoms, they noted.
The study findings were limited by several factors, including potential selection bias because the sample comprised first-admission psychosis patients, which may limit generalizability, the researchers noted. Other limitations include the use of standard clinical rating scales rather than instrumental procedures to measure neuromotor abnormalities.
However, “our findings confirm the significance of baseline and follow-up neuromotor abnormalities as a core dimension of psychosis,” and future studies “should complement clinical rating scales with instrumental assessment to capture neuromotor dysfunction more comprehensively,” they said.
The results highlight the clinical relevance of examining neuromotor abnormalities as a routine part of practice prior to starting antipsychotics because of their potential as predictors of long-term outcomes “and to disentangle the primary versus drug-induced character of neuromotor impairment in treated patients,” they concluded.
The study was supported by the Spanish Ministry of Economy, Industry, and Competitiveness, and the Regional Government of Navarra. The researchers had no financial conflicts to disclose.
FROM SCHIZOPHRENIA RESEARCH
New Parkinson’s test developed thanks to woman who could smell the disease
The test has been years in the making after academics realized that Joy Milne could smell the condition.
The 72-year-old from Perth, Scotland, has a rare condition that gives her a heightened sense of smell.
She noticed that her late husband, Les, developed a different odor when he was 33 – some 12 years before he was diagnosed with the disease, which causes parts of the brain to become progressively damaged over many years.
Mrs. Milne, dubbed ‘the woman who can smell Parkinson’s,’ described a “musky” aroma, different from his normal scent.
Her observation piqued the interest of scientists who decided to research what she could smell, and whether this could be harnessed to help identify people with the neurological condition.
‘Early phases of research’
Years later, academics at the University of Manchester (England) have made a breakthrough by developing a test that can identify people with Parkinson’s disease using a simple cotton bud run along the back of the neck.
Researchers can examine the sample for molecules linked to Parkinson’s to help determine whether someone has the disease.
While still in the early phases of research, scientists are excited about the prospect of the NHS being able to deploy a simple test for the disease.
There is currently no definitive test for Parkinson’s disease, with diagnosis based on a patient’s symptoms and medical history.
If the new skin swab is successful outside laboratory conditions it could be rolled out to achieve faster diagnosis.
Mrs. Milne told the PA news agency that it was “not acceptable” that people with Parkinson’s had such high degrees of neurologic damage at the time of diagnosis, adding: “I think it has to be detected far earlier – the same as cancer and diabetes, earlier diagnosis means far more efficient treatment and a better lifestyle for people.
“It has been found that exercise and change of diet can make a phenomenal difference.”
She said her husband, a former doctor, was “determined” to find the right researcher to examine the link between odor and Parkinson’s and they sought out Tilo Kunath, PhD, at the University of Edinburgh in 2012.
Chemical change in sebum
Dr. Kunath paired up with Perdita Barran, PhD, to examine Mrs. Milne’s sense of smell.
The scientists believed that the scent may be caused by a chemical change in skin oil, known as sebum, that is triggered by the disease.
In their preliminary work, they asked Mrs. Milne to smell T-shirts worn by people with Parkinson’s and by people without it.
Mrs. Milne correctly identified the T-shirts worn by Parkinson’s patients, but she also said that one shirt from the group without Parkinson’s smelled like the disease; 8 months later, the individual who had worn it was diagnosed with Parkinson’s.
Researchers hoped the finding could lead to a test being developed to detect Parkinson’s, working under the assumption that if they were able to identify a unique chemical signature in the skin linked to Parkinson’s, they may eventually be able to diagnose the condition from simple skin swabs.
In 2019 researchers at the University of Manchester, led by Dr. Barran, announced that they had identified molecules linked to the disease found in skin swabs.
And now the scientists have developed a test using this information.
The tests have been successfully conducted in research labs and now scientists are assessing whether they can be used in hospital settings.
If successful, the test could potentially be used in the NHS so GPs can refer patients for Parkinson’s tests.
The findings, which have been published in the Journal of the American Chemical Society, detail how sebum can be analyzed with mass spectrometry – a method which weighs molecules – to identify the disease.
Some molecules are present only in people who have Parkinson’s disease.
Researchers compared swabs from 79 people with Parkinson’s with a healthy control group of 71 people.
Dr. Barran told the PA news agency: “At the moment, there are no cures for Parkinson’s, but a confirmatory diagnostic would allow them to get the right treatment and get the drugs that will help to alleviate their symptoms.
“There would also be nonpharmaceutical interventions, including movement and also nutritional classes, which can really help.
“And I think most critically, it will allow them to have a confirmed diagnosis to actually know what’s wrong with them.”
She added: “What we are now doing is seeing if [hospital laboratories] can do what we’ve done in a research lab in a hospital lab. Once that’s happened then we want to see if we can make this a confirmatory diagnostic that could be used along with the referral process from a GP to a consultant. At the moment in Greater Manchester there are about 18,000 people waiting for a neurological consult and just to clear that list, without any new people joining it, will take up to 2 years. Of those 10%-15% are suspect Parkinson’s. Our test would be able to tell them whether they did or whether they didn’t [have Parkinson’s] and allow them to be referred to the right specialist. So at the moment, we’re talking about being able to refer people in a timely manner to the right specialism and that will be transformative.”
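Taking Dr. Barran’s figures at face value, a rough arithmetic sketch of the number of suspected Parkinson’s cases implied on that waiting list:

```python
# Rough arithmetic on the Greater Manchester figures quoted above:
# ~18,000 people awaiting a neurological consult, 10%-15% of whom
# are suspected Parkinson's cases.
waiting = 18_000

low = waiting * 10 // 100   # 10% of the list
high = waiting * 15 // 100  # 15% of the list
print(low, high)  # -> 1800 2700
```

So a confirmatory swab test could, on these figures, triage on the order of 1,800-2,700 suspected cases directly to the right specialist.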
Mrs. Milne may be able to smell other diseases
Mrs. Milne is now working with scientists around the world to see if she can smell other diseases like cancer and tuberculosis.
“I have to go shopping very early or very late because of people’s perfumes, I can’t go into the chemical aisle in the supermarket,” she told the PA news agency. “So yes, a curse sometimes but I have also been out to Tanzania and have done research on TB, and research on cancer in the U.S. – just preliminary work. So it is a curse and a benefit.”
She said that she can sometimes smell people who have Parkinson’s while in the supermarket or walking down the street but has been told by medical ethicists she cannot tell them. “Which GP would accept a man or a woman walking in saying ‘the woman who smells Parkinson’s has told me I have it?’ Maybe in the future but not now.”
Mrs. Milne said that her husband, who died 7 years ago, was like a “changed man” after researchers found the link between Parkinson’s and odor.
A version of this article first appeared on Medscape UK.
The test has been years in the making after academics realized that Joy Milne could smell the condition.
The 72-year-old from Perth, Scotland, has a rare condition that gives her a heightened sense of smell.
She noticed that her late husband Les developed a different odor when he was 33 – some 12 years before he was diagnosed with the disease, which leads to parts of the brain become progressively damaged over many years.
Mrs. Milne, dubbed ‘the woman who can smell Parkinson’s, described a “musky” aroma, different from his normal scent.
Her observation piqued the interest of scientists who decided to research what she could smell, and whether this could be harnessed to help identify people with the neurological condition.
‘Early phases of research’
Years later, academics at the University of Manchester (England) have made a breakthrough by developing a test that can identify people with Parkinson’s disease using a simple cotton bud run along the back of the neck.
Researchers can examine the sample to identify molecules linked to the disease to help diagnose whether someone has the disease.
While still in the early phases of research, scientists are excited about the prospect of the NHS being able to deploy a simple test for the disease.
There is currently no definitive test for Parkinson’s disease, with diagnosis based on a patient’s symptoms and medical history.
If the new skin swab is successful outside laboratory conditions it could be rolled out to achieve faster diagnosis.
Mrs. Milne told the PA news agency that it was “not acceptable” that people with Parkinson’s had such high degrees of neurologic damage at the time of diagnosis, adding: “I think it has to be detected far earlier – the same as cancer and diabetes, earlier diagnosis means far more efficient treatment and a better lifestyle for people.
“It has been found that exercise and change of diet can make a phenomenal difference.”
She said her husband, a former doctor, was “determined” to find the right researcher to examine the link between odor and Parkinson’s and they sought out Tilo Kunath, PhD, at the University of Edinburgh in 2012.
Chemical change in sebum
Dr. Kunath paired up with Perdita Barran, PhD, to examine Mrs. Milne’s sense of smell.
The scientists believed that the scent may be caused by a chemical change in skin oil, known as sebum, that is triggered by the disease.
In their preliminary work they asked Mrs. Milne to smell t-shirts worn by people who have Parkinson’s and those who did not.
Mrs. Milne correctly identified the t-shirts worn by Parkinson’s patients but she also said that one from the group of people without Parkinson’s smelled like the disease – 8 months later the individual who wore the t-shirt was diagnosed with Parkinson’s.
Researchers hoped the finding could lead to a test being developed to detect Parkinson’s, working under the assumption that if they were able to identify a unique chemical signature in the skin linked to Parkinson’s, they may eventually be able to diagnose the condition from simple skin swabs.
In 2019 researchers at the University of Manchester, led by Dr. Barran, announced that they had identified molecules linked to the disease found in skin swabs.
And now the scientists have developed a test using this information.
The tests have been successfully conducted in research labs and now scientists are assessing whether they can be used in hospital settings.
If successful, the test could potentially be used in the NHS so GPs can refer patients for Parkinson’s tests.
The findings, which have been published in the Journal of the American Chemical Society, detail how sebum can be analyzed with mass spectrometry – a method which weighs molecules – to identify the disease.
Some molecules are present only in people who have Parkinson’s disease.
Researchers compared swabs from 79 people with Parkinson’s with a healthy control group of 71 people.
Dr. Barran told the PA news agency: “At the moment, there are no cures for Parkinson’s, but a confirmatory diagnostic would allow them to get the right treatment and get the drugs that will help to alleviate their symptoms.
“There would also be nonpharmaceutical interventions, including movement and also nutritional classes, which can really help.
“And I think most critically, it will allow them to have a confirmed diagnosis to actually know what’s wrong with them.”
She added: “What we are now doing is seeing if [hospital laboratories] can do what we’ve done in a research lab in a hospital lab. Once that’s happened then we want to see if we can make this a confirmatory diagnostic that could be used along with the referral process from a GP to a consultant. At the moment in Greater Manchester there are about 18,000 people waiting for a neurological consult and just to clear that list, without any new people joining it, will take up to 2 years. Of those 10%-15% are suspect Parkinson’s. Our test would be able to tell them whether they did or whether they didn’t [have Parkinson’s] and allow them to be referred to the right specialist. So at the moment, we’re talking about being able to refer people in a timely manner to the right specialism and that will be transformative.”
Mrs. Milne may be able to smell other diseases
Mrs. Milne is now working with scientists around the world to see if she can smell other diseases like cancer and tuberculosis.
“I have to go shopping very early or very late because of people’s perfumes, I can’t go into the chemical aisle in the supermarket,” she told the PA news agency. “So yes, a curse sometimes but I have also been out to Tanzania and have done research on TB, and research on cancer in the U.S. – just preliminary work. So it is a curse and a benefit.”
She said that she can sometimes smell people who have Parkinson’s while in the supermarket or walking down the street but has been told by medical ethicists she cannot tell them. “Which GP would accept a man or a woman walking in saying ‘the woman who smells Parkinson’s has told me I have it?’ Maybe in the future but not now.”
Mrs. Milne said that her husband, who died 7 years ago, was like a “changed man” after researchers found the link between Parkinson’s and odor.
A version of this article first appeared on Medscape UK.
The test has been years in the making after academics realized that Joy Milne could smell the condition.
The 72-year-old from Perth, Scotland, has a rare condition that gives her a heightened sense of smell.
She noticed that her late husband Les developed a different odor when he was 33 – some 12 years before he was diagnosed with the disease, which leads to parts of the brain become progressively damaged over many years.
Mrs. Milne, dubbed ‘the woman who can smell Parkinson’s, described a “musky” aroma, different from his normal scent.
Her observation piqued the interest of scientists who decided to research what she could smell, and whether this could be harnessed to help identify people with the neurological condition.
‘Early phases of research’
Years later, academics at the University of Manchester (England) have made a breakthrough by developing a test that can identify people with Parkinson’s disease using a simple cotton bud run along the back of the neck.
Researchers can examine the sample to identify molecules linked to the disease to help diagnose whether someone has the disease.
While still in the early phases of research, scientists are excited about the prospect of the NHS being able to deploy a simple test for the disease.
There is currently no definitive test for Parkinson’s disease, with diagnosis based on a patient’s symptoms and medical history.
If the new skin swab is successful outside laboratory conditions it could be rolled out to achieve faster diagnosis.
Mrs. Milne told the PA news agency that it was “not acceptable” that people with Parkinson’s had such high degrees of neurologic damage at the time of diagnosis, adding: “I think it has to be detected far earlier – the same as cancer and diabetes, earlier diagnosis means far more efficient treatment and a better lifestyle for people.
“It has been found that exercise and change of diet can make a phenomenal difference.”
She said her husband, a former doctor, was “determined” to find the right researcher to examine the link between odor and Parkinson’s and they sought out Tilo Kunath, PhD, at the University of Edinburgh in 2012.
Chemical change in sebum
Dr. Kunath paired up with Perdita Barran, PhD, to examine Mrs. Milne’s sense of smell.
The scientists believed that the scent may be caused by a chemical change in skin oil, known as sebum, that is triggered by the disease.
In their preliminary work, they asked Mrs. Milne to smell t-shirts worn by people who had Parkinson’s and by those who did not.
Mrs. Milne correctly identified the t-shirts worn by Parkinson’s patients, but she also said that one shirt from the group without Parkinson’s smelled like the disease; 8 months later, the individual who had worn it was diagnosed with Parkinson’s.
Researchers hoped the finding could lead to a test being developed to detect Parkinson’s, working under the assumption that if they were able to identify a unique chemical signature in the skin linked to Parkinson’s, they may eventually be able to diagnose the condition from simple skin swabs.
In 2019 researchers at the University of Manchester, led by Dr. Barran, announced that they had identified molecules linked to the disease found in skin swabs.
And now the scientists have developed a test using this information.
The tests have been successfully conducted in research labs and now scientists are assessing whether they can be used in hospital settings.
If successful, the test could potentially be used in the NHS so GPs can refer patients for Parkinson’s tests.
The findings, which have been published in the Journal of the American Chemical Society, detail how sebum can be analyzed with mass spectrometry – a method that weighs molecules – to identify the disease.
Some molecules are present only in people who have Parkinson’s disease.
Researchers compared swabs from 79 people with Parkinson’s with a healthy control group of 71 people.
Dr. Barran told the PA news agency: “At the moment, there are no cures for Parkinson’s, but a confirmatory diagnostic would allow them to get the right treatment and get the drugs that will help to alleviate their symptoms.
“There would also be nonpharmaceutical interventions, including movement and also nutritional classes, which can really help.
“And I think most critically, it will allow them to have a confirmed diagnosis to actually know what’s wrong with them.”
She added: “What we are now doing is seeing if [hospital laboratories] can do what we’ve done in a research lab in a hospital lab. Once that’s happened then we want to see if we can make this a confirmatory diagnostic that could be used along with the referral process from a GP to a consultant. At the moment in Greater Manchester there are about 18,000 people waiting for a neurological consult and just to clear that list, without any new people joining it, will take up to 2 years. Of those 10%-15% are suspect Parkinson’s. Our test would be able to tell them whether they did or whether they didn’t [have Parkinson’s] and allow them to be referred to the right specialist. So at the moment, we’re talking about being able to refer people in a timely manner to the right specialism and that will be transformative.”
Mrs. Milne may be able to smell other diseases
Mrs. Milne is now working with scientists around the world to see if she can smell other diseases like cancer and tuberculosis.
“I have to go shopping very early or very late because of people’s perfumes, I can’t go into the chemical aisle in the supermarket,” she told the PA news agency. “So yes, a curse sometimes but I have also been out to Tanzania and have done research on TB, and research on cancer in the U.S. – just preliminary work. So it is a curse and a benefit.”
She said that she can sometimes smell people who have Parkinson’s while in the supermarket or walking down the street but has been told by medical ethicists she cannot tell them. “Which GP would accept a man or a woman walking in saying ‘the woman who smells Parkinson’s has told me I have it?’ Maybe in the future but not now.”
Mrs. Milne said that her husband, who died 7 years ago, was like a “changed man” after researchers found the link between Parkinson’s and odor.
A version of this article first appeared on Medscape UK.
Warfarin associated with higher upper GI bleeding rates, compared with DOACs
Warfarin is associated with higher rates of upper gastrointestinal (GI) bleeding, but not of overall or lower GI bleeding, compared with direct oral anticoagulants (DOACs), according to a new nationwide report from Iceland.
In addition, warfarin is associated with higher rates of major GI bleeding, compared with apixaban.
“Although there has been a myriad of studies comparing GI bleeding rates between warfarin and DOACs, very few studies have compared upper and lower GI bleeding rates specifically,” Arnar Ingason, MD, PhD, a gastroenterology resident at the University of Iceland and Landspitali University Hospital, Reykjavik, said in an interview.
“Knowing whether the risk of upper and lower GI bleeding differs between warfarin and DOACs is important, as it can help guide oral anticoagulant selection,” he said.
“Given that warfarin was associated with higher rates of upper GI bleeding compared to DOACs in our study, warfarin may not be optimal for patients with high risk of upper GI bleeding, such as patients with previous history of upper GI bleeding,” Dr. Ingason added.
The study was published online in Clinical Gastroenterology and Hepatology.
Analyzing bleed rates
Dr. Ingason and colleagues analyzed data from electronic medical records for more than 7,000 patients in Iceland who began a prescription for oral anticoagulants between 2014 and 2019. They used inverse probability weighting to yield balanced study groups and calculate the rates of overall, major, upper, and lower GI bleeding. All events of gastrointestinal bleeding were manually confirmed by chart review.
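The inverse probability weighting the authors describe can be illustrated with a minimal sketch: each treated patient is weighted by the inverse of their estimated probability of receiving treatment, and each untreated patient by the inverse of the probability of not receiving it. The propensity scores below are hypothetical placeholders, not values from the study, and `ipw_weights` is an illustrative helper, not code from the paper.

```python
# Minimal sketch of inverse probability weighting (IPW), the balancing
# method used in the study. Propensity scores here are hypothetical.

def ipw_weights(treated, propensity):
    """Return 1/p for treated patients and 1/(1-p) for untreated ones."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Two hypothetical patients: one on warfarin with an estimated probability
# of 0.25 of receiving it given covariates, one untreated with p = 0.5.
weights = ipw_weights([True, False], [0.25, 0.5])
print(weights)  # [4.0, 2.0]
```

In practice the propensity scores would come from a model (e.g., logistic regression on the covariates), and the weights would then be used in the outcome analysis to emulate balanced groups.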
Clinically relevant GI bleeding was defined as bleeding that led to medical intervention, unscheduled physician contact, or temporary cessation of treatment. Upper GI bleeding was defined as hematemesis or a confirmed upper GI bleed site on endoscopy, whereas lower gastrointestinal bleeding was defined as hematochezia or a confirmed lower GI bleed site on endoscopy. Patients with melena and uncertain bleeding site on endoscopy were classified as having a gastrointestinal bleed of unknown location.
Major bleeding was defined as a drop in hemoglobin of at least 20 g/L, transfusion of two or more packs of red blood cells, or bleeding into a closed compartment such as the retroperitoneum.
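The major-bleeding definition above is a disjunction of three criteria, which can be sketched as a simple predicate. The parameter names are hypothetical, chosen for illustration only.

```python
# Sketch of the study's major-bleeding definition: any one of a hemoglobin
# drop of at least 20 g/L, transfusion of two or more packs of red blood
# cells, or bleeding into a closed compartment qualifies the event as major.

def is_major_bleed(hgb_drop_g_per_l, rbc_units_transfused, closed_compartment):
    return (hgb_drop_g_per_l >= 20
            or rbc_units_transfused >= 2
            or closed_compartment)

print(is_major_bleed(25, 0, False))  # True (hemoglobin drop >= 20 g/L)
print(is_major_bleed(10, 1, False))  # False (no criterion met)
```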
In total, 295 gastrointestinal bleed events were identified: 150 (51%) were classified as lower, 105 (36%) as upper, and 40 (14%) as being of unknown location. About 71% of events required hospitalization, and 63% met the criteria for major bleeding. Five patients died: three were taking warfarin, and the other two were taking apixaban and rivaroxaban.
Overall, warfarin was associated with double the rate of upper GI bleeding, with 1.7 events per 100 person-years, compared with 0.8 events per 100 person-years for DOACs. The rates of lower GI bleeding were similar for the drugs.
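Rates expressed as events per 100 person-years, like those above, are computed by dividing the event count by the total follow-up time. The event counts and follow-up totals below are hypothetical, chosen only to reproduce the reported 1.7 and 0.8 figures.

```python
# How "events per 100 person-years" is calculated. The inputs below are
# hypothetical and merely reproduce the rates reported in the study.

def rate_per_100_py(events, person_years):
    return 100.0 * events / person_years

warfarin_rate = rate_per_100_py(17, 1000)  # e.g., 17 bleeds over 1,000 PY
doac_rate = rate_per_100_py(24, 3000)      # e.g., 24 bleeds over 3,000 PY
print(warfarin_rate, doac_rate)  # 1.7 0.8
```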
Specifically, warfarin was associated with nearly 5.5 times higher rates of upper gastrointestinal bleeding, compared with dabigatran (Pradaxa, Boehringer Ingelheim), 2.6 times higher than apixaban (Eliquis, Bristol-Myers Squibb), and 1.7 times higher than rivaroxaban (Xarelto, Janssen). The risk for upper GI bleeding also was higher in men taking warfarin.
Warfarin was associated with higher rates of major bleeding, compared with apixaban, with 2.3 events per 100 person-years versus 1.5 events per 100 person-years. Otherwise, overall and major bleed rates were similar for users of warfarin and DOACs.
“GI bleeding among cardiac patients on anticoagulants and antiplatelets is the fastest growing group of GI bleeders,” Neena Abraham, MD, professor of medicine and a gastroenterologist at the Mayo Clinic in Scottsdale, Ariz., said in an interview.
Dr. Abraham, who wasn’t involved with this study, runs a dedicated cardiogastroenterology practice and has studied these patients’ bleeding risk for 20 years.
“This is a group that is ever increasing with aging baby boomers,” she said. “It is anticipated by 2040 that more than 40% of the U.S. adult population will have one or more cardiovascular conditions requiring the chronic prescription of anticoagulant or antiplatelet drugs.”
Considering future research
In this study, peptic ulcer disease was a proportionally less common cause of upper GI bleeding for warfarin at 18%, compared with DOACs at 39%. At the same time, the absolute propensity-weighted incidence rates of peptic ulcer–induced bleeding were similar, with 0.3 events per 100 person-years for both groups.
“As warfarin is not thought to induce peptic ulcer disease but rather promote bleeding from pre-existing lesions, one explanation may be that peptic ulcer disease almost always leads to overt bleeding in anticoagulated patients, while other lesions, such as mucosal erosions and angiodysplasias, may be more likely to lead to overt bleeding in warfarin patients due to a potentially more intense anticoagulation,” Dr. Ingason said.
Dr. Ingason and colleagues now plan to compare GI bleeding severity between warfarin and DOACs. Previous studies have suggested that GI bleeding may be more severe in patients receiving warfarin than in those receiving DOACs, he said.
In addition, large studies with manual verification of GI bleed events could better estimate the potential differences in the sources of upper and lower bleeding between warfarin and DOACs, Dr. Ingason noted.
“Some DOACs, specifically dabigatran, are known to have a mucosal effect on the luminal GI tract, as well as a systemic effect,” Dr. Abraham said. “This pharmacologic effect may contribute to an increase in lower gastrointestinal bleeding in the setting of colonic diverticulosis or mucosal injuries from inflammatory processes.”
Ongoing research should also look at different ways to reduce anticoagulant-related GI bleeding among cardiac patients, she noted.
“Our research group continues to study the risk of cardiac and bleeding adverse events in patients prescribed to DOACs compared to those patients who receive a left atrial appendage occlusion device,” Dr. Abraham said. “This device often permits patients at high risk of GI bleeding to transition off anticoagulant and antiplatelet drugs.”
The study was funded by the Icelandic Centre for Research and the Landspitali University Hospital Research Fund. The funders had no role in the design, conduct, or reporting of the study. The authors declared no competing interests. Dr. Abraham reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Reassuring data on NSAIDs in IBD flares
A new study has found no convincing evidence to suggest a causal relationship between nonsteroidal anti-inflammatory drug (NSAID) use and inflammatory bowel disease (IBD) exacerbations.
Rather, the study suggests that observed associations between IBD exacerbation and NSAID exposure may be explained by preexisting underlying risks for IBD flares, residual confounding, and reverse causality.
“We hope these study findings will aid providers in better directing IBD patients on their risk for IBD exacerbation with NSAID use,” write Shirley Cohen-Mekelburg, MD, with University of Michigan Medicine, Ann Arbor, and colleagues.
“This may guide therapy for both IBD and non-IBD related pain management, and the comfort of patients with IBD and the clinicians who treat them when considering NSAIDs as a non-opioid treatment option,” they add.
The study was published online in the American Journal of Gastroenterology.
Taking a second look
Patients with IBD (Crohn’s disease and ulcerative colitis) are prone to both inflammatory and noninflammatory pain, and there has been long-standing concern that NSAIDs may play a role in disease flare-ups.
To see whether a true association exists, Dr. Cohen-Mekelburg and colleagues conducted a series of studies that involved roughly 35,000 patients with IBD.
First, they created a propensity-matched cohort of 15,705 patients who had received NSAIDs and 19,326 who had not taken NSAIDs. Findings from a Cox proportional hazards model suggested a higher likelihood of IBD exacerbation in the group that had taken NSAIDs (hazard ratio, 1.24; 95% confidence interval, 1.16-1.33), after adjusting for age, gender, race, Charlson comorbidity index, smoking status, IBD type, and use of immunomodulator or biologic medications.
However, those who received NSAIDs were already at increased risk of experiencing a disease flare. And the prior event rate ratio for IBD exacerbation, as determined by dividing the adjusted HR after NSAID exposure by the adjusted HR for pre-NSAID exposure, was 0.95 (95% CI, 0.89-1.01).
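The prior event rate ratio (PERR) calculation described above divides the adjusted post-exposure hazard ratio by the adjusted pre-exposure hazard ratio. In the sketch below, the post-exposure HR of 1.24 is the study's reported value, but the pre-exposure HR of 1.31 is back-calculated from the reported PERR of 0.95 and is an assumption, not a figure quoted in the paper.

```python
# Sketch of the prior event rate ratio (PERR): post-exposure adjusted HR
# divided by pre-exposure adjusted HR. The pre-exposure HR of 1.31 is
# inferred from the reported PERR, not taken from the study directly.

def perr(hr_post, hr_pre):
    return hr_post / hr_pre

print(round(perr(1.24, 1.31), 2))  # 0.95
```

A PERR near 1.0, as here, suggests the apparent post-exposure excess risk was already present before NSAID exposure, consistent with confounding rather than a drug effect.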
The researchers used a self-controlled case series to verify their findings and to adjust for other immeasurable patient-level confounders. In this analysis, which involved 3,968 patients, the risk of IBD flare did not increase in the period from 2 weeks to 6 months after exposure to an NSAID.
The incidence of IBD exacerbations was higher in the 0- to 2-week transition period after an NSAID was prescribed, but it dropped after the 2-week “risk” window. This suggests that these short-term flares may be secondary to residual confounding related to reverse causality, rather than the NSAIDs themselves, the researchers say.
While NSAIDs represent the most common first-line analgesic, their use for patients with IBD is variable, in part due to the suspected risk of IBD exacerbation “despite inconclusive evidence of harm to date,” Dr. Cohen-Mekelburg and colleagues note.
They also note that about 36% of patients with IBD in their cohort received at least one NSAID prescription, and three-quarters of these patients did not experience an IBD exacerbation during an average of 5.9 years of follow-up.
Good study, reassuring data
“This is a good study trying to understand the potential sources of bias in associations,” Ashwin Ananthakrishnan, MD, MPH, with Massachusetts General Hospital and Harvard Medical School in Boston, who wasn’t involved in the study, told this news organization.
Overall, he said the study “provides reassurance that cautious, short-duration or low-dose use is likely well tolerated in most patients with IBD. But more work is needed to understand the impact of higher dose or more frequent use.”
Also weighing in, Adam Steinlauf, MD, with Mount Sinai Health System and Icahn School of Medicine at Mount Sinai in New York, noted that patients with IBD experience pain throughout the course of their disease, both intestinal and extraintestinal.
“Treating the underlying IBD is important, but medications used to treat joint pain and inflammation specifically are few,” said Dr. Steinlauf, who wasn’t involved in the new study.
“Sulfasalazine has been used with success, but it does not work in everyone, and many are allergic to the sulfa component, limiting its use. Narcotics are often reluctantly used for these issues as well. Medical marijuana has emerged on the scene to control both types of pain, but to date, there is inconclusive evidence that it significantly treats both pain and underlying inflammation,” Dr. Steinlauf pointed out.
NSAIDs, on the other hand, are “excellent” choices for treating joint pain and inflammation, but gastroenterologists often try to avoid these medications, given the fear of triggering flares of underlying IBD, Dr. Steinlauf told this news organization.
In his view, this new study is “important in that it quite elegantly challenges the notion that gastroenterologists should avoid NSAIDs in patients with IBD.”
Although more data are clearly needed, Dr. Steinlauf said this study “should give practitioners a bit more confidence in prescribing NSAIDs for their patients with IBD if absolutely necessary to control pain and inflammation and improve quality of life when other standard treatments fail.
“The association of NSAIDs with subsequent flares, of which we are all so well aware of and afraid of, may in fact be related more to our patients’ underlying risks for IBD and reverse causality rather than the NSAIDs themselves. Future studies should further clarify this notion,” Dr. Steinlauf said.
The study received no commercial funding. Dr. Cohen-Mekelburg, Dr. Ananthakrishnan, and Dr. Steinlauf have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
In his view, this new study is “important in that it quite elegantly challenges the notion that gastroenterologists should avoid NSAIDs in patients with IBD.”
Although more data are clearly needed, Dr. Steinlauf said this study “should give practitioners a bit more confidence in prescribing NSAIDs for their patients with IBD if absolutely necessary to control pain and inflammation and improve quality of life when other standard treatments fail.
“The association of NSAIDs with subsequent flares, of which we are all so well aware of and afraid of, may in fact be related more to our patients’ underlying risks for IBD and reverse causality rather than the NSAIDs themselves. Future studies should further clarify this notion,” Dr. Steinlauf said.
The study received no commercial funding. Dr. Cohen-Mekelburg, Dr. Ananthakrishnan, and Dr. Steinlauf have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
A new study has found no convincing evidence to suggest a causal relationship between nonsteroidal anti-inflammatory drug (NSAID) use and inflammatory bowel disease (IBD) exacerbations.
Rather, the study suggests that observed associations between IBD exacerbation and NSAID exposure may be explained by preexisting underlying risks for IBD flares, residual confounding, and reverse causality.
“We hope these study findings will aid providers in better directing IBD patients on their risk for IBD exacerbation with NSAID use,” write Shirley Cohen-Mekelburg, MD, with University of Michigan Medicine, Ann Arbor, and colleagues.
“This may guide therapy for both IBD and non-IBD related pain management, and the comfort of patients with IBD and the clinicians who treat them when considering NSAIDs as a non-opioid treatment option,” they add.
The study was published online in the American Journal of Gastroenterology.
Taking a second look
Patients with IBD (Crohn’s disease and ulcerative colitis) are prone to both inflammatory and noninflammatory pain, and there has been long-standing concern that NSAIDs may play a role in disease flare-ups.
To see whether a true association exists, Dr. Cohen-Mekelburg and colleagues conducted a series of studies that involved roughly 35,000 patients with IBD.
First, they created a propensity-matched cohort of 15,705 patients who had received NSAIDs and 19,326 who had not taken NSAIDs. Findings from a Cox proportional hazards model suggested a higher likelihood of IBD exacerbation in the group that had taken NSAIDs (hazard ratio, 1.24; 95% confidence interval, 1.16-1.33), after adjusting for age, gender, race, Charlson comorbidity index, smoking status, IBD type, and use of immunomodulator or biologic medications.
However, those who received NSAIDs were already at increased risk of experiencing a disease flare. And the prior event rate ratio for IBD exacerbation, as determined by dividing the adjusted HR after NSAID exposure by the adjusted HR for pre-NSAID exposure, was 0.95 (95% CI, 0.89-1.01).
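The prior event rate ratio (PERR) arithmetic described above can be sketched in a few lines. The post-exposure hazard ratio of 1.24 and the PERR of 0.95 are the study's reported figures; the pre-exposure HR shown here is back-calculated from them and is therefore illustrative.

```python
# Prior event rate ratio (PERR): the adjusted hazard ratio after exposure
# divided by the adjusted hazard ratio before exposure. A PERR near 1.0
# suggests the apparent post-exposure risk mostly reflects baseline
# differences between groups rather than an effect of the exposure.

def prior_event_rate_ratio(hr_post: float, hr_pre: float) -> float:
    """Divide the adjusted HR after exposure by the adjusted HR before it."""
    return hr_post / hr_pre

hr_post = 1.24  # reported adjusted HR for IBD exacerbation after NSAID exposure
perr = 0.95     # reported PERR

# Back-calculate the implied pre-exposure HR (not reported directly here):
hr_pre_implied = hr_post / perr
print(f"Implied pre-exposure HR: {hr_pre_implied:.2f}")  # ~1.31

# Sanity check: dividing out the baseline risk recovers the reported PERR.
print(f"PERR: {prior_event_rate_ratio(hr_post, hr_pre_implied):.2f}")  # 0.95
```

Because the implied pre-exposure HR (about 1.31) is as large as the post-exposure HR, the division leaves essentially no excess risk attributable to the NSAID itself.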
The researchers used a self-controlled case series to verify their findings and to adjust for other immeasurable patient-level confounders. In this analysis, which involved 3,968 patients, the risk of IBD flare did not increase in the period from 2 weeks to 6 months after exposure to an NSAID.
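The intuition behind a self-controlled case series is that each patient serves as their own control, so anything constant about the patient (genetics, baseline disease severity) cancels out. The toy calculation below illustrates that idea with a simple within-person incidence rate ratio; real SCCS analyses use conditional Poisson regression, and all counts here are hypothetical.

```python
# Simplified illustration of the self-controlled case series idea:
# compare a patient's event rate during exposed vs. unexposed person-time.
# Time-invariant confounders cancel because the comparison is within-person.
# All numbers below are hypothetical, for illustration only.

def incidence_rate_ratio(events_exposed: int, time_exposed: float,
                         events_unexposed: int, time_unexposed: float) -> float:
    """Within-person rate during exposure divided by rate off exposure."""
    rate_exposed = events_exposed / time_exposed
    rate_unexposed = events_unexposed / time_unexposed
    return rate_exposed / rate_unexposed

# Hypothetical patient: 3 flares over 1.5 exposed person-years,
# 8 flares over 4.0 unexposed person-years.
irr = incidence_rate_ratio(3, 1.5, 8, 4.0)
print(f"IRR: {irr:.2f}")  # 1.00 -> no within-person increase during exposure
```

An IRR near 1.0, as in this hypothetical patient, mirrors the study's finding that flare risk did not rise in the 2-week-to-6-month window after NSAID exposure.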
The incidence of IBD exacerbations was higher in the 0- to 2-week transition period after an NSAID was prescribed, but it dropped after the 2-week “risk” window. This suggests that these short-term flares may be secondary to residual confounding related to reverse causality, rather than the NSAIDs themselves, the researchers say.
While NSAIDs represent the most common first-line analgesic, their use for patients with IBD is variable, in part due to the suspected risk of IBD exacerbation “despite inconclusive evidence of harm to date,” Dr. Cohen-Mekelburg and colleagues note.
They also note that about 36% of patients with IBD in their cohort received at least one NSAID prescription, and three-quarters of these patients did not experience an IBD exacerbation during an average of 5.9 years of follow-up.
Good study, reassuring data
“This is a good study trying to understand the potential sources of bias in associations,” Ashwin Ananthakrishnan, MD, MPH, with Massachusetts General Hospital and Harvard Medical School in Boston, who wasn’t involved in the study, told this news organization.
Overall, he said the study “provides reassurance that cautious, short-duration or low-dose use is likely well tolerated in most patients with IBD. But more work is needed to understand the impact of higher dose or more frequent use.”
Also weighing in, Adam Steinlauf, MD, with Mount Sinai Health System and Icahn School of Medicine at Mount Sinai in New York, noted that patients with IBD experience pain throughout the course of their disease, both intestinal and extraintestinal.
“Treating the underlying IBD is important, but medications used to treat joint pain and inflammation specifically are few,” said Dr. Steinlauf, who wasn’t involved in the new study.
“Sulfasalazine has been used with success, but it does not work in everyone, and many are allergic to the sulfa component, limiting its use. Narcotics are often reluctantly used for these issues as well. Medical marijuana has emerged on the scene to control both types of pain, but to date, there is inconclusive evidence that it significantly treats both pain and underlying inflammation,” Dr. Steinlauf pointed out.
NSAIDs, on the other hand, are “excellent” choices for treating joint pain and inflammation, but gastroenterologists often try to avoid these medications, given the fear of triggering flares of underlying IBD, Dr. Steinlauf told this news organization.
In his view, this new study is “important in that it quite elegantly challenges the notion that gastroenterologists should avoid NSAIDs in patients with IBD.”
Although more data are clearly needed, Dr. Steinlauf said this study “should give practitioners a bit more confidence in prescribing NSAIDs for their patients with IBD if absolutely necessary to control pain and inflammation and improve quality of life when other standard treatments fail.
“The association of NSAIDs with subsequent flares, of which we are all so well aware of and afraid of, may in fact be related more to our patients’ underlying risks for IBD and reverse causality rather than the NSAIDs themselves. Future studies should further clarify this notion,” Dr. Steinlauf said.
The study received no commercial funding. Dr. Cohen-Mekelburg, Dr. Ananthakrishnan, and Dr. Steinlauf have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AMERICAN JOURNAL OF GASTROENTEROLOGY
10-year delay in one-third of EoE diagnoses has persisted for decades
It takes at least 10 years for one-third of patients with eosinophilic esophagitis (EoE) to receive a diagnosis and a median of 4 years for patients overall to get their diagnosis – numbers that haven’t budged in 3 decades, according to a study published online in the American Journal of Gastroenterology.
This delay has persisted despite more than 2,000 publications on the condition since 2014 and a variety of educational events about it, reported Fritz R. Murray, MD, of the department of gastroenterology and hepatology at University Hospital Zurich and his colleagues.
“Bearing in mind that eosinophilic esophagitis is a chronic and progressive disease that, if left untreated, leads to esophageal stricturing ultimately causing food impaction, the results of our analysis are a cause for concern,” the authors wrote.
“Substantial efforts are warranted to increase awareness for eosinophilic esophagitis and its hallmark symptom, solid-food dysphagia, as an age-independent red-flag symptom … in order to lower risk of long-term complications,” they added.
The researchers retrospectively analyzed prospectively collected data from 1,152 patients in a Swiss database. The patients (74% male; median age, 38 years) had all been diagnosed with EoE according to established criteria. The authors calculated the diagnostic delay from 1989 to 2021 and at three key time points: 1993, the first description of the condition; 2007, the first consensus recommendations; and 2011, the updated consensus recommendations.
The median diagnostic delay over the 3 decades studied was 4 years overall and was at least 10 years in nearly one-third (32%) of the population. Diagnostic delay did not significantly change throughout the study period, year by year, or at or after any of the milestones included in the analysis, retaining the minimum 10-year delay in about one-third of all patients.
The median age at symptom onset was 30 years, with 51% of patients first experiencing symptoms between 10 and 30 years of age.
“Age at diagnosis showed a normal distribution with its peak between 30 and 40 years with 25% of the study population being diagnosed with EoE during that period,” the authors reported.
Although diagnostic delay did not differ between sexes, the length of time before diagnosis did vary on the basis of the patient’s age at diagnosis, increasing from a median of 0 years for those aged 10 years or younger to 5 years for those aged 31-40 years.
“When examining variation in diagnostic delay based on age at symptom onset, we observed an inverse association of age at symptom onset and diagnostic delay, with longest diagnostic delay observed in children,” they wrote.
Diagnostic delay was longer in those who needed an endoscopic disimpaction – a median of 6 years – before being diagnosed, compared with those who did not require this procedure, who had a median delay of 3 years. Nearly one-third (31%) of participants had at least one food impaction requiring endoscopic removal before receiving their diagnosis.
Three in four participants (74%) had a confirmed atopic condition besides eosinophilic esophagitis, with 13% not having an atopic comorbidity and another 13% lacking information on whether they did or didn’t. Those with atopic conditions were younger (median age, 29 years) when symptoms began than were those without atopic conditions (median age, 34 years).
Similarly, those with atopic conditions were younger (median age, 38 years) than those without these conditions (median age, 41 years) at the time of diagnosis. Diagnostic delay was a median 2 years shorter – 3 years vs. 5 years – for patients with concomitant atopic conditions.
“Importantly, the length of diagnosis delay (untreated disease) directly correlates with the occurrence of esophageal strictures,” the authors wrote, citing previous research finding that the prevalence of strictures rose from 17% in patients with a delay of up to 2 years to 71% in patients with a delay of more than 20 years.
“Esophageal strictures were present in around 38% of patients with a delay between 8-11 years” in that study, “a delay that is prevalent in about one third of our study population,” the authors wrote. “However, even a median delay of 4 years resulted in strictures in around 31% of untreated patients.”
Other research has found that the risk for esophageal strictures increases an estimated 9% each year that eosinophilic esophagitis goes untreated.
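As a rough illustration of how that estimate accumulates, the sketch below compounds a 9% per-year relative increase over several untreated durations. The 9% figure is the cited annual estimate; treating it as multiplicative year over year is a simplifying assumption made here for illustration, not a model from the study.

```python
# Illustrative compounding of an estimated 9% per-year increase in
# esophageal stricture risk while EoE goes untreated. The multiplicative
# year-over-year model is an assumption for illustration only.

ANNUAL_INCREASE = 0.09

def relative_risk_multiplier(years_untreated: int) -> float:
    """Relative increase in stricture risk after N untreated years."""
    return (1 + ANNUAL_INCREASE) ** years_untreated

for years in (2, 4, 10):
    print(f"{years:>2} years untreated: risk x{relative_risk_multiplier(years):.2f}")
# Under this assumption, risk roughly doubles and more after a decade (x2.37).
```

Even under this simplified model, the median 4-year delay reported in the study corresponds to a meaningful (roughly 41%) relative increase, consistent with the stricture prevalence figures cited above.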
The authors suggested that patients’ denying symptoms or attempting to address their symptoms with changes in diet or eating behavior may be one reason for the long diagnostic delay, given other findings showing that patient-dependent delay was 18 months, compared with 6 months for physician-dependent delay. Although the authors didn’t have the information in their dataset to assess patient- vs. physician-dependent delay, a subgroup analysis revealed that patients and nongastroenterologist doctors combined accounted for the largest proportion of diagnostic delay.
“This fact indicates that future efforts should target the general population, and potentially primary physicians, to strengthen the awareness for eosinophilic esophagitis as a potential underlying condition in patients with dysphagia,” the authors wrote.
“A change in eating behavior, especially in cases with prolonged chewing, slow swallowing or even the necessity of drinking fluids after swallowing of solid food, should raise suspicion also in the general population,” they added.
Dr. Murray received travel support from Janssen, and 9 of the other 11 authors reported consulting, speaking, research, and/or travel fees from 23 pharmaceutical and related companies.
A version of this article first appeared on Medscape.com.
FROM AMERICAN JOURNAL OF GASTROENTEROLOGY