Placing New Therapies for Myasthenia Gravis in the Treatment Paradigm
Nicholas J. Silvestri, MD: Hi there. My name is Dr Nick Silvestri, and I'm at the University at Buffalo. Today, I'd like to answer a few questions that I commonly receive from colleagues about the treatment of myasthenia gravis. As you know, over the past several years, we've had many new treatments approved to treat myasthenia gravis. One of the common questions that I get is, how do these new treatments fit into my treatment paradigm?
First and foremost, I'd like to say that we've been very successful at treating myasthenia gravis for many years. The mainstay of therapy has typically been acetylcholinesterase inhibitors, corticosteroids, and nonsteroidal immunosuppressants. These medicines by and large have helped control the disease in many, but maybe not all, patients.
The good news about these treatments is that they're very efficacious, and as I said, they are able to treat most patients with myasthenia gravis. The bad news is that they can have some serious short- and long-term consequences. So as I think about the treatment paradigm right now in 2024, I typically start with prednisone or another corticosteroid and transition patients onto an oral immunosuppressant.
But because it takes about a year for those oral immunosuppressants to become effective, I'm typically using steroids as a bridge. The goal, really, is to have patients on an oral immunosuppressant alone at the 1-year mark or thereabouts so that we don't have patients on steroids.
When it comes to the new therapies, one of the things I'm doing is using them if a patient does not respond to an oral immunosuppressant, or in situations where patients have medical comorbidities that make me reluctant to use steroids, or to use them at high doses.
Specifically, FcRn antagonists are often used as next-line therapy after an oral immunosuppressant fails, or if I don't feel comfortable using prednisone at the outset and bridging the patient to an oral immunosuppressant. The rationale is that these medications have been shown to be effective in clinical trials, they work fairly quickly (usually within 2-4 weeks), they're convenient for patients, and they have a pretty good safety profile.
The major side effect of the FcRn antagonists tends to be an increased risk for infection, which is true for most medications used to treat myasthenia gravis. One agent in the class is associated with headache, and they can be associated with joint pains and infusion reactions as well. But by and large, they are well tolerated. So again, if a patient is not responding to an oral immunosuppressant, is experiencing toxicity or side effects, or I'm leery of using prednisone, I'll typically use an FcRn antagonist.
The other main class of medications is complement inhibitors. There are three complement inhibitors approved for use in the United States. Complement inhibitors are also very effective medications. I've used them with success in a number of patients, and I think that the paradigm is shifting.
I've used complement inhibitors, as with the FcRn antagonists, in patients who aren't responding to the first line of therapy or if they have toxicity. I've also used complement inhibitors in instances where patients have not responded very robustly to FcRn antagonists, which thankfully is the minority of patients, but it's worth noting.
I view the treatment paradigm for 2024 as oral immunosuppressant first, then FcRn antagonist next, and then complement inhibitor next. But to be truthful, we don't have head-to-head comparisons right now. What works for one patient may not work for another. In myasthenia gravis, it would be great to have biomarkers that allow us to predict who would respond to what form of therapy better.
In other words, it would be great to be able to send off a test to know whether a patient would respond to an oral immunosuppressant better than perhaps to one of the newer therapies, or whether a patient would respond to an FcRn antagonist better than a complement inhibitor or vice versa. That's really one of the gold standards or holy grails in the treatment of myasthenia gravis.
Another thing that comes up in relation to the first question is, what patient characteristics do I keep in mind when selecting therapies? There are a couple of things. First and foremost, many of our patients with myasthenia gravis are women of childbearing age. So we want to be mindful that many pregnancies are not planned, and be careful when choosing therapies that might be deleterious to a fetus.
This is particularly true with oral immunosuppressants, many of which are contraindicated in pregnancy. But medical comorbidities in general are helpful to understand. Again, using the corticosteroid example, in patients with high blood pressure, diabetes, or osteoporosis, I'm very leery about corticosteroids and may use one of the newer therapies earlier on.
Another aspect is patient preference. We have oral therapies, we have intravenous therapies, we now have subcutaneous therapies. Route of administration is very important to consider as well, not only for patient comfort — some patients may prefer intravenous routes of administration vs subcutaneous — but also for patient convenience.
Many of our patients with myasthenia gravis have very busy lives, with full-time jobs and other responsibilities, such as parenting or caring for aging parents. So I think that tolerability and convenience are very important to getting patients the therapies they need and allowing them the flexibility to live their lives as well.
I hope this was helpful to you. I look forward to speaking with you again at some point in the very near future. Stay well.
Severe Flu Confers Higher Risk for Neurologic Disorders Versus COVID
TOPLINE:
Hospitalization for severe influenza is linked to a higher risk for subsequent neurologic disorders than hospitalization for COVID-19, results of a large study show.
METHODOLOGY:
- Researchers used healthcare claims data to compare 77,300 people hospitalized with COVID-19 with 77,300 hospitalized with influenza. The study did not include individuals with long COVID.
- In the final sample of 154,500 participants, the mean age was 51 years, and more than half (58%) were female.
- Investigators followed participants from both cohorts for a year to find out how many of them had medical care for six of the most common neurologic disorders: migraine, epilepsy, stroke, neuropathy, movement disorders, and dementia.
- For participants who had one of these neurologic disorders before the index hospitalization, the primary outcome was subsequent healthcare encounters for that diagnosis.
TAKEAWAY:
- Participants hospitalized with COVID-19 versus influenza were significantly less likely to require care in the following year for migraine (2% vs 3.2%), epilepsy (1.6% vs 2.1%), neuropathy (1.9% vs 3.6%), movement disorders (1.5% vs 2.5%), stroke (2% vs 2.4%), and dementia (2% vs 2.3%) (all P < .001).
- After adjusting for age, sex, and other health conditions, researchers found that people hospitalized with COVID-19 had a 35% lower risk of receiving care for migraine, a 22% lower risk of receiving care for epilepsy, and a 44% lower risk of receiving care for neuropathy than those with influenza. They also had a 36% lower risk of receiving care for movement disorders, a 10% lower risk for stroke (all P < .001), as well as a 7% lower risk for dementia (P = .0007).
- In participants who did not have a preexisting neurologic condition at the time of hospitalization for either COVID-19 or influenza, 2.8% hospitalized with COVID-19 developed one in the next year compared with 5% of those hospitalized with influenza.
IN PRACTICE:
“While the results were not what we expected to find, they are reassuring in that we found being hospitalized with COVID did not lead to more care for common neurologic conditions when compared to being hospitalized with influenza,” study investigator Brian C. Callaghan, MD, of University of Michigan, Ann Arbor, said in a press release.
SOURCE:
Adam de Havenon, MD, of Yale University in New Haven, Connecticut, led the study, which was published online on March 20 in Neurology.
LIMITATIONS:
The study relied on ICD codes in health claims databases, which could introduce misclassification bias. Also, by selecting only individuals who had associated hospital-based care, there may have been a selection bias based on disease severity.
DISCLOSURES:
The study was funded by the American Academy of Neurology. Dr. de Havenon reported receiving consultant fees from Integra and Novo Nordisk and royalty fees from UpToDate and has equity in Titin KM and Certus. Dr. Callaghan has consulted for DynaMed and the Vaccine Injury Compensation Program. Other disclosures were noted in the original article.
A version of this article appeared on Medscape.com.
Glucose Level Fluctuations Affect Cognition in T1D
TOPLINE:
Naturally occurring glucose fluctuations affect cognitive function in people with type 1 diabetes, according to a new study. It matters less whether glucose is considerably higher or lower than the patient’s usual glucose level. Rather, it is the size of the swing from that usual level that appears to matter, with larger fluctuations linked to slower and less accurate processing speed.
METHODOLOGY:
- The investigators used continuous glucose monitoring (CGM) digital sensors and smartphone-based cognitive tests (cognitive ecological momentary assessment [EMA]) to collect repeated, high-frequency glucose and cognitive data. Glucose data were collected every 5 minutes; cognitive data were collected three times daily for 15 days as participants went about their daily lives (see the note after this list for what that sampling adds up to).
- The study included 200 participants (mean [standard deviation] age, 47.5 [15.6] years; 53.5% female; 86% White; mean A1c, 7.5% [1.3%]).
- Using CGM and EMA, the researchers obtained “intensive” longitudinal measurements of glucose as well as cognition (processing speed and sustained attention).
- Hierarchical Bayesian modeling estimated dynamic, within-person associations between glucose and cognition, and data-driven lasso regression identified clinical characteristics that predicted person-to-person differences in cognitive vulnerability to glucose fluctuations.
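As a rough sense of scale (our arithmetic, assuming complete adherence and continuous sensor wear, not figures reported by the study): sampling every 5 minutes yields 288 glucose readings per participant per day, or about 4320 over the 15-day window, alongside 45 cognitive testing sessions (3 per day for 15 days). Across 200 participants, that is on the order of 864,000 glucose readings and 9000 cognitive assessments.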
TAKEAWAY:
- Cognitive performance was reduced both at low and high glucose levels, “reflecting vulnerability to glucose fluctuations.”
- Large glucose fluctuations were associated with slower as well as less accurate processing speed, although slight glucose elevations (relative to the individual’s own means) were associated with faster processing speed, regardless of the absolute level (eg, euglycemic vs hyperglycemic) of those means.
- By contrast, glucose fluctuations were unrelated to sustained attention.
- The researchers identified seven clinical characteristics that predicted individual differences in cognitive vulnerability to glucose fluctuations: older age, time in hypoglycemia, lifetime severe hypoglycemic events, microvascular complications, glucose variability, fatigue, and larger neck circumference.
IN PRACTICE:
“Our results demonstrate that people can differ a lot from one another in how their brains are impacted by glucose,” co-senior author Laura Germine, PhD, director of the Laboratory for Brain and Cognitive Health Technology, McLean Hospital, Boston, said in a news release. “We found that minimizing glucose fluctuations in daily life is important for optimizing processing speed, and this is especially true for people who are older or have other diabetes-related health conditions.”
SOURCE:
Zoë Hawks, PhD, research investigator, McLean Hospital, Boston, was the lead and corresponding author of the study. It was published online on March 18 in npj Digital Medicine.
LIMITATIONS:
The study required participants to have 24-hour access to a smartphone with reliable Internet access, which might have biased the sample toward people of higher socioeconomic status. Moreover, the sample was predominantly White and non-Hispanic, so findings may not be generalizable to other populations.
DISCLOSURES:
The research was supported by grants from the National Institutes of Health, the Brain and Behavior Research Foundation, and the Alzheimer’s Association. Dr. Hawks received consulting fees from Blueprint Health. The other authors’ disclosures were listed in the original paper.
A version of this article appeared on Medscape.com.
Sleep Apnea Is Hard on the Brain
Adults who report symptoms of sleep apnea are significantly more likely to also report memory and cognitive problems, results from a large study showed.
Data from a representative sample of US adults show that those who reported sleep apnea symptoms were about 50% more likely to also report cognitive issues versus their counterparts without such symptoms.
“For clinicians, these findings suggest a potential benefit of considering sleep apnea as a possible contributing or exacerbating factor in individuals experiencing memory or cognitive problems. This could prompt further evaluation for sleep apnea, particularly in at-risk individuals,” said study investigator Dominique Low, MD, MPH, Department of Neurology, Boston Medical Center.
The findings were released ahead of the study’s scheduled presentation at the annual meeting of the American Academy of Neurology.
Need to Raise Awareness
The findings are based on 4257 adults who participated in the 2017-2018 National Health and Nutrition Examination Survey and completed questionnaires covering sleep, memory, cognition, and decision-making abilities.
Those who reported snorting, gasping, or breathing pauses during sleep were categorized as experiencing sleep apnea symptoms. Those who reported memory trouble, periods of confusion, difficulty concentrating, or decision-making problems were classified as having memory or cognitive symptoms.
Overall, 1079 participants reported symptoms of sleep apnea. Compared with people without sleep apnea, those with symptoms were more likely to have cognitive problems (33% vs 20%) and have greater odds of having memory or cognitive symptoms, even after adjusting for age, gender, race, and education (adjusted odds ratio, 2.02; P < .001).
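A back-of-the-envelope check on those figures (our illustration, not a calculation reported by the investigators): with unadjusted prevalences of 33% and 20%, participants with sleep apnea symptoms were 0.33/0.20 ≈ 1.65 times as likely to report cognitive problems, and the corresponding unadjusted odds ratio is (0.33/0.67)/(0.20/0.80) ≈ 1.97, consistent with the adjusted odds ratio of 2.02. Odds ratios run higher than risk ratios when the outcome is common, which is why "about 50% more likely" and an odds ratio of roughly 2 describe the same data.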
“While the study did not establish a cause-and-effect relationship, the findings suggest the importance of raising awareness about the potential link between sleep and cognitive function. Early identification and treatment may improve overall health and potentially lead to a better quality of life,” Dr. Low said.
Limitations include the reliance on self-reported sleep apnea symptoms and cognitive issues drawn from a single survey.
Consistent Data
Reached for comment, Matthew Pase, PhD, with the Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia, said the results are similar to earlier work that found a link between obstructive sleep apnea and cognition.
For example, in a recent study, the presence of mild to severe OSA, identified using overnight polysomnography in five community-based cohorts with more than 5900 adults, was associated with poorer cognitive test performance, Dr. Pase said.
“These and other results underscore the importance of healthy sleep for optimal brain health. Future research is needed to test if treating OSA and other sleep disorders can reduce the risk of cognitive impairment,” Dr. Pase said.
Yet, in its latest statement on the topic, the US Preventive Services Task Force concluded there remains insufficient evidence to weigh the balance of benefits and harms of screening for OSA among asymptomatic adults and those with unrecognized symptoms.
The study had no specific funding. Dr. Low and Dr. Pase had no relevant disclosures.
A version of this article appeared on Medscape.com.
FROM AAN 2024
Few Childhood Cancer Survivors Get Recommended Screenings
Among childhood cancer survivors in Ontario, Canada, who faced an elevated risk due to chemotherapy or radiation treatments, 53% followed screening recommendations for cardiomyopathy, 13% met colorectal cancer screening guidelines, and 6% adhered to breast cancer screening guidelines.
“Although over 80% of children newly diagnosed with cancer will become long-term survivors, as many as four out of five of these survivors will develop a serious or life-threatening late effect of their cancer therapy by age 45,” lead author Jennifer Shuldiner, PhD, MPH, a scientist at Women’s College Hospital Institute for Health Systems Solutions and Virtual Care in Toronto, told this news organization.
For instance, the risk for colorectal cancer in childhood cancer survivors is two to three times higher than it is among the general population, and the risk for breast cancer is similar between those who underwent chest radiation and those with a BRCA mutation. As many as 50% of those who received anthracycline chemotherapy or radiation involving the heart later develop cardiotoxicity.
The North American Children’s Oncology Group has published long-term follow-up guidelines for survivors of childhood cancer, yet many survivors don’t follow them because of lack of awareness or other barriers, said Dr. Shuldiner.
“Prior research has shown that many survivors do not complete these recommended tests,” she said. “With better knowledge of this at-risk population, we can design, test, and implement appropriate interventions and supports to tackle the issues.”
The study was published online on March 11 in CMAJ.
Changes in Adherence
The researchers conducted a retrospective population-based cohort study analyzing Ontario healthcare administrative data for adult survivors of childhood cancer diagnosed between 1986 and 2014 who faced an elevated risk for therapy-related colorectal cancer, breast cancer, or cardiomyopathy. The research team then assessed long-term adherence to the North American Children’s Oncology Group guidelines and predictors of adherence.
Among 3241 survivors, 3205 (99%) were at elevated risk for cardiomyopathy, 327 (10%) were at elevated risk for colorectal cancer, and 234 (7%) were at elevated risk for breast cancer. In addition, 2806 (87%) were at risk for one late effect, 345 (11%) were at risk for two late effects, and 90 (3%) were at risk for three late effects.
Overall, 53%, 13%, and 6% were adherent to their recommended surveillance for cardiomyopathy, colorectal cancer, and breast cancer, respectively. Over time, adherence increased for colorectal cancer and cardiomyopathy but decreased for breast cancer.
In addition, patients who were older at diagnosis were more likely to follow screening guidelines for colorectal and breast cancers, whereas those who were younger at diagnosis were more likely to follow screening guidelines for cardiomyopathy.
During a median follow-up of 7.8 years, the proportion of time spent adherent was 43% for cardiomyopathy, 14% for colorectal cancer, and 10% for breast cancer.
Survivors who attended a long-term follow-up clinic in the previous year had low adherence rates as well, though they were higher than in the rest of the cohort. In this group, the proportion of time that was spent adherent was 71% for cardiomyopathy, 27% for colorectal cancer, and 15% for breast cancer.
Dr. Shuldiner and colleagues are launching a research trial to determine whether a provincial support system can help childhood cancer survivors receive the recommended surveillance. The support system provides survivors with information about screening recommendations, sends them reminders, and shares key information with their family doctors.
“We now understand that childhood cancer survivors need help to complete the recommended tests,” said Dr. Shuldiner. “If the trial is successful, we hope it will be implemented in Ontario.”
Survivorship Care Plans
Low screening rates may result from a lack of awareness about screening recommendations and the negative long-term effects of cancer treatments, the study authors wrote. Cancer survivors, caregivers, family physicians, specialists, and survivor support groups can share the responsibility of spreading awareness and adhering to guidelines, they noted. In some cases, a survivorship care plan (SCP) may help.
“SCPs are intended to improve adherence by providing follow-up information and facilitating the transition from cancer treatment to survivorship and from pediatric to adult care,” Adam Yan, MD, a staff oncologist and oncology informatics lead at the Hospital for Sick Children in Toronto, told this news organization.
Dr. Yan, who wasn’t involved with this study, has researched surveillance adherence for secondary cancers and cardiac dysfunction among childhood cancer survivors. He and his colleagues found that screening rates were typically low among survivors who faced high risks for cardiac dysfunction and breast, colorectal, or skin cancers.
However, having a survivorship care plan seemed to help, and survivors treated after 1990 were more likely to have an SCP.
“SCP possession by high-risk survivors was associated with increased breast, skin, and cardiac surveillance,” he said. “It is uncertain whether SCP possession leads to adherence or whether SCP possession is a marker of survivors who are focused on their health and thus likely to adhere to preventive health practices, including surveillance.”
The study was funded by the Canadian Institutes of Health Research and ICES, which receives support from the Ontario Ministry of Health and the Ministry of Long-Term Care. Dr. Shuldiner received a Canadian Institutes of Health Research Health System Impact Postdoctoral Fellowship in support of the work. Dr. Yan disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Among childhood cancer survivors in Ontario, Canada, who faced an elevated risk due to chemotherapy or radiation treatments, 53% followed screening recommendations for cardiomyopathy, 13% met colorectal cancer screening guidelines, and 6% adhered to breast cancer screening guidelines.
“Although over 80% of children newly diagnosed with cancer will become long-term survivors, as many as four out of five of these survivors will develop a serious or life-threatening late effect of their cancer therapy by age 45,” lead author Jennifer Shuldiner, PhD, MPH, a scientist at Women’s College Hospital Institute for Health Systems Solutions and Virtual Care in Toronto, told this news organization.
For instance, the risk for colorectal cancer in childhood cancer survivors is two to three times higher than it is among the general population, and the risk for breast cancer is similar between those who underwent chest radiation and those with a BRCA mutation. As many as 50% of those who received anthracycline chemotherapy or radiation involving the heart later develop cardiotoxicity.
The North American Children’s Oncology Group has published long-term follow-up guidelines for survivors of childhood cancer, yet many survivors don’t follow them because of lack of awareness or other barriers, said Dr. Shuldiner.
“Prior research has shown that many survivors do not complete these recommended tests,” she said. “With better knowledge of this at-risk population, we can design, test, and implement appropriate interventions and supports to tackle the issues.”
The study was published online on March 11 in CMAJ.
Changes in Adherence
The researchers conducted a retrospective population-based cohort study analyzing Ontario healthcare administrative data for adult survivors of childhood cancer diagnosed between 1986 and 2014 who faced an elevated risk for therapy-related colorectal cancer, breast cancer, or cardiomyopathy. The research team then assessed long-term adherence to the North American Children’s Oncology Group guidelines and predictors of adherence.
Among 3241 survivors, 3205 (99%) were at elevated risk for cardiomyopathy, 327 (10%) were at elevated risk for colorectal cancer, and 234 (7%) were at elevated risk for breast cancer. In addition, 2806 (87%) were at risk for one late effect, 345 (11%) were at risk for two late effects, and 90 (3%) were at risk for three late effects.
Overall, 53%, 13%, and 6% were adherent to their recommended surveillance for cardiomyopathy, colorectal cancer, and breast cancer, respectively. Over time, adherence increased for colorectal cancer and cardiomyopathy but decreased for breast cancer.
In addition, patients who were older at diagnosis were more likely to follow screening guidelines for colorectal and breast cancers, whereas those who were younger at diagnosis were more likely to follow screening guidelines for cardiomyopathy.
During a median follow-up of 7.8 years, the proportion of time spent adherent was 43% for cardiomyopathy, 14% for colorectal cancer, and 10% for breast cancer.
Survivors who attended a long-term follow-up clinic in the previous year had low adherence rates as well, though they were higher than in the rest of the cohort. In this group, the proportion of time that was spent adherent was 71% for cardiomyopathy, 27% for colorectal cancer, and 15% for breast cancer.
Shuldiner and colleagues are launching a research trial to determine whether a provincial support system can help childhood cancer survivors receive the recommended surveillance. The support system provides information about screening recommendations to survivors as well as reminders and sends key information to their family doctors.
“We now understand that childhood cancer survivors need help to complete the recommended tests,” said Dr. Shuldiner. “If the trial is successful, we hope it will be implemented in Ontario.”
Survivorship Care Plans
Low screening rates may result from a lack of awareness about screening recommendations and the negative long-term effects of cancer treatments, the study authors wrote. Cancer survivors, caregivers, family physicians, specialists, and survivor support groups can share the responsibility of spreading awareness and adhering to guidelines, they noted. In some cases, a survivorship care plan (SCP) may help.
“SCPs are intended to improve adherence by providing follow-up information and facilitating the transition from cancer treatment to survivorship and from pediatric to adult care,” Adam Yan, MD, a staff oncologist and oncology informatics lead at the Hospital for Sick Children in Toronto, told this news organization.
Dr. Yan, who wasn’t involved with this study, has researched surveillance adherence for secondary cancers and cardiac dysfunction among childhood cancer survivors. He and his colleagues found that screening rates were typically low among survivors who faced high risks for cardiac dysfunction and breast, colorectal, or skin cancers.
However, having a survivorship care plan seemed to help, and survivors treated after 1990 were more likely to have an SCP.
“SCP possession by high-risk survivors was associated with increased breast, skin, and cardiac surveillance,” he said. “It is uncertain whether SCP possession leads to adherence or whether SCP possession is a marker of survivors who are focused on their health and thus likely to adhere to preventive health practices, including surveillance.”
The study was funded by the Canadian Institutes of Health Research and ICES, which receives support from the Ontario Ministry of Health and the Ministry of Long-Term Care. Dr. Shuldiner received a Canadian Institutes of Health Research Health System Impact Postdoctoral Fellowship in support of the work. Dr. Yan disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Multiple Sclerosis Highlights From ACTRIMS 2024
Andrew Solomon, MD, from the University of Vermont in Burlington, highlights key findings presented at the Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS) Forum 2024.
Dr Solomon begins by discussing a study on the potential benefits of antipyretics to manage overheating associated with exercise, a common symptom among MS patients. Results showed that MS patients who took aspirin or acetaminophen had less increase in body temperature after a maximal exercise test than those who took placebo.
He next reports on a study that examined whether a combination of two imaging biomarkers specific for MS, namely the central vein sign and the paramagnetic rim lesion, could improve diagnostic specificity. This study found that the presence of at least one of the signs contributed to improved diagnosis.
Dr Solomon then discusses a post hoc analysis of the ULTIMATE I and II trials, which reconsidered how relapses of MS are confirmed. The study found that follow-up MRI could distinguish relapse from pseudoexacerbation.
Finally, he reports on a study that examined the feasibility and tolerability of low-field brain MRI in MS. The equipment is smaller, portable, and more cost-effective than standard MRI and has high acceptability from patients. Although the precision of these devices needs further testing, Dr Solomon suggests that portable MRI could make MS diagnosis and monitoring available to broader populations.
--
Andrew J. Solomon, MD, Professor, Neurological Sciences, Larner College of Medicine, University of Vermont; Division Chief, Multiple Sclerosis, University Health Center, Burlington, Vermont
Andrew J. Solomon, MD, has disclosed the following relevant financial relationships: Received research grant from: Bristol Myers Squibb
Disadvantaged Neighborhoods Tied to Higher Dementia Risk, Brain Aging
Living in a disadvantaged neighborhood is associated with accelerated brain aging and a higher risk for early dementia, regardless of income level or education, new research suggested.
“If you want to prevent dementia and you’re not asking someone about their neighborhood, you’re missing information that’s important to know,” lead author Aaron Reuben, PhD, postdoctoral scholar in neuropsychology and environmental health at Duke University, Durham, North Carolina, said in a news release.
The study was published online in Alzheimer’s & Dementia.
Higher Risk in Men
Few interventions exist to halt or delay the progression of Alzheimer’s disease and related dementias (ADRD), which has increasingly led to a focus on primary prevention.
Although previous research pointed to a link between socioeconomically disadvantaged neighborhoods and a greater risk for cognitive deficits, mild cognitive impairment, dementia, and poor brain health, the timeline for the emergence of that risk is unknown.
To fill in the gaps, investigators studied data on 1.4 million New Zealand residents, dividing neighborhoods into quintiles based on level of disadvantage (assessed by the New Zealand Index of Deprivation) to see whether dementia diagnoses followed neighborhood socioeconomic gradients.
After adjusting for covariates, they found that overall, those living in disadvantaged areas were slightly more likely to develop dementia across the 20-year study period (adjusted hazard ratio [HR], 1.09; 95% CI, 1.08-1.10).
The more disadvantaged the neighborhood, the higher the dementia risk, with a 43% higher risk for ADRD among those in the highest quintile than among those in the lowest quintile (HR, 1.43; 95% CI, 1.36-1.49).
The effect was larger in men than in women and in younger vs older individuals, with the youngest age group showing 21% greater risk in women and 26% greater risk in men vs the oldest age group.
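Hazard ratios like these are typically estimated with Cox proportional hazards models. A minimal sketch of the general approach, assuming the Python lifelines library and entirely hypothetical person-level data (the paper's exact model specification is not reproduced here):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical person-level records: follow-up time (years), dementia
# event flag, deprivation quintile (1 = least disadvantaged), age at entry
df = pd.DataFrame({
    "years_followed": [18.2, 20.0, 12.5, 20.0, 9.1, 15.3, 11.0, 20.0],
    "dementia":       [1,    0,    1,    0,    1,   0,    1,    0],
    "deprivation_q":  [5,    1,    2,    4,    5,   3,    3,    5],
    "age_at_entry":   [61,   58,   66,   59,   70,  63,   72,   57],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="dementia")
cph.print_summary()  # the exp(coef) column holds adjusted hazard ratios
```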
Dementia Prevention Starts Early
Researchers then turned to the Dunedin Study, a cohort of 938 New Zealanders (50% female) followed from birth to age 45 to track their psychological, social, and physiological health with brain scans, memory tests, and cognitive self-assessments.
The analysis suggested that by age 45, those living in more disadvantaged neighborhoods across adulthood had accumulated a significantly greater number of midlife risk factors for later ADRD.
They also had worse structural brain integrity, with each standard deviation increase in neighborhood disadvantage associated with a thinner cortex, greater white matter hyperintensities volume, and older brain age.
Those living in poorer areas had lower cognitive test scores, reported more issues with everyday cognitive function, and showed a greater reduction in IQ from childhood to midlife. Brain scans also showed that their mean brain age was 2.98 years older than that of residents of the least disadvantaged areas (P = .001).
Limitations included the study’s observational design, which could not establish causation, and the fact that the researchers did not have access to individual-level socioeconomic information for the entire population. Additionally, brain-integrity measures in the Dunedin Study were largely cross-sectional.
“If you want to truly prevent dementia, you’ve got to start early because 20 years before anyone will get a diagnosis, we’re seeing dementia’s emergence,” Dr. Reuben said. “And it could be even earlier.”
Funding for the study was provided by the National Institutes of Health; UK Medical Research Council; Health Research Council of New Zealand; Brain Research New Zealand; New Zealand Ministry of Business, Innovation, & Employment; and the Duke University and the University of North Carolina Alzheimer’s Disease Research Center. The authors declared no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM ALZHEIMER’S AND DEMENTIA
Billions Spent on DMD Meds Despite Scant Proof of Efficacy
Three genetically targeted drugs for Duchenne muscular dystrophy (DMD) — eteplirsen, golodirsen, and casimersen — cost the US health care system more than $3 billion between 2016 and 2022, despite a lack of confirmatory efficacy data, a new analysis showed.
“We were certainly surprised to see how much was spent on these drugs during the period when we were still waiting for evidence to confirm whether or not they were effective,” study investigator Benjamin Rome, MD, MPH, with the Program on Regulation, Therapeutics, and Law, Harvard Medical School and Brigham and Women’s Hospital, Boston, told this news organization.
“With these drugs often costing over $1 million a year, these results show how spending can add up even for drugs that treat a rare disease,” Dr. Rome added.
The study was published online March 11, 2024, in JAMA.
No Confirmatory Research
Investigators estimated public and private spending on eteplirsen, golodirsen, and casimersen for DMD between 2016 and 2022 — years in which these drugs were marketed without the required confirmatory studies completed.
Net sales for the three drugs, which reflect rebates and statutory discounts to Medicaid or 340B entities, totaled $3.1 billion during the study period. Estimated Medicaid and Medicare spending accounted for $1.2 billion of that total: Medicaid programs spent $1.1 billion (34% of US net sales), and Medicare spent $104 million (3%).
Overall sales for the drugs increased from $7 million in 2016 to $879 million in 2022, while Medicaid and Medicare spending rose from $25 million in 2017 to $327 million in 2022.
Most of the spending on these therapies was for eteplirsen ($2.6 billion [82%]), “the efficacy of which has yet to be determined in a confirmatory trial more than 7 years after the drug’s accelerated approval,” the authors noted.
Of the total amount spent on the three drugs, US payers spent an estimated $301 million (10%) on casimersen and $263 million (8%) on golodirsen.
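As a quick arithmetic check, the published shares follow directly from the dollar figures above; small discrepancies (e.g., 84% vs the reported 82% for eteplirsen) reflect rounding of the inputs. A short Python snippet:

```python
total = 3.1e9  # combined 2016-2022 US net sales for the three drugs

spending = {
    "eteplirsen": 2.6e9,
    "casimersen": 301e6,
    "golodirsen": 263e6,
    "Medicaid":   1.1e9,
    "Medicare":   104e6,
}

for name, dollars in spending.items():
    # rounded inputs, so shares land within 1-2 points of the published ones
    print(f"{name}: {dollars / total:.0%} of US net sales")
```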
The findings point to the importance of follow-up on drugs that are approved on the basis of preliminary evidence, Dr. Rome said.
“Congress and the US Food and Drug Administration have already made some important changes to the accelerated approval pathway, so hopefully we won’t see cases of multi-year delays in the future,” he said.
“Payers, including public payers like Medicare and Medicaid, need tools to financially encourage companies to complete the follow-up trials, such as paying less for drugs with accelerated approval or engaging in outcomes-based contracts to ensure they don’t pay billions of dollars for drugs that ultimately turn out not to be effective,” Dr. Rome added.
Reached for comment, Adam C. Powell, PhD, president of Payer+Provider Syndicate, noted that when a condition affects a small population, as is the case with muscular dystrophy, there are fewer people over whom to spread the cost of treatment development.
Dr. Powell pointed to a recent report that showed the average cost of developing a new drug exceeds $2 billion. The finding in the current study, that three DMD treatments had combined net sales of $3.1 billion over a 7-year period, “suggests that their developers may not have yet recouped their development costs,” Dr. Powell told this news organization.
“Unless the cost of drug development can be lessened through innovations in artificial intelligence or other means, high spending per patient for drugs addressing uncommon conditions is to be expected,” noted Dr. Powell, who was not part of the study.
“That said, it is concerning when substantial funds are being spent by public payers on treatments that do not work,” he added. “As the authors suggest, one option is to tie reimbursement to efficacy. While patients living with deadly conditions cannot indefinitely wait for treatments to be validated, clawing back payments in the event of inefficacy is always an option.”
The study was funded by Arnold Ventures. Dr. Rome reported receiving grants from the Elevance Health Public Policy Institute, the National Academy for State Health Policy, and several state prescription drug affordability boards outside the submitted work. Dr. Powell had no relevant disclosures.
A version of this article appeared on Medscape.com.
AI May Help Docs Reply to Patients’ Portal Messages
Among the potential uses envisioned for artificial intelligence (AI) in healthcare is decreasing provider burden by using the technology to help respond to patients’ questions submitted through portals.
Easing the burden of responding to each question is a target ripe for solutions: during the COVID pandemic, such messages increased 157% from prepandemic levels, say the authors of a paper published online in JAMA Network Open, and each additional message added 2.3 minutes to daily time spent on the electronic health record (EHR).
Researchers at Stanford Health Care, led by Patricia Garcia, MD, with the department of medicine, conducted a 5-week, prospective, single-group quality improvement study from July 10 through August 13, 2023, at Stanford to test an AI response system.
Large Language Model Used
All attending physicians, advanced practice providers, clinic nurses, and clinical pharmacists from the divisions of primary care and gastroenterology and hepatology were enrolled in a pilot program that offered the option to answer patients’ questions with drafts that were generated by a Health Insurance Portability and Accountability Act–compliant large language model integrated into EHRs. Drafts were then reviewed by the provider.
The study primarily tested whether providers (162 were included) would use the AI-generated drafts. Secondary outcomes included whether using such a system saved time or improved the clinician experience.
Participants received survey emails before and after the pilot period and answered questions on areas including task load, EHR burden, usability, work exhaustion, burnout, and satisfaction.
Researchers found that the overall average utilization rate per clinician was 20%, but there were significant between-group differences. For example, in gastroenterology and hepatology, nurses used the AI tool the most (29%), followed by physicians/APPs (24%), whereas in primary care, clinical pharmacists had the highest use rate (44%) compared with 15% among physicians.
Burden Improved, But Didn’t Save Time
AI did not appear to save time but did improve task load and work exhaustion scores. The report states that there was no change in reply action time, write time, or read time between the prepilot and pilot periods. However, there were significant reductions in the physician task load score derivative (mean [SD], 61.31 [17.23] pre survey vs 47.26 [17.11] post survey; paired difference, −13.87; 95% CI, −17.38 to −9.50; P < .001), and work exhaustion scores decreased by a third of a point (mean [SD], 1.95 [0.79] pre survey vs 1.62 [0.68] post survey; paired difference, −0.33; 95% CI, −0.50 to −0.17; P < .001).
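For context, paired differences and confidence intervals of this kind are conventionally derived with a paired t test on pre/post scores from the same respondents. A minimal sketch with hypothetical survey data (not the study's):

```python
import numpy as np
from scipy import stats

# Hypothetical paired task-load scores for the same six clinicians
pre  = np.array([62.0, 70.5, 55.0, 48.0, 66.5, 59.0])
post = np.array([50.0, 52.5, 47.0, 40.0, 51.5, 45.0])

diff = post - pre
result = stats.ttest_rel(post, pre)          # paired t test
ci = stats.t.interval(0.95, len(diff) - 1,   # 95% CI of the mean difference
                      loc=diff.mean(), scale=stats.sem(diff))

print(f"mean paired difference: {diff.mean():.2f}")
print(f"95% CI: {ci[0]:.2f} to {ci[1]:.2f}; P = {result.pvalue:.4f}")
```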
The authors wrote that improvements in task load and emotional exhaustion scores suggest that generated replies have the potential to lessen cognitive burden and burnout. Though the AI tool didn’t save time, editing drafts may be less cognitively taxing for providers than writing responses from scratch, the authors suggest.
Quality of AI Responses
Comments about the voice and/or tone of the AI response messages were the most common and had the highest absolute number of negative comments (10 positive, 2 neutral, and 14 negative). Comments about the length of the draft message (too long or too short) were the most negative proportionally (1 positive, 2 neutral, and 8 negative).
Comments on accuracy of the draft response were fairly even — 4 positive and 5 negative — but there were no adverse safety signals, the authors report.
The providers had high expectations about use and quality of the tool that “were either met or exceeded at the end of the pilot,” Dr. Garcia and coauthors write. “Given the evidence that burnout is associated with turnover, reductions in clinical activity, and quality, even a modest improvement may have a substantial impact.”
One coauthor reported grants from Google, Omada Health, and PredictaMed outside the submitted work. Another coauthor reported having a patent for Well-being Index Instruments and Mayo Leadership Impact Index, with royalties paid from Mayo Clinic, and receiving honoraria for presenting grand rounds, keynote lectures, and advising health care organizations on clinician well-being. No other disclosures were reported.
FROM JAMA NETWORK OPEN
Most Cancer Trial Centers Located Closer to White, Affluent Populations
This inequity may be potentiating the underrepresentation of racially minoritized and socioeconomically disadvantaged populations in clinical trials, suggesting that employment of satellite hospitals is needed to expand access to investigational therapies, reported lead author Hassal Lee, MD, PhD, of Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, and colleagues.
“Minoritized and socioeconomically disadvantaged populations are underrepresented in clinical trials,” the investigators wrote in JAMA Oncology. “This may reduce the generalizability of trial results and propagate health disparities. Contributors to inequitable trial participation include individual-level factors and structural factors.”
Specifically, travel time to trial centers, as well as socioeconomic deprivation, can reduce the likelihood of trial participation.
“Data on these parameters and population data on self-identified race exist, but their interrelation with clinical research facilities has not been systematically analyzed,” they wrote.
To try to draw comparisons between the distribution of patients of different races and socioeconomic statuses and the locations of clinical research facilities, Dr. Lee and colleagues aggregated data from the US Census, National Trial registry, Nature Index of Cancer Research Health Institutions, OpenStreetMap, National Cancer Institute–designated Cancer Centers list, and National Homeland Infrastructure Foundation. They then characterized catchment population demographics within 30-, 60-, and 120-minute driving commute times of all US hospitals, along with a more focused look at centers capable of conducting phase 1, phase 2, and phase 3 trials.
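The catchment-construction step can be approximated with public data. A schematic Python sketch with hypothetical census-tract records, using straight-line distance as a crude stand-in for the road-network drive times the investigators computed:

```python
import numpy as np
import pandas as pd

# Hypothetical census-tract table: centroids plus demographics
tracts = pd.DataFrame({
    "lat": [40.71, 40.80, 41.20], "lon": [-74.01, -73.95, -73.70],
    "population": [4000, 5200, 3100],
    "pct_white": [0.45, 0.38, 0.71],
    "median_income": [72000, 58000, 91000],
})

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (a proxy; the study used drive times)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * np.arcsin(np.sqrt(a))

center_lat, center_lon = 40.75, -74.00  # hypothetical trial center
# ~30 km as a rough urban stand-in for a 30-minute drive (assumption)
within = haversine_km(tracts.lat, tracts.lon, center_lat, center_lon) <= 30

catchment = tracts[within]
w = catchment.population / catchment.population.sum()
print("catchment share White:", round((catchment.pct_white * w).sum(), 3))
print("pop-weighted mean of tract median incomes:",
      round((catchment.median_income * w).sum()))
```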
These efforts revealed broad geographic inequity. The 78 major centers that conduct 94% of all US cancer trials are located within 30 minutes of populations that have a 10.1% higher proportion of self-identified White individuals than the average US county, and a median income $18,900 higher than average (unpaired mean differences).
The publication also includes several maps characterizing racial and socioeconomic demographics within various catchment areas. For example, centers in New York City, Houston, and Chicago have the most diverse catchment populations within a 30-minute commute. Maps of all cities in the United States with populations greater than 500,000 are available in a supplementary index.
“This study indicates that geographical population distributions may present barriers to equitable clinical trial access and that data are available to proactively strategize about reduction of such barriers,” Dr. Lee and colleagues wrote.
The findings call attention to modifiable socioeconomic factors associated with trial participation, they added, like financial toxicity and affordable transportation, noting that ethnic and racial groups consent to trials at similar rates after controlling for income.
In addition, Dr. Lee and colleagues advised clinical trial designers to enlist satellite hospitals to increase participant diversity, since long commutes exacerbate “socioeconomic burdens associated with clinical trial participation,” with trial participation decreasing as commute time increases.
“Existing clinical trial centers may build collaborative efforts with nearby hospitals closer to underrepresented populations or set up community centers to support new collaborative networks to improve geographical access equity,” they wrote. “Methodologically, our approach is transferable to any country, region, or global effort with sufficient source data and can inform decision-making along the continuum of cancer care, from screening to implementing specialist care.”
A coauthor disclosed relationships with Flagship Therapeutics, Leidos Holding Ltd, Pershing Square Foundation, and others.
FROM JAMA ONCOLOGY