‘Robust evidence’ that exercise cuts Parkinson’s risk in women
Investigators found that among almost 99,000 women participating in the ongoing E3N study, those who exercised the most frequently had up to a 25% lower risk for Parkinson’s disease (PD) than their less-active counterparts.
The results highlight the importance of exercising early in mid-life to prevent PD later on, study investigator Alexis Elbaz, MD, PhD, research director, French Institute of Health and Medical Research (Inserm), Paris, said in an interview.
This is especially critical because there is no cure and no disease-modifying treatment for PD; the medications that are available are aimed at symptom reduction.
“Finding ways to prevent or delay the onset of Parkinson’s is really important, and physical activity seems to be one of the possible strategies to reduce the risk,” Dr. Elbaz said.
The study was published online in Neurology.
Direct protective effect?
Results from previous research examining the relationship between physical activity and PD have been inconsistent. One meta-analysis showed a statistically significant association among men but a nonsignificant link in women.
The investigators noted that some of the findings from previous studies may have been affected by reverse causation. As nonmotor symptoms such as constipation and subtle motor signs such as tremor and balance issues can present years before a PD diagnosis, patients may reduce their physical activity because of such symptoms.
To address this potential source of bias, the researchers used “lag” analyses, in which data on physical activity levels in the years closest to a PD diagnosis are omitted.
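As a rough illustration of the idea (not the authors’ actual analysis code), the short Python sketch below drops activity records collected within the lag window before diagnosis; the record years, scores, and 10-year lag are hypothetical.

    # Illustrative only: hypothetical activity records for one participant.
    records = [
        {"year": 1993, "score": 42.0},
        {"year": 2000, "score": 36.5},
        {"year": 2008, "score": 21.0},  # close to diagnosis; may reflect prodromal decline
    ]
    diagnosis_year = 2012
    lag_years = 10

    # Keep only records collected at least `lag_years` before diagnosis, so that
    # prodromal drops in activity do not bias the exposure estimate.
    lagged_records = [r for r in records if diagnosis_year - r["year"] >= lag_years]
    print(lagged_records)  # the 2008 record is excluded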
The study relied on data from the E3N, an ongoing cohort study of 98,995 women, born between 1925 and 1950 and recruited in 1990, who were affiliated with a French national health insurance plan that primarily covers teachers. Participants completed a questionnaire on lifestyle and medical history at baseline and follow-up questionnaires every 2-3 years.
In six of the questionnaires, participants provided details about various recreational, sports, and household activities – for example, walking, climbing stairs, gardening, and cleaning. The authors assigned metabolic equivalent of task (MET) values to each activity and multiplied each MET value by the activity’s frequency and duration to obtain a physical activity score.
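In general terms, such a score is the sum of MET × weekly frequency × duration across activities, as in the small Python sketch below; the activity list, MET values, and weekly amounts are illustrative assumptions, not figures from the paper.

    # Hypothetical weekly activity log: (activity, MET value, sessions/week, hours/session)
    activities = [
        ("walking", 3.5, 5, 1.0),
        ("gardening", 4.0, 2, 1.5),
        ("stair climbing", 8.0, 7, 0.1),
    ]

    # Physical activity score in MET-hours per week: MET x frequency x duration, summed.
    score = sum(met * sessions * hours for _, met, sessions, hours in activities)
    print(f"physical activity score: {score:.1f} MET-hours/week")  # 35.1 here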
Definite and probable PD cases were determined through self-reported physician diagnoses, anti-parkinsonian drug claims, and medical records, with diagnoses verified by an expert panel.
Researchers investigated the relationship between physical activity and PD onset in a nested case-control study that included 25,075 women (1,196 PD cases and 23,879 controls) with a mean age of 71.9 years. They found physical activity was significantly lower in cases than in controls throughout follow-up.
The difference between cases and controls began to increase at 10 years before diagnosis (P-interaction = .003). “When we looked at the trajectories of physical activity in PD patients and in controls, we saw that in the 10 years before the diagnosis, physical activity declined at a steeper rate in cases than in controls. We think this is because those subtle prodromal symptoms cause people to exercise less,” said Dr. Elbaz.
In the main analysis, which had a 10-year lag, 1,074 women developed incident PD during a mean follow-up of 17.2 years. Those in the highest quartile of physical activity had a 25% lower risk for PD vs. those in the lowest quartile (adjusted hazard ratio [HR], 0.75; 95% confidence interval [CI], 0.63-0.89).
The risk for PD decreased with increasing levels of physical activity in a linear fashion, noted Dr. Elbaz. “So doing even a little bit of physical activity is better than doing nothing at all.”
Analyses that included 15-year and 20-year lag times had similar findings.
Sensitivity analyses that adjusted for the Mediterranean diet and caffeine and dairy intake also yielded comparable results. This was also true for analyses that adjusted for body mass index and comorbidities such as hypertension, hypercholesterolemia, diabetes, and cardiovascular disease, all of which can affect PD risk.
“This gives weight to the idea that diabetes or cardiovascular diseases do not explain the relationship between physical activity and PD, which means the most likely hypothesis is that physical activity has a direct protective effect on the brain,” said Dr. Elbaz.
Studies have shown that physical activity affects brain plasticity and can reduce oxidative stress in the brain – a key mechanism involved in PD, he added.
Physical activity is a low-risk, inexpensive, and accessible intervention. But the study was not designed to determine the types of physical activity that are most protective against PD.
The study’s main limitation is that it used self-reported physical activity rather than objective measures such as accelerometers. In addition, the participants were not necessarily representative of the general population.
Robust evidence
In an accompanying editorial, Lana M. Chahine, MD, associate professor in the department of neurology at the University of Pittsburgh, and Sirwan K. L. Darweesh, MD, PhD, Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Center of Expertise for Parkinson and Movement Disorders, Nijmegen, the Netherlands, said the study “provides robust evidence” that physical activity reduces risk for PD in women.
“These results show that the field is moving in the right direction and provide a clear rationale for exercise trials to prevent or delay the onset of manifest PD in at-risk individuals,” they wrote.
The study highlights “gaps” in knowledge that merit closer attention; for example, “further insight is warranted on how much the effects on PD vary by type, intensity, frequency, and duration of physical activity,” the editorialists noted.
Another gap is how the accuracy of assessment of physical activity can be improved beyond self-report. “Wearable sensor technology now offers the potential to assess physical activity remotely and objectively in prevention trials,” they added.
Other areas that need to be explored relate to the mechanisms by which physical activity reduces PD risk and the extent to which the effects of physical activity vary between individuals, Dr. Chahine and Dr. Darweesh noted.
Commenting for this article, Michael S. Okun, MD, executive director of the Fixel Institute for Neurological Diseases at University of Florida Health, and medical adviser for the Parkinson’s Foundation, said the findings are “significant and important.”
Based on only a handful of previous studies, it was assumed that physical activity was associated with a reduced risk of Parkinson’s disease only in men, said Dr. Okun. “The current dataset was larger and included longer-term outcomes, and it informs the field that exercise may be important for reducing the risk of Parkinson’s disease in men as well as in women.”
The investigators, the editorialists, and Dr. Okun reported no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM NEUROLOGY
Deep sleep may mitigate the impact of Alzheimer’s pathology
Investigators found that deep sleep, also known as non-REM (NREM) slow-wave sleep, can protect memory function in cognitively normal adults with a high beta-amyloid burden.
“Think of deep sleep almost like a life raft that keeps memory afloat, rather than memory getting dragged down by the weight of Alzheimer’s disease pathology,” senior investigator Matthew Walker, PhD, professor of neuroscience and psychology, University of California, Berkeley, said in a news release.
The study was published online in BMC Medicine.
Resilience factor
Studying resilience to existing brain pathology is “an exciting new research direction,” lead author Zsófia Zavecz, PhD, with the Center for Human Sleep Science at the University of California, Berkeley, said in an interview.
“That is, what factors explain the individual differences in cognitive function despite the same level of brain pathology, and how do some people with significant pathology have largely preserved memory?” she added.
The study included 62 cognitively normal older adults from the Berkeley Aging Cohort Study.
Sleep EEG recordings were obtained over 2 nights in a sleep lab, and PET scans were used to quantify beta-amyloid. Half of the participants had a high beta-amyloid burden and half were beta-amyloid negative.
After the sleep studies, all participants completed a memory task involving matching names to faces.
The results suggest that deep NREM slow-wave sleep significantly moderates the effect of beta-amyloid status on memory function.
Specifically, NREM slow-wave activity selectively supported superior memory function in adults with high beta-amyloid burden, who are most in need of cognitive reserve (B = 2.694, P = .019), the researchers report.
In contrast, adults without significant beta-amyloid pathological burden – and thus without the same need for cognitive reserve – did not similarly benefit from NREM slow-wave activity (B = –0.115, P = .876).
The findings remained significant after adjusting for age, sex, body mass index, gray matter atrophy, and previously identified cognitive reserve factors, such as education and physical activity.
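For readers less familiar with moderation analyses, the sketch below shows one common way such an interaction model is specified in Python with statsmodels; the synthetic data, variable names, and covariates are assumptions for illustration and do not reproduce the study’s actual model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic data standing in for the real cohort (illustrative only).
    rng = np.random.default_rng(0)
    n = 62
    df = pd.DataFrame({
        "swa": rng.normal(size=n),               # NREM slow-wave activity (standardized)
        "amyloid": rng.integers(0, 2, size=n),   # 1 = high beta-amyloid burden, 0 = negative
        "age": rng.normal(75, 5, size=n),
        "memory": rng.normal(size=n),            # face-name memory score
    })

    # The swa:amyloid interaction term asks whether the association between
    # slow-wave activity and memory differs by beta-amyloid status (moderation).
    fit = smf.ols("memory ~ swa * amyloid + age", data=df).fit()
    print(fit.summary())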
Dr. Zavecz said there are several potential reasons why deep sleep may support cognitive reserve.
One is that during deep sleep specifically, memories are replayed in the brain, and this results in a “neural reorganization” that helps stabilize the memory and make it more permanent.
“Other explanations include deep sleep’s role in maintaining homeostasis in the brain’s capacity to form new neural connections and providing an optimal brain state for the clearance of toxins interfering with healthy brain functioning,” she noted.
“The extent to which sleep could offer a protective buffer against severe cognitive impairment remains to be tested. However, this study is the first step in hopefully a series of new research that will investigate sleep as a cognitive reserve factor,” said Dr. Zavecz.
Encouraging data
Reached for comment, Percy Griffin, PhD, Alzheimer’s Association director of scientific engagement, said although the study sample is small, the results are “encouraging because sleep is a modifiable factor and can therefore be targeted.”
“More work is needed in a larger population before we can fully leverage this stage of sleep to reduce the risk of developing cognitive decline,” Dr. Griffin said.
Also weighing in on this research, Shaheen Lakhan, MD, PhD, a neurologist and researcher in Boston, said the study is “exciting on two fronts – we may have an additional marker for the development of Alzheimer’s disease to predict risk and track disease, but also targets for early intervention with sleep architecture–enhancing therapies, be they drug, device, or digital.”
“For the sake of our brain health, we all must get very familiar with the concept of cognitive or brain reserve,” said Dr. Lakhan, who was not involved in the study.
“Brain reserve refers to our ability to buttress against the threat of dementia and classically it’s been associated with ongoing brain stimulation (i.e., higher education, cognitively demanding job),” he noted.
“This line of research now opens the door that optimal sleep health – especially deep NREM slow wave sleep – correlates with greater brain reserve against Alzheimer’s disease,” Dr. Lakhan said.
The study was supported by the National Institutes of Health and the University of California, Berkeley. Dr. Walker serves as an advisor to and has equity interest in Bryte, Shuni, Oura, and StimScience. Dr. Zavecz and Dr. Lakhan report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
FROM BMC MEDICINE
Parkinson’s in Marines linked to toxic drinking water at Camp Lejeune
Parkinson’s disease (PD) risk appears to be significantly higher among Marine veterans who were exposed decades ago to contaminated drinking water at Camp Lejeune in Jacksonville, N.C.
In one of the best-documented, large-scale contaminations in U.S. history, the drinking water at the Marine Corps base was contaminated with trichloroethylene (TCE) and other volatile organic compounds from about 1953 to 1987.
The new study of more than 340,000 service members found the risk of PD was 70% higher in Marines stationed at Camp Lejeune in North Carolina during the years 1975-1985, compared with Marines stationed at Camp Pendleton in Oceanside, Calif.
“This is by far the largest study to look at the association of TCE and PD and the evidence is pretty strong,” lead investigator Samuel M. Goldman, MD, MPH, with University of California, San Francisco, said in an interview.
The link is supported by animal models that show that TCE can induce a neurodegenerative syndrome that is “very similar pathologically to what we see in PD,” Dr. Goldman said.
The study was published online in JAMA Neurology.
‘Hundreds of thousands’ at risk
At Camp Lejeune during the years 1975-1985, the period of maximal contamination, the estimated monthly median TCE level was more than 70 times the Environmental Protection Agency (EPA) maximum contaminant level. Maximum contaminant levels were also exceeded for perchloroethylene (PCE) and vinyl chloride.
Dr. Goldman and colleagues had health data on 158,122 veterans – 84,824 from Camp Lejeune and 73,298 from Camp Pendleton – who served for at least 3 months between 1975 and 1985, with follow-up from Jan. 1, 1997, to Feb. 17, 2021.
Demographic characteristics were similar between the two groups; most were White men with an average age of 59 years.
A total of 430 veterans had PD: 279 from Camp Lejeune (prevalence, 0.33%) and 151 from Camp Pendleton (prevalence, 0.21%).
In multivariable models, Camp Lejeune veterans had a 70% higher risk for PD (odds ratio, 1.70; 95% confidence interval, 1.39-2.07; P < .001).
“Remarkably,” the researchers noted, among veterans without PD, residence at Camp Lejeune was also associated with a significantly higher risk of having several well-established prodromal features of PD, including tremor, suggesting they may be in a prediagnostic phase of evolving PD pathology.
Importantly, they added, in addition to the exposed service members, “hundreds of thousands of family members and civilian workers exposed to contaminated water at Camp Lejeune may also be at increased risk of PD, cancers, and other health consequences. Continued prospective follow-up of this population is essential.”
‘An unreasonable risk’
The new study supports a prior, and much smaller, study by Dr. Goldman and colleagues showing TCE exposure was associated with a sixfold increased risk for PD.
TCE is a ubiquitous environmental contaminant. The EPA Toxics Release Inventory estimates 2.05 million pounds of TCE was released into the environment from industrial sites in 2017.
In an accompanying editorial, E. Ray Dorsey, MD, with the University of Rochester (N.Y.) and coauthors noted the work of Dr. Goldman and colleagues “increases the certainty” that environmental exposure to TCE and the similar compound PCE “contribute importantly to the cause of the world’s fastest-growing brain disease.”
In December, the EPA found that PCE posed “an unreasonable risk” to human health, and 1 month later, it reached the same conclusion for TCE.
“These actions could lay the foundation for increased regulation and possibly a ban of these two chemicals that have contributed to immeasurable death and disability for generations,” Dr. Dorsey and colleagues noted.
“A U.S. ban would be a step forward but would not address the tens of thousands of TCE/PCE-contaminated sites in the U.S. and around the world or the rising global use of the toxic solvents,” they added.
This research was supported by the Department of Veterans Affairs. Dr. Goldman reported no relevant financial relationships. Dr. Dorsey has received personal fees from organizations including the American Neurological Association, Elsevier, International Parkinson and Movement Disorder Society, Massachusetts Medical Society, Michael J. Fox Foundation, National Institutes of Health, and WebMD, as well as numerous pharmaceutical companies.
A version of this article originally appeared on Medscape.com.
FROM JAMA NEUROLOGY
Common gut bacteria linked to Parkinson’s disease
Desulfovibrio bacteria in the gut may play a role in the development of Parkinson’s disease (PD), a small study suggests.
Environmental factors as well as genetics are also suspected to play a role in PD etiology, although the exact cause remains unknown.
“Our findings indicate that specific strains of Desulfovibrio bacteria are likely to cause Parkinson’s disease,” study investigator Per Erik Saris, PhD, from the University of Helsinki, Finland, says in a news release.
The study was published online in Frontiers in Cellular and Infection Microbiology.
Screen and treat?
The study builds on earlier work by the researchers showing that Desulfovibrio bacteria were both more prevalent and more abundant in patients with PD, especially those with more severe disease, than in healthy individuals.
Desulfovibrio is a genus of gram-negative bacteria commonly found in aquatic environments in which levels of organic material are elevated, as well as in waterlogged soils.
In their latest study, Dr. Saris and colleagues looked for Desulfovibrio species in fecal samples from 10 patients with PD and their healthy spouses. Isolated Desulfovibrio strains were fed to a strain of Caenorhabditis elegans roundworms that expressed human alpha-synuclein (alpha-syn) fused with yellow fluorescent protein.
They found that worms fed Desulfovibrio bacteria from patients with PD harbored significantly more (P < .001) and larger alpha-syn aggregates (P < .001) than worms fed Desulfovibrio bacteria from healthy individuals or worms fed Escherichia coli strains.
In addition, worms fed Desulfovibrio strains from patients with PD died in significantly greater numbers than worms fed E. coli bacteria (P < .01).
Desulfovibrio strains isolated from patients with PD and strains isolated from healthy individuals appear to have different traits. Comparative genomics studies are needed to identify genetic differences and pathogenic genes in Desulfovibrio strains from patients with PD, the researchers note.
“Taking into account that aggregation of alpha-syn is a hallmark of PD, the ability of Desulfovibrio bacteria to induce alpha-syn aggregation in large numbers and sizes, as demonstrated in the present study, provides further evidence for the pathogenic role of Desulfovibrio bacteria in PD, as previously suggested,” they add.
The findings highlight the potential for screening and targeted removal of harmful Desulfovibrio bacteria, Dr. Saris suggests in the news release.
No clinical implications
In a comment, James Beck, PhD, chief scientific officer at the Parkinson’s Foundation, cautioned that “this research is in a very early stage, uses a nonvertebrate animal model, and the number of participants is small.
“Understanding the role of the gut microbiome in influencing PD is in its infancy. These are important steps to determining what – if any – link may be between gut bacteria and PD,” Dr. Beck said.
“Right now, there are no implications for the screening/treatment of carriers,” Dr. Beck said.
“It seems that a lot of people, whether with PD or not, harbor Desulfovibrio bacteria in their gut. More research is needed to understand what is different between the Desulfovibrio bacteria of people with PD vs. those who do not have PD,” Dr. Beck added.
The study was supported by the Magnus Ehrnrooth Foundation and the Jane and Aatos Erkko Foundation. Dr. Saris and Dr. Beck have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FRONTIERS IN CELLULAR AND INFECTION MICROBIOLOGY
CGM completes picture of A1c in type 2 diabetes
Time in range (TIR) was inversely related to A1c, with the strongest correlation following treatment intensification, in a post hoc analysis of the SWITCH PRO clinical trial.
However, “there was a wide scatter of data, indicating that TIR (and other metrics) provides information about glycemic control that cannot be discerned from A1c alone, and which at least complements it,” Ronald M. Goldenberg, MD, from LMC Diabetes & Endocrinology in Thornhill, Ont., and colleagues write in their article published in Diabetes Therapy.
Other work has shown that more than a third of people with type 2 diabetes are not achieving the internationally recommended A1c target of < 7% to 8.5%, they note.
When used with A1c, CGM data – such as TIR, time below range (TBR), and time above range (TAR) – “provide a more complete picture of glucose levels throughout the day and night,” they write.
“This may help empower people with diabetes to better manage their condition, giving them practical insights into the factors driving daily fluctuations in glucose levels, such as diet, exercise, insulin dosage, and insulin timing,” they add. “These metrics may also be used to inform treatment decisions by health care professionals.”
“Ultimately,” the researchers conclude, “it is hoped that the use of these new metrics should lead to an improved quality of glycemic control and, in turn, to a reduction in the number of diabetes-related complications.”
‘Important study’
Invited to comment, Celeste C. Thomas, MD, who was not involved with the research, said: “This study is important because it is consistent with previous analyses that found a correlation between TIR and A1c.”
But, “I was surprised by the scatter plots which identified participants with TIR of 70% that also had A1c > 9%,” she added. “This highlights the importance of using multiple glycemic metrics to understand an individual’s risk for diabetes complications and to be aware of the limitations of the metrics.”
Dr. Thomas, from the University of Chicago, also noted that CGM is used in endocrinology clinics and increasingly in primary care clinics, “often to determine glycemic patterns to optimize therapeutic management but also to review TIR and, importantly, time below range to reduce the incidence of hypoglycemia.”
And people with type 2 diabetes are using CGM, Dr. Thomas noted, to understand their individual responses to medications, food choices, sleep quality and duration, exercise, and other day-to-day variables that affect glucose levels. “In my clinical practice, the information provided by personal CGM is empowering,” she said.
Effective April 4, 2023, Medicare “allows for the coverage of CGM in patients [with type 2 diabetes] treated with one injection of insulin daily and those not taking insulin but with a history of hypoglycemia,” Dr. Thomas noted, whereas “previously, patients needed to be prescribed at least three injections of insulin daily. Other insurers will hopefully soon follow.”
“I foresee CGM and TIR being widely used in clinical practice for people living with type 2 diabetes,” she said, “especially those who have ever had an A1c over 8%, those with a history of hypoglycemia, and those treated with medications that are known to cause hypoglycemia.”
How does TIR compare with A1c?
Dr. Goldenberg and colleagues set out to better understand how the emerging TIR metric compares with the traditional A1c value.
They performed a post hoc analysis of data from the phase 4 SWITCH PRO study of basal insulin–treated patients with type 2 diabetes who had at least one risk factor for hypoglycemia.
The patients were treated with insulin degludec or glargine 100 during a 16-week titration and 2-week maintenance phase, and then crossed over to the other treatment for the same time periods.
Glycemic control was evaluated using a blinded professional CGM (Abbott FreeStyle Libre Pro). The primary outcome was TIR, which was defined as the percentage of time spent in the blood glucose range of 70-180 mg/dL.
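For readers less familiar with these CGM-derived metrics, the short sketch below shows how TIR, TBR, and TAR can be computed from a series of glucose readings, using the 70-180 mg/dL target range described above; the readings, threshold defaults, and function name are illustrative assumptions, not code from the SWITCH PRO analysis.

```python
def cgm_metrics(glucose_mg_dl, low=70, high=180):
    """Return TIR, TBR, and TAR as percentages of all CGM readings.

    Assumes readings are taken at regular intervals, so the share of
    readings in each band approximates the share of time spent there.
    """
    n = len(glucose_mg_dl)
    if n == 0:
        raise ValueError("no CGM readings supplied")
    below = sum(1 for g in glucose_mg_dl if g < low)    # time below range
    above = sum(1 for g in glucose_mg_dl if g > high)   # time above range
    within = n - below - above                          # time in range
    return {
        "TIR_%": 100.0 * within / n,
        "TBR_%": 100.0 * below / n,
        "TAR_%": 100.0 * above / n,
    }

# Illustrative readings only (mg/dL), not study data.
readings = [62, 95, 110, 150, 178, 190, 210, 130, 88, 72]
print(cgm_metrics(readings))  # {'TIR_%': 70.0, 'TBR_%': 10.0, 'TAR_%': 20.0}
```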
There were 419 participants in the full analysis. Patients had a mean age of 63 years, and 48% were men. They had a mean body mass index of 32 kg/m2 and had diabetes for a mean of 15 years.
There was a moderate inverse linear correlation between TIR and A1c at baseline, which became stronger following treatment intensification during the maintenance periods in the full cohort, and in a subgroup of patients with median A1c ≥ 7.5% (212 patients).
This correlation between TIR and A1c was poorer in the subgroup of patients with baseline median A1c < 7.5% (307 patients).
The data were widely scattered, “supporting the premise that A1c and TIR can be relatively crude surrogates of each other when it comes to individual patients,” Dr. Goldenberg and colleagues note.
Where individual patients have both low A1c and low TIR values, this might indicate frequent episodes of hypoglycemia.
A few individual patients had TIR > 70% but A1c approaching 9%. These patients may have different red blood cell physiology whereby A1c does not reflect average glycemic values, the researchers suggest.
The study was sponsored by Novo Nordisk and several authors are Novo Nordisk employees. The complete author disclosures are listed with the article. Dr. Thomas has reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM DIABETES THERAPY
One in five brain injury trials shows errors, signs of spin
LOS ANGELES – About one in five clinical trials of brain injury treatments shows errors or signs of spin, a new systematic review suggests.
“This is a concerning result,” said general physician Lucas Piason F. Martins, MD, of the Bahiana School of Medicine and Public Health, Salvador, Brazil. “Many of these trials have been included in clinical guidelines and cited extensively in systematic reviews and meta-analyses, especially those related to hypothermia therapy.”
Dr. Martins presented the findings at the annual meeting of the American Association of Neurological Surgeons.
Defining spin
In recent years, medical researchers have sought to define and identify spin in medical literature. According to a 2017 report in PLOS Biology, “spin refers to reporting practices that distort the interpretation of results and mislead readers so that results are viewed in a more favorable light.”
Any spin can be dangerous, Dr. Martins said, because it “can potentially mislead readers and affect the interpretation of study results, which in turn can impact clinical decision-making.”
For the new report, a systematic review, Dr. Martins and colleagues examined 150 studies published in 18 top-ranked journals including the Journal of Neurotrauma (26%), the Journal of Neurosurgery (15%), Critical Care Medicine (9%), and the New England Journal of Medicine (8%).
Studies were published between 1960 and 2020. The review protocol was previously published in BMJ Open.
According to the report, most of the 32 studies with spin (75%) had a “focus on statistically significant results not based on primary outcome.”
For example, Dr. Martins said in an interview that the abstract for a study about drug treatment of brain contusions highlighted a secondary result instead of the main finding that the medication had no effect. Another study of treatment for severe closed head injuries focused on a subgroup outcome.
As Dr. Martins noted, it’s potentially problematic for studies to have several outcomes, measure outcomes in different ways, and have multiple time points without a predefined primary outcome. “A positive finding based on such strategies could potentially be explained by chance alone,” he said.
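To illustrate why a finding drawn from many outcomes, measurement methods, and time points can arise by chance, the sketch below computes the probability of at least one nominally significant result when several independent comparisons are each tested at the conventional .05 level; the comparison counts are hypothetical, and real trial outcomes are usually correlated, so the figures are only an upper-bound illustration.

```python
# Chance of at least one false-positive finding among k independent
# comparisons, each tested at alpha = .05. Correlated outcomes would
# lower these figures somewhat, but the inflation pattern is the same.
def familywise_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    rate = familywise_false_positive(k)
    print(f"{k:>2} comparisons -> {rate:.0%} chance of a spurious 'positive' result")
# 1 comparison  ->  5%
# 5 comparisons -> 23%
# 10 comparisons -> 40%
# 20 comparisons -> 64%
```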
The researchers also reported that 65% of the studies with spin highlighted “the beneficial effect of the treatment despite statistically nonsignificant results” and that 9% had incorrect statistical analysis.
The findings are especially noteworthy because “the trials we analyzed were deemed to have the highest quality of methodology,” Dr. Martins said.
The researchers didn’t identify specific studies that they deemed to have spin, and they won’t do so, Dr. Martins said. The authors do plan to reveal which journals were most spin-heavy but only when these findings are published.
Were the study authors trying to mislead readers? Not necessarily. Researchers “may search for positive results to confirm their beliefs, although with good intentions,” Dr. Martins said, adding that the researchers found that “positive research tends to be more cited.”
They also reported that studies with smaller sample sizes were more likely to have spin (P = .04).
At 21%, the percentage of studies with spin was lower than that found in some previous reports that analyzed medical literature in other specialties.
A 2019 study of 93 randomized clinical studies in cardiology, for example, found spin in 57% of abstracts and 67% of full texts. The lower number in the new study may be due to its especially conservative definition of spin, Dr. Martins said.
Appropriate methodology
Cardiologist Richard Krasuski, MD, of Duke University Medical Center, Durham, N.C., who coauthored the 2019 study into spin in cardiology studies, told this news organization that the new analysis follows appropriate methodology and appears to be valid.
It makes sense, he said, that smaller studies had more spin: “It is much harder to show statistical significance in small studies and softer endpoints can be harder to predict. Small neutral trials are also much harder to publish in high-level journals. This all increases the tendency to spin the results so the reviewer and eventually the reader is more captivated.”
Why is there so much spin in medical research? “As an investigator, you always hope to positively impact patient health and outcomes, so there is a tendency to look at secondary analyses to have something good to emphasize,” he said. “This is an inherent trait in most of us, to find something good we can focus on. I do believe that much of this is subconscious and perhaps with noble intent.”
Dr. Krasuski said that he advises trainees to look at the methodology of studies, not just the abstract or discussion sections. “You don’t have to be a trained statistician to identify how well the findings match the author’s interpretation.
“Always try to identify what the primary outcome of the study was at the time of the design and whether the investigators achieved their objective. As a reviewer, my own personal experience in research into spin makes me more cognizant of its existence, and I generally require authors to reword and tone down their message if it is not supported by the data.”
What’s next? The investigators want to look for spin in the wider neurosurgery literature, Dr. Martins said, with an eye toward developing “practical strategies to assess spin and give pragmatic recommendations for good practice in clinical research.”
No study funding is reported. Dr. Martins has no disclosures, and several study authors reported funding from the UK National Institute for Health Research. Dr. Krasuski has no disclosures.
A version of this article first appeared on Medscape.com.
FROM AANS 2023
Hearing aids are a ‘powerful’ tool for reducing dementia risk
Hearing aids may help protect against dementia in people with hearing loss, new research confirms. A large observational study from the United Kingdom showed a 42% increased risk for dementia in people with hearing loss compared with their peers with no hearing trouble. In addition, there was no increased risk in those with hearing loss who used hearing aids.
“The evidence is building that hearing loss may be the most impactful modifiable risk factor for dementia in mid-life, but the effectiveness of hearing aid use on reducing the risk of dementia in the real world has remained unclear,” Dongshan Zhu, PhD, with Shandong University, Jinan, China, said in a news release.
“Our study provides the best evidence to date to suggest that hearing aids could be a minimally invasive, cost-effective treatment to mitigate the potential impact of hearing loss on dementia,” Dr. Zhu said.
The study, which was published online in Lancet Public Health, comes on the heels of the 2020 Lancet Commission report on dementia, which suggested hearing loss may be linked to approximately 8% of worldwide dementia cases.
‘Compelling’ evidence
For the study, investigators analyzed longitudinal data on 437,704 individuals, most of whom were White, from the UK Biobank (54% female; mean age at baseline, 56 years). Roughly three quarters of the cohort had no hearing loss and one quarter had some level of hearing loss, with 12% of these individuals using hearing aids.
After the researchers controlled for relevant cofactors, compared with people without hearing loss, those with hearing loss who were not using hearing aids had an increased risk for all-cause dementia (hazard ratio [HR], 1.42; 95% confidence interval [CI], 1.29-1.56).
No increased risk was seen in people with hearing loss who were using hearing aids (HR, 1.04; 95% CI, 0.98-1.10).
The apparent protective association of hearing aid use was observed for all-cause dementia and for cause-specific dementia subtypes, including Alzheimer’s disease, vascular dementia, and non–Alzheimer’s disease nonvascular dementia.
The data also suggest that the protection against dementia conferred by hearing aid use most likely stems from direct effects from hearing aids rather than indirect mediators, such as social isolation, loneliness, and low mood.
Dr. Zhu said the findings highlight the “urgent need” for the early use of hearing aids when an individual starts having trouble hearing.
“A group effort from across society is necessary, including raising awareness of hearing loss and the potential links with dementia; increasing accessibility to hearing aids by reducing cost; and more support for primary care workers to screen for hearing impairment, raise awareness, and deliver treatment such as fitting hearing aids,” Dr. Zhu said.
Writing in a linked comment, Gill Livingston, MD, and Sergi Costafreda, MD, PhD, with University College London, noted that with the addition of this study, “the evidence that hearing aids are a powerful tool to reduce the risk of dementia in people with hearing loss, is as good as possible without randomized controlled trials, which might not be practically possible or ethical because people with hearing loss should not be stopped from using effective treatments.”
“The evidence is compelling that treating hearing loss is a promising way of reducing dementia risk. This is the time to increase awareness of and detection of hearing loss, as well as the acceptability and usability of hearing aids,” Dr. Livingston and Dr. Costafreda added.
High-quality evidence – with caveats
Several experts offered perspective on the analysis in a statement from the U.K.-based nonprofit Science Media Centre, which was not involved with the conduct of this study. Charles Marshall, MRCP, PhD, with Queen Mary University of London, said that the study provides “high-quality evidence” that those with hearing loss who use hearing aids are at lower risk for dementia than are those with hearing loss who do not use hearing aids.
“This raises the possibility that a proportion of dementia cases could be prevented by using hearing aids to correct hearing loss. However, the observational nature of this study makes it difficult to be sure that hearing aids are actually causing the reduced risk of dementia,” Dr. Marshall added.
“Hearing aids produce slightly distorted sound, and the brain has to adapt to this in order for hearing aids to be helpful,” he said. “People who are at risk of developing dementia in the future may have early changes in their brain that impair this adaptation, and this may lead to them choosing to not use hearing aids. This would confound the association, creating the appearance that hearing aids were reducing dementia risk, when actually their use was just identifying people with relatively healthy brains,” Dr. Marshall added.
Tara Spires-Jones, PhD, with the University of Edinburgh, said this “well-conducted” study confirms previous similar studies showing an association between hearing loss and dementia risk.
Echoing Dr. Marshall, Dr. Spires-Jones noted that this type of study cannot prove conclusively that hearing loss causes dementia.
“For example,” she said, “it is possible that people who are already in the very early stages of disease are less likely to seek help for hearing loss. However, on balance, this study and the rest of the data in the field indicate that keeping your brain healthy and engaged reduces dementia risk.”
Dr. Spires-Jones said that she agrees with the investigators that it’s “important to help people with hearing loss to get effective hearing aids to help keep their brains engaged through allowing richer social interactions.”
This study was funded by the National Natural Science Foundation of China and Shandong Province, Taishan Scholars Project, China Medical Board, and China Postdoctoral Science Foundation. Dr. Zhu, Dr. Livingston, Dr. Costafreda, Dr. Marshall, and Dr. Spires-Jones have no relevant disclosures.
A version of this article originally appeared on Medscape.com.
What are the healthiest drinks for patients with type 2 diabetes?
The researchers examined data on almost 15,500 participants with type 2 diabetes from two major studies, finding that the highest level of consumption of SSBs was associated with a 20% increased risk of all-cause mortality and a 25% raised risk of cardiovascular disease, compared with consumption of the least amounts of these products.
The research, published in BMJ, also showed that drinking coffee, tea, plain water, and low-fat milk was associated with a lower risk of all-cause death and that switching from SSBs to the other beverages was linked to lower mortality.
“Overall, these results provide additional evidence that emphasizes the importance of beverage choices in maintaining overall health among adults with diabetes,” say senior author Le Ma, PhD, department of nutrition, Harvard School of Public Health, Boston, and colleagues.
“Collectively, these findings all point in the same direction. Lower consumption of SSBs and higher consumption of coffee, tea, plain water, or low-fat milk are optimal for better health outcomes in adults with type 2 diabetes,” Nita G. Forouhi, MD, PhD, emphasizes in an accompanying editorial.
Choice of drink matters
Dr. Forouhi, from the University of Cambridge (England), warned, however, that the findings “cannot be considered cause and effect,” despite the large-scale analysis.
Moreover, “questions remain,” such as the impact of beverage consumption on coronary heart disease and stroke risk, and cancer mortality, with the current study providing “inconclusive” data on the latter.
There was also no data on the addition of sugar to tea or coffee, “so the comparative health effects of unsweetened and sweetened hot beverages remain unclear,” Dr. Forouhi points out. Also unknown is whether the type of tea consumed has a differential effect.
Despite these and other reservations, she says that overall, “Choice of beverage clearly matters.”
“The case for avoiding sugar-sweetened beverages is compelling, and it is supported by various fiscal measures in more than 45 countries. It is reasonable to shift the focus to drinks that are most likely to have positive health impacts: coffee, tea, plain water, and low-fat milk,” she notes.
Dr. Forouhi ends by underlining that the current findings tally with those seen in the general population, so “one important message is that having diabetes does not have to be especially restrictive.”
Expanding the evidence
It was estimated that 537 million adults worldwide had type 2 diabetes in 2021, a figure set to increase to 783 million by 2045, say the authors.
Individuals with type 2 diabetes have an increased risk of cardiovascular disease, among many other comorbidities, as well as premature death. Dietary interventions can play an important role in managing these risks.
Recommendations on the healthiest beverages to drink are largely based on evidence from the general population, and data are limited on the best options for adults with type 2 diabetes, who have altered metabolism, the researchers note.
To expand on this, they examined data from the Nurses’ Health Study, which enrolled female registered nurses aged 30-55 years and was initiated in 1976, and the Health Professionals Follow-Up Study, which included male health professionals aged 40-75 years and was initiated in 1996.
For the current analysis, 11,399 women and 4,087 men with type 2 diabetes were included from the two studies, of whom 2,715 were diagnosed before study entry.
Participants’ average daily beverage intake was assessed using a validated food frequency questionnaire administered every 2-4 years. SSBs included caffeinated and caffeine-free colas, other carbonated SSBs, and noncarbonated SSBs, such as fruit punches, lemonades, or other fruit drinks.
During 285,967 person-years of follow-up, there were 7,638 (49.3%) deaths, and 3,447 (22.3%) cases of incident cardiovascular disease were documented during 248,447 person-years of follow-up.
Fully adjusted multivariate analysis comparing the lowest and highest beverage intake indicated that SSBs were associated with a significant increase in all-cause mortality, at a pooled hazard ratio of 1.20, or 1.08 for each additional serving per day (P = .01).
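As a rough guide to how the per-serving estimate scales, the sketch below compounds the reported hazard ratio of 1.08 per additional daily serving under the log-linear dose-response assumption that such models typically make; this is an interpretive illustration, not a calculation reported by the study authors.

```python
# Under a log-linear (proportional hazards) dose-response assumption,
# the hazard ratio for k additional daily servings is the per-serving
# hazard ratio raised to the power k.
per_serving_hr = 1.08  # reported HR per additional daily serving of SSBs

for k in (1, 2, 3):
    print(f"{k} extra serving(s)/day -> HR approximately {per_serving_hr ** k:.2f}")
# 1 -> 1.08, 2 -> 1.17, 3 -> 1.26
```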
In contrast, the associations between all-cause mortality and consumption of artificially sweetened beverages, fruit juice, and full-fat milk were not significant, whereas coffee (HR, 0.74), tea (HR, 0.79), plain water (HR, 0.77), and low-fat milk (HR, 0.88) were linked to a reduced risk.
The team reported that there were similar associations between beverage intake and cardiovascular disease incidence, at an HR of 1.25 for SSBs, as well as for cardiovascular disease mortality, at an HR of 1.29.
Participants who increased their tea, coffee, and low-fat milk consumption during the course of the study had lower all-cause mortality than those who did not. Switching from SSBs to other beverages was also associated with lower mortality.
The researchers note, however, that there are “several potential limitations” to their study, including that “individual beverage consumption may be correlated with other dietary and lifestyle risk factors for cardiovascular disease incidence and mortality among adults with [type 2] diabetes.”
The study was sponsored by the National Institutes of Health. Dr. Ma has reported no relevant financial relationships. Disclosures for the other authors are listed with the article. Dr. Forouhi has declared receiving support from the U.K. Medical Research Council Epidemiology Unit and U.K. National Institute for Health and Care Research Biomedical Research Centre Cambridge.
A version of this article first appeared on Medscape.com.
FROM THE BMJ
Obstructive sleep apnea linked to early cognitive decline
In a pilot study out of King’s College London, participants with severe OSA showed poorer executive functioning and poorer social and emotional recognition than healthy controls.
Major risk factors for OSA include obesity, high blood pressure, smoking, high cholesterol, and being middle-aged or older. Because some researchers have hypothesized that cognitive deficits could be driven by such comorbidities, the study investigators recruited middle-aged men with no medical comorbidities.
“Traditionally, we were more concerned with sleep apnea’s metabolic and cardiovascular comorbidities, and indeed, when cognitive deficits were demonstrated, most were attributed to them, and yet, our patients and their partners/families commonly tell us differently,” lead investigator Ivana Rosenzweig, MD, PhD, of King’s College London, who is also a consultant in sleep medicine and neuropsychiatry at Guy’s and St Thomas’ Hospital, London, said in an interview.
“Our findings provide a very important first step towards challenging the long-standing dogma that sleep apnea has little to do with the brain – apart from causing sleepiness – and that it is a predominantly nonneuro/psychiatric illness,” added Dr. Rosenzweig.
The findings were published online in Frontiers in Sleep.
Brain changes
The researchers wanted to understand how OSA may be linked to cognitive decline in the absence of cardiovascular and metabolic conditions.
To accomplish this, the investigators studied 27 men between the ages of 35 and 70 with a new diagnosis of mild to severe OSA without any comorbidities (16 with mild OSA and 11 with severe OSA). They also studied a control group of seven men matched for age, body mass index, and education level.
The team tested participants’ cognitive performance using the Cambridge Neuropsychological Test Automated Battery and found that the most significant deficits for the OSA group, compared with controls, were in areas of visual matching ability (P < .0001), short-term visual recognition memory, nonverbal patterns, executive functioning and attentional set-shifting (P < .001), psychomotor functioning, and social cognition and emotional recognition (P < .05).
On the latter two tests, impaired participants were less likely to accurately identify the emotion on computer-generated faces. Those with mild OSA performed better than those with severe OSA on these tasks, but rarely worse than controls.
Dr. Rosenzweig noted that the findings were one-of-a-kind because of the recruitment of patients with OSA who were otherwise healthy and nonobese, “something one rarely sees in the sleep clinic, where we commonly encounter patients with already developed comorbidities.
“In order to truly revolutionize the treatment for our patients, it is important to understand how much the accompanying comorbidities, such as systemic hypertension, obesity, diabetes, hyperlipidemia, and other various serious cardiovascular and metabolic diseases and how much the illness itself may shape the demonstrated cognitive deficits,” she said.
She also said that “it is widely agreed that medical problems in middle age may predispose to increased prevalence of dementia in later years.
Moreover, the very link between sleep apnea and Alzheimer’s, vascular and mixed dementia is increasingly demonstrated,” said Dr. Rosenzweig.
Although women typically have a lower prevalence of OSA than men, Dr. Rosenzweig said women were not included in the study “because we are too complex. As a lifelong feminist it pains me to say this, but to get any authoritative answer on our physiology, we need decent funding in place so that we can take into account all the intricacies of the changes of our sleep, physiology, and metabolism.
“While there is always lots of noise about how important it is to answer these questions, there are only very limited funds available for the sleep research,” she added.
Dr. Rosenzweig’s future research will focus on the potential link between OSA and neuroinflammation.
In a comment, Liza Ashbrook, MD, associate professor of neurology at the University of California, San Francisco, said the findings “add to the growing list of negative health consequences associated with sleep apnea.”
She said that, if the cognitive changes found in the study are, in fact, caused by OSA, it is unclear whether they are the beginning of long-term cognitive changes or a symptom of fragmented sleep that may be reversible.
Dr. Ashbrook said she would be interested in seeing research on understanding the effect of OSA treatment on the affected cognitive domains.
The study was funded by the Wellcome Trust. No relevant financial relationships were reported.
A version of this article originally appeared on Medscape.com.
FROM FRONTIERS IN SLEEP
Intermittent fasting plus early eating may prevent type 2 diabetes
Intermittent fasting (IF) plus early time-restricted eating may help prevent type 2 diabetes, indicate the results of a randomized controlled trial.
The study involved more than 200 individuals randomized to one of three groups: eat only in the morning (from 8:00 a.m. to noon) followed by 20 hours of fasting 3 days per week and eat as desired on the other days; daily calorie restriction to 70% of requirements; or standard weight loss advice.
The IF plus early time-restricted eating intervention was associated with a significant improvement in a key measure of glucose control versus calorie restriction at 6 months, while both interventions were linked to benefits in terms of cardiovascular risk markers and body composition, compared with the standard weight loss advice.
However, the research, published in Nature Medicine, showed that the additional benefit of IF plus early time-restricted eating did not persist, and less than half of participants were still following the plan at 18 months, compared with almost 80% of those in the calorie-restriction group.
“Following a time-restricted, IF diet could help lower the chances of developing type 2 diabetes,” senior author Leonie K. Heilbronn, PhD, University of Adelaide, South Australia, said in a press release.
This is “the largest study in the world to date, and the first powered to assess how the body processes and uses glucose after eating a meal,” with the latter being a better indicator of diabetes risk than a fasting glucose test, added first author Xiao Tong Teong, a PhD student, also at the University of Adelaide.
“The results of this study add to the growing body of evidence to indicate that meal timing and fasting advice extends the health benefits of a restricted-calorie diet, independently from weight loss, and this may be influential in clinical practice,” Ms. Teong added.
Adherence to IF plus early time-restricted eating difficult
Asked to comment, Krista Varady, PhD, said that the study design “would have been stronger if the time-restricted eating and IF interventions were separated” and compared.
“Time-restricted eating has been shown to naturally reduce calorie intake by 300-500 kcal/day,” she said in an interview, “so I’m not sure why the investigators chose to combine [it] with IF. It ... defeats the point of time-restricted eating.”
Dr. Varady, who recently coauthored a review of the clinical application of IF for weight loss, also doubted whether individuals would adhere to combined early time-restricted eating and IF. “In all honesty, I don’t think anyone would follow this diet for very long,” she said.
She added that the feasibility of this particular approach is “very questionable. In general, people don’t like diets that require them to skip dinner with family/friends on multiple days of the week,” explained Dr. Varady, professor of nutrition at the University of Illinois, Chicago. “These regimens make social eating very difficult, which results in high attrition.
“Indeed, evidence from a recent large-scale observational study of nearly 800,000 adults shows that Americans who engage in time-restricted eating placed their eating window in the afternoon or evening,” she noted.
Dr. Varady therefore suggested that future trials should test “more feasible time-restricted eating approaches,” such as those with later eating windows and without “vigilant calorie monitoring.”
“These types of diets are much easier to follow and are more likely to produce lasting weight and glycemic control in people with obesity and prediabetes,” she observed.
A novel way to cut calories?
The Australian authors say there is growing interest in extending the established health benefits of calorie restriction through new approaches such as timing of meals and prolonged fasting, with IF – defined as fasting interspersed with days of ad libitum eating – gaining in popularity as an alternative to simple calorie restriction.
Time-restricted eating, which emphasizes shorter daily eating windows in alignment with circadian rhythms, has also become popular in recent years, although the authors acknowledge that current evidence suggests any benefits over calorie restriction alone in terms of body composition, blood lipids, or glucose parameters are small.
To examine the combination of IF plus early time-restricted eating in the DIRECT trial, the team recruited individuals aged 35-75 years who had a score of at least 12 on the Australian Type 2 Diabetes Risk Assessment Tool but did not have a diagnosis of diabetes and had stable weight for more than 6 months prior to study entry.
The participants were randomized to one of three groups:
- IF plus early time-restricted eating, which allowed consumption of 30% of calculated baseline energy requirements between 8:00 a.m. and midday, followed by a 20-hour fast from midday on 3 nonconsecutive days per week. They consumed their regular diet on nonfasting days.
- Calorie restriction, where they consumed 70% of daily calculated baseline energy requirements each day and were given rotating menu plans, but no specific mealtimes.
- Standard care, where they were given a booklet on current guidelines, with no counseling or meal replacement.
There were clinic visits every 2 weeks for the first 6 months of follow-up, and then monthly visits for 12 months. The two intervention groups had one-on-one diet counseling for the first 6 months. All groups were instructed to maintain their usual physical activity levels.
A total of 209 individuals were enrolled between Sept. 26, 2018, and May 4, 2020. Their mean age was 58 years, and 57% were women. Mean body mass index was 34.8 kg/m2.
In all, 40.7% of participants were allocated to IF plus early time-restricted eating, 39.7% to calorie restriction, and the remaining 19.6% to standard care.
The results showed that IF plus early time-restricted eating was associated with a significantly greater improvement in the primary outcome of postprandial glucose area under the curve (AUC) at month 6 compared with calorie restriction, at –10.1 mg/dL/min versus –3.6 mg/dL/min (P = .03).
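For context, a postprandial glucose area under the curve condenses serial glucose measurements taken after a test meal into a single summary value. The sketch below is a generic, hypothetical illustration using the trapezoidal rule; the timepoints and glucose values are invented, and the trial’s actual measurement protocol is not described here.

```python
# Hypothetical sketch: computing a postprandial glucose AUC with the
# trapezoidal rule. The sampling times and glucose values are invented
# for illustration; they are not taken from the trial.
times = [0, 30, 60, 90, 120]            # minutes after the test meal
glucose = [95, 150, 170, 140, 115]      # plasma glucose in mg/dL

def trapezoid_auc(x, y):
    """Area under the curve by the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(x) - 1))

total_auc = trapezoid_auc(times, glucose)                             # mg/dL x min
incremental_auc = trapezoid_auc(times, [g - glucose[0] for g in glucose])

print(f"Total AUC: {total_auc:.0f} mg/dL x min")
print(f"Incremental AUC above baseline: {incremental_auc:.0f} mg/dL x min")
```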
“To our knowledge, no [prior] studies have been powered for postprandial assessments of glycemia, which are better indicators of diabetes risk than fasting assessment,” the authors underlined.
IF plus early time-restricted eating was also associated with greater reductions in postprandial insulin AUC versus calorie restriction at 6 months (P = .04). However, the differences between the IF plus early time-restricted eating and calorie restriction groups for postmeal insulin did not remain significant at 18 months of follow-up.
Both IF plus early time-restricted eating and calorie restriction were associated with greater reductions in A1c levels at 6 months versus standard care, but there was no significant difference between the two active interventions (P = .46).
Both interventions were also associated with improvements in markers of cardiovascular risk versus standard care, such as systolic blood pressure at 2 months, diastolic blood pressure at 6 months, and fasting triglycerides at both time points, with no significant differences between the two intervention groups.
IF plus early time-restricted eating and calorie restriction were also both associated with greater reductions in BMI and fat mass in the first 6 months, as well as in waist circumference.
Calorie restriction easier to stick to, less likely to cause fatigue
When offered the chance to modify their diet plan at 6 months, 46% of participants in the IF plus early time-restricted eating group said they would maintain 3 days of restrictions per week, while 51% chose to reduce the restrictions to 2 days per week.
In contrast, 97% of those who completed the calorie-restriction plan indicated they would continue with their current diet plan.
At 18 months, 42% of participants in the IF plus early time-restricted eating group said they still undertook 2-3 days of restrictions per week, while 78% of those assigned to calorie restriction reported that they followed a calorie-restricted diet.
Fatigue was more common with IF plus early time-restricted eating, reported by 56% of participants versus 37% of those following calorie restriction, and 35% of those in the standard care group at 6 months. Headaches and constipation were more common in the intervention groups than with standard care.
The study was supported by a National Health and Medical Research Council Project Grant, an Australian Government Research Training Program Scholarship from the University of Adelaide, and a Diabetes Australia Research Program Grant.
No relevant financial relationships were declared.
A version of this article originally appeared on Medscape.com.
FROM NATURE MEDICINE