Test Accurately Diagnoses Myasthenia Gravis
A test for ocular vestibular evoked myogenic potentials (oVEMP) has a sensitivity of 89% and a specificity of 64% for detecting myasthenia gravis, according to a case–control study of 55 adults published online ahead of print January 20 in Neurology.
"The presence of an oVEMP decrement is a sensitive and specific marker for myasthenia gravis," said Yulia Valko, MD, a resident at University Hospital Zurich in Switzerland, and her associates. "This test allows direct and noninvasive examination of extraocular muscle activity, with similarly good diagnostic accuracy in ocular and generalized myasthenia gravis."
Myasthenia gravis usually manifests first in the eyes, and early diagnosis and treatment can limit generalization. Nearly half of patients remain undiagnosed a year after onset, however, partly because standard tests often fail to detect isolated ocular myasthenia gravis, the researchers noted. The recently developed oVEMP test directly measures the activity of the extraocular inferior oblique muscle in response to repeated bursts of vibratory stimulation to the forehead. A decreased response, or decrement, indicates failed neuromuscular transmission, as with standard repetitive nerve stimulation. The researchers evaluated the test in 13 patients with isolated ocular myasthenia gravis, 14 patients with generalized myasthenia gravis, and 28 healthy controls. They defined the oVEMP decrement as the decrease between the second stimulus and the average of the fifth through ninth stimuli.
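That definition reduces to a one-line calculation. The sketch below is a minimal illustration under stated assumptions: the amplitude array, its indexing, and the expression of the decrement as a percentage of the second response reflect our reading of the definition, not the authors' published code.

```python
import numpy as np

def ovemp_decrement(amplitudes):
    """Percentage oVEMP decrement over a train of repeated stimuli.

    amplitudes: per-stimulus response amplitudes in stimulus order
    (index 0 = first stimulus). Following the study's definition, the
    decrement compares the mean of the 5th through 9th responses with
    the 2nd response; a negative value indicates a decrementing
    (myasthenic) response.
    """
    second = amplitudes[1]                # 2nd stimulus
    late_mean = np.mean(amplitudes[4:9])  # 5th through 9th stimuli
    return 100.0 * (late_mean - second) / second

# Hypothetical decrementing train recorded at a 20-Hz repetition rate
train = np.array([5.0, 5.2, 4.8, 4.5, 4.2, 4.1, 4.0, 4.0, 3.9])
print(f"decrement = {ovemp_decrement(train):.1f}%")  # about -22%, near the cases' average
```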
A repetition rate of 20 Hz best differentiated between cases (average decrement, –21.5%) and controls (average decrement, –2.8%), the researchers reported. When at least one eye showed a decrement, the ideal cutoff was a decrement of at least 15.2%, which detected myasthenia gravis with a sensitivity of 89% and a specificity of 64%. When both eyes were affected, the ideal cutoff for the smaller of the two decrements was at least 20.4%, which yielded a sensitivity of 100% and a specificity of 63%. For both cutoffs, the test performed similarly in ocular and generalized myasthenia gravis. For the unilateral cutoff, the sensitivity was 92% for patients with isolated ocular myasthenia gravis and 86% for patients with generalized myasthenia gravis. For the bilateral cutoff, specificity was 62% in ocular myasthenia gravis and 64% in generalized myasthenia gravis.
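Applied prospectively, the two cutoffs amount to a simple decision rule over the two eyes' decrements. The following sketch encodes that rule as we read it; the function name, the comparison of decrement magnitudes, and the input convention (negative percentages) are assumptions for illustration.

```python
def mg_positive(decrement_left, decrement_right,
                unilateral_cutoff=15.2, bilateral_cutoff=20.4):
    """Apply the study's reported cutoffs to per-eye oVEMP decrements.

    Inputs are decrements expressed as negative percentages; their
    magnitudes are compared against the cutoffs.
    """
    left, right = abs(decrement_left), abs(decrement_right)
    if min(left, right) >= bilateral_cutoff:
        return "positive (bilateral rule: sensitivity 100%, specificity 63%)"
    if max(left, right) >= unilateral_cutoff:
        return "positive (unilateral rule: sensitivity 89%, specificity 64%)"
    return "negative"

print(mg_positive(-21.5, -25.0))  # both magnitudes exceed 20.4 -> bilateral rule
print(mg_positive(-16.0, -3.0))   # one magnitude exceeds 15.2 -> unilateral rule
```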
The results provide class III evidence that oVEMP can distinguish between patients with myasthenia gravis and healthy controls, "but future studies will need to confirm its diagnostic utility in clinical practice, where the main challenge is differentiation from patients with other neuro-ophthalmologic conditions," the researchers said. "The possibility to apply fast repetition rates is one important advantage of oVEMP, which is not possible by measuring voluntary saccadic eye movements. As a consequence, oVEMP allowed us to unmask myasthenic decrements even in clinically asymptomatic eyes."
Because the study used a confirmed diagnosis of myasthenia gravis as a benchmark, all patients were already being treated with cholinesterase inhibitors. Although participants underwent oVEMP testing in the morning before their first dose of medication, the test needs further study in drug-naïve patients, as well as in patients with worse limitations in their upward gaze, the researchers added.
—Amy Karon
Suggested Reading
Valko Y, Rosengren SM, Jung HH, et al. Ocular vestibular evoked myogenic potentials as a test for myasthenia gravis. Neurology. 2016 Jan 20 [Epub ahead of print].
L-Selectin May Not Predict PML Risk Accurately
The expression of L-selectin (CD62L) on specific T cells in peripheral blood in patients with relapsing forms of multiple sclerosis (MS) does not reliably predict the risk of progressive multifocal leukoencephalopathy (PML) during natalizumab treatment, according to findings published January 26 in Neurology.
These findings contradict those of a previous preliminary study that used a different analytical technique. Investigators in the earlier study found a decrease in the percentage of CD4- and CD3-positive T cells expressing CD62L at least four months and often two years before PML diagnosis. They concluded that measuring the percentage of CD4- and CD3-positive T cells expressing CD62L “may improve stratification of patients taking natalizumab who are at risk for developing PML.”
Linda A. Lieberman, PhD, a research scientist at Biogen in Cambridge, Massachusetts, and colleagues sought to confirm the findings, enhance the reproducibility of the CD62L assay, “and potentially enable the deployment of CD62L as a biomarker for PML in a global setting.” The investigators, however, did not find a significant difference in the percentage of CD62L in cryopreserved peripheral blood mononuclear cells between 104 patients with relapsing forms of MS who received natalizumab and did not develop PML, and 21 patients who developed PML.
In the current study, the investigators detected a large range of CD62L (ie, 0.31% to 68.4%) in a subset of natalizumab-treated MS patients without PML at two time points at least six months apart. Because CD62L and the chemokine receptor CCR7 are coexpressed on CD4- and CD3-positive T cells, the researchers also examined the level of variation in simultaneous measurements of CD62L and CCR7 on the same cells at two separate time points in the same patients. They found that CD62L expression varied substantially, whereas CCR7 varied little, and the difference between the two was significant, “signifying that CD62L is not a stable outcome measure,” they wrote.
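The stability analysis described here, in which one marker is measured twice in the same patients and the paired within-subject changes of two markers are compared, can be sketched generically. The data, variable names, and the choice of a paired Wilcoxon test below are our assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import wilcoxon

def relative_change(t1, t2):
    """Absolute within-subject change between two visits, relative to visit 1."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    return np.abs(t2 - t1) / t1

# Hypothetical paired measurements (% of CD4+/CD3+ T cells) at two visits
cd62l_v1, cd62l_v2 = [30.0, 12.0, 55.0, 8.0, 22.0, 40.0], [18.0, 25.0, 40.0, 2.0, 35.0, 15.0]
ccr7_v1, ccr7_v2 = [60.0, 58.0, 62.0, 61.0, 59.0, 63.0], [59.0, 60.0, 61.0, 62.0, 58.0, 64.0]

delta_cd62l = relative_change(cd62l_v1, cd62l_v2)  # large test-retest changes
delta_ccr7 = relative_change(ccr7_v1, ccr7_v2)     # small test-retest changes
# A usable outcome measure should be stable across visits; compare the
# paired per-subject changes of the two markers.
stat, p = wilcoxon(delta_cd62l, delta_ccr7)
print(f"median change: CD62L {np.median(delta_cd62l):.2f}, "
      f"CCR7 {np.median(delta_ccr7):.2f}; p = {p:.3f}")
```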
Dr. Lieberman and her colleagues also confirmed a positive correlation between lymphocyte viability and CD62L expression, which highlights the “technique-driven variability of the assay” used in the preliminary study.
In patient samples collected at least six months before PML diagnosis, the percentage of CD62L did not discriminate significantly between non-PML and active PML (defined as 0 to 6 months prior to diagnosis). The median percentage of CD62L varied according to the viability of cryopreserved CD4- and CD3-positive T cells. The median percentage of CD62L did not differ between non-PML and pre-PML samples with lymphocyte viability greater than 75% (25.9% vs 26.3%, respectively), but was significantly lower in non-PML and pre-PML samples with lymphocyte viability less than 75% (10.55% and 5.41%, respectively). There was no difference in lymphocyte viability between non-PML and pre-PML samples.
In a case–control comparison of patients receiving natalizumab who had multiple pre-PML samples, nine patients who developed PML had CD62L levels that in most samples were similar to those of nine matched control patients without PML.
Examination of control samples demonstrated that CD62L also varied significantly under various clinical conditions, such as after influenza vaccination and during hospitalization for total knee replacement surgery or methicillin-resistant Staphylococcus aureus infection, the researchers found.
—Jeff Evans
Suggested Reading
Lieberman LA, Zeng W, Singh C, et al. CD62L is not a reliable biomarker for predicting PML risk in natalizumab-treated R-MS patients. Neurology. 2016;86(4):375-381.
High WMH Volume and Depression Increase Risk of Functional Decline
Older adults with depression and a high volume of white matter hyperintensities (WMH) are at increased risk for functional decline, compared with older adults who have not been diagnosed with depression and older adults with low volumes of WMH, according to a study published in the January issue of the American Journal of Geriatric Psychiatry.
In this investigation, 381 individuals age 60 or older responded to the Duke Depression Evaluation Schedule’s questions about their functional limitations. The study population, which comprised 244 patients with major depression and 137 individuals who had never had depression (ie, the controls), also underwent MRI to identify their total volume of WMH in the periventricular region and in deep white matter at baseline. The median length of follow-up for the study’s participants was five years, and some individuals were followed for as many as 16 years.
“We estimated linear mixed models to measure the associations between white matter lesion volume at baseline and change in functional status over time for each participant, controlling for age, sex, race, years of education, Mini-Mental State Examination score, and self-reported hypertension at the time of study enrollment,” said Celia F. Hybels, PhD, Associate Professor in Psychiatry and Behavioral Sciences at Duke University School of Medicine in Durham, North Carolina, and her colleagues.
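In generic form, the analysis described in this quotation is a linear mixed model in which the WMH-by-time interaction carries the association of interest. The notation below is ours, and the random slope is an assumption; the published specification may differ in detail:

```latex
y_{ij} = \beta_0 + \beta_1 t_{ij} + \beta_2\,\mathrm{WMH}_i
       + \beta_3\,(\mathrm{WMH}_i \times t_{ij})
       + \boldsymbol{\gamma}^{\top}\mathbf{x}_i
       + b_{0i} + b_{1i}\,t_{ij} + \varepsilon_{ij}
```

Here $y_{ij}$ is the number of functional limitations reported by participant $i$ at visit $j$, $\mathbf{x}_i$ collects the baseline covariates (age, sex, race, education, Mini-Mental State Examination score, and hypertension), and $b_{0i}$ and $b_{1i}$ are participant-level random effects.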
Study participants with depression and a higher volume of WMH at baseline were at greatest risk for functional decline. Among participants who had never had depression, individuals with a higher WMH volume at the beginning of the study “had a more accelerated rate of functional decline,” compared with the other controls. In contrast, patients with depression and a lower WMH volume at baseline declined at a rate similar to that of controls with lower WMH volume.
The authors found that “depression is a modifier in the association between WMH and functional decline in older adults, which suggests one pathway by which older depressed adults develop disability. Those older adults with increased WMH volume who were also depressed had a sharper increase in the number of limitations over time, compared with older adults with a lower volume of WMH who were also depressed and with older adults without a history of depression. That depression is a moderator in the association between white matter pathology and functional decline suggests a unique contribution of depression.” They concluded by suggesting that follow-up studies use physical performance instead of self-report to measure individuals’ functional status.
—Katie Wagner Lennon
Suggested Reading
Hybels CF, Pieper CF, Payne ME, Steffens DC. Late-life depression modifies the association between cerebral white matter hyperintensities and functional decline among older adults. Am J Geriatr Psychiatry. 2016;24(1):42-49.
Later-Life Weight Loss May Indicate Incipient MCI
Increasing weight loss between midlife and late life is a marker for mild cognitive impairment (MCI), according to a report published online ahead of print February 1 in JAMA Neurology. “While weight loss may not be causally related to MCI, we hypothesize that weight loss may represent a prodromal stage or an early manifestation of MCI,” said Rabe E. Alhurani, MBBS, a neurologist at Mayo Clinic in Rochester, Minnesota, and his associates.
Recent studies have reported a link between weight loss and dementia, “but overall, the findings of different studies have been inconclusive,” they noted. To examine any association between weight loss over time and incident MCI, the investigators analyzed information in the population-based Mayo Clinic Study of Aging database, which included approximately 10,000 residents of Olmsted County, Minnesota, who were between ages 70 and 89 at the beginning of the study in 2004. They focused on 1,895 participants who were cognitively normal at entry into the study and whose medical records included data on weight and height from midlife (ie, ages 40 to 65) onward. All of these study subjects underwent a physical examination and an extensive neuropsychologic evaluation every 15 months for a mean of 4.4 years.
A total of 524 study participants developed MCI during follow-up. The mean weight loss since midlife was significantly greater for people who developed MCI (–2.0 kg) than for those who remained cognitively normal (–1.2 kg). After the data were adjusted to account for patient sex, education level, and APOE ε4 allele status, loss of weight after midlife was robustly associated with incident MCI, and a loss of 5 kg per decade corresponded to a 24% increase in risk of MCI, Dr. Alhurani and his associates reported.
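Under the usual log-linear hazard assumption, the reported effect size can be rescaled as a quick check; the per-kilogram figure below is our arithmetic, not a number from the paper:

```latex
\mathrm{HR}_{5\,\mathrm{kg/decade}} = 1.24
\quad\Longrightarrow\quad
\mathrm{HR}_{1\,\mathrm{kg/decade}} = 1.24^{1/5} \approx 1.044
```

That is, each additional kilogram lost per decade corresponds to roughly a 4% higher hazard of incident MCI.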
These findings remained consistent across all categories of baseline weight, regardless of whether the participants were underweight, normal weight, overweight, or obese at enrollment. The effect sizes of the associations were greater in men than in women, but were significant in both sexes.
This study could not establish causality, but the researchers speculated that the association between weight loss and MCI could result from three possible mechanisms. First, the weight loss could stem from the “anorexia of aging,” meaning that dysfunctional production of certain hormones and signaling molecules, such as cholecystokinin, leptin, cytokines, dynorphin, neuropeptide Y, and serotonin, or dysfunctional dietary intake and energy metabolism could lead patients to eat less, which could in turn raise the risk of MCI.
Second, “neuropsychiatric symptoms such as depression and apathy, which are prodromal and predictors of MCI and dementia, may contribute to decreased appetite and weight loss prior to the diagnosis of these conditions,” they suggested.
Third, weight loss and MCI could share an etiology. Researchers have reported finding protein deposits, including deposits of Lewy bodies, tau, or amyloid, in the olfactory bulb and central olfactory pathways before the onset of dementia, and olfactory dysfunction is a marker for cognitive impairment and dementia. “Impairment in smell with related changes in taste may contribute to decreased appetite, reduced dietary intake, and the weight loss observed with MCI, Alzheimer dementia, and other neurodegenerative conditions,” said Dr. Alhurani.
—Mary Ann Moon
Suggested Reading
Alhurani RE, Vassilaki M, Aakre JA, et al. Decline in weight and incident mild cognitive impairment: Mayo Clinic Study of Aging. JAMA Neurol. 2016 Feb 1 [Epub ahead of print].
Proposal Requires Three Lesions for Diagnosis of MS
A European expert group has proposed several revisions to the 2010 McDonald criteria for the use of MRI in diagnosing multiple sclerosis (MS). In the January 25 online issue of Lancet Neurology, the MAGNIMS collaborative research network asserted that new data on the application of MRI, as well as improvements in MRI technology, demanded changes to the MS diagnostic criteria.
The first proposed recommendation is that three or more focal lesions, rather than a single lesion, should be present for a physician to diagnose the involvement of the periventricular region and to show disease dissemination in space. “A single lesion was deemed not sufficiently specific to determine whether involvement of the periventricular region is due to a demyelinating inflammatory event, and the use of one periventricular lesion for assessing dissemination in space has never been formally validated,” said Massimo Filippi, MD, Professor of Neurology at Vita-Salute San Raffaele University in Milan, and his coauthors. The authors also pointed out that incidental periventricular lesions are observed in as many as 30% of patients with migraine, and in individuals with other neurologic disorders.
In addition, the group recommended that optic nerve lesions be added to the criteria for dissemination in space. “Clinical documentation of optic nerve atrophy or pallor, neurophysiological confirmation of optic nerve dysfunction (slowed conduction), or imaging features of clinically silent optic nerve inflammation (MRI lesions or retinal nerve fiber layer thinning) support dissemination in space and, in patients without concurrent visual symptoms, dissemination in time.”
According to the new recommendations, disease dissemination in space can be shown by the involvement of at least two areas from a list of the following five possibilities: three or more periventricular lesions, one or more infratentorial lesions, one or more spinal cord lesions, one or more optic nerve lesions, or one or more cortical or juxtacortical lesions.
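Taken together, the proposed dissemination-in-space criterion is a count-based rule over five lesion sites. A minimal sketch of that rule as stated above follows; the data structure and function are ours, for illustration only.

```python
# Proposed MAGNIMS dissemination-in-space rule: involvement of at least
# two of five CNS areas, with a per-area minimum lesion count.
DIS_THRESHOLDS = {
    "periventricular": 3,   # raised from one lesion in the 2010 McDonald criteria
    "infratentorial": 1,
    "spinal_cord": 1,
    "optic_nerve": 1,       # newly added site
    "cortical_juxtacortical": 1,
}

def dissemination_in_space(lesion_counts):
    """lesion_counts: mapping of area name -> number of lesions on MRI."""
    involved = sum(
        lesion_counts.get(area, 0) >= minimum
        for area, minimum in DIS_THRESHOLDS.items()
    )
    return involved >= 2

print(dissemination_in_space({"periventricular": 4, "spinal_cord": 1}))  # True
print(dissemination_in_space({"periventricular": 2, "optic_nerve": 0}))  # False
```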
The group did not propose any significant changes to the criteria for dissemination in time. They did note, however, that the presence of nonenhancing black holes should not be considered as a potential alternative criterion to show dissemination in time in adult patients.
The committee also supported the existing recommendations that children age 11 or older with nonacute disseminated encephalomyelitis-like presentation should be diagnosed with the same criteria for dissemination in time and space as are applied to adults. “Several studies have confirmed that the 2010 McDonald criteria perform better than or similarly to previously proposed pediatric MS criteria for diagnosis of children with nonacute disseminated encephalomyelitis presentations and pediatric patients older than 11 years, and the consensus group therefore recommend caution when using these criteria in children younger than 11 years,” they said.
Other recommendations include that there be no distinction required between symptomatic and asymptomatic MRI lesions for diagnosing dissemination in time or space; that the whole spinal cord be imaged to define dissemination in space, particularly in patients who do not fulfill the brain MRI criteria; and that the same criteria for dissemination in space be used for primary progressive MS and relapse-onset MS, with CSF results considered for clinically uncertain cases of primary progressive MS.
—Bianca Nogrady
Suggested Reading
Filippi M, Rocca MA, Ciccarelli O, et al. MRI criteria for the diagnosis of multiple sclerosis: MAGNIMS consensus guidelines. Lancet Neurol. 2016 Jan 25 [Epub ahead of print].
Phenytoin May Offer Neuroprotection to Patients With Optic Neuritis
Patients with acute demyelinating optic neuritis who received phenytoin lost 30% less of their retinal nerve fiber layer (RNFL) than did placebo-treated patients in a randomized phase II study published online ahead of print January 25 in Lancet Neurology. “The results of this clinical trial support the concept of neuroprotection using phenytoin to inhibit voltage-gated sodium channels in patients with acute optic neuritis,” said Rhian Raftopoulos, MD, of the National Hospital for Neurology and Neurosurgery in London, and coauthors.
In the study of 86 people with acute optic neuritis, investigators randomized 29 participants to receive 4 mg/kg/day of oral phenytoin, 13 participants to receive 6 mg/kg/day of oral phenytoin, and 44 participants to receive placebo for three months. All participants were randomized within 14 days of vision loss. One-third of the patients had previously been diagnosed with multiple sclerosis or were diagnosed at presentation, and 74% had at least one brain lesion on MRI.
Treatment with phenytoin was associated with a decline in mean RNFL thickness in the affected eye from 130.62 μm at baseline to 81.46 μm at six months, compared with a decline from 125.20 μm to 74.29 μm in the placebo group, representing a statistically significant adjusted mean difference of 7.15 μm.
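As a side note on the arithmetic (ours, computed from the means reported above), the unadjusted losses differ far less than the adjusted estimate:

```latex
\Delta_{\mathrm{phenytoin}} = 130.62 - 81.46 = 49.16\ \mu\mathrm{m},
\qquad
\Delta_{\mathrm{placebo}} = 125.20 - 74.29 = 50.91\ \mu\mathrm{m}
```

The raw between-group difference in loss is only 1.75 μm; the 7.15 μm advantage, and with it the 30% figure, emerges from the prespecified adjusted analysis, which accounts for the phenytoin group's higher baseline RNFL thickness, an imbalance also discussed in the accompanying editorial.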
The researchers also noted a significant 34% reduction in macular volume loss in the treatment arm, compared with placebo, representing an adjusted mean difference of 0.20 mm3. However, the treatment had no significant effect on low-contrast visual acuity and visual evoked potentials.
The most common adverse event in the treatment arm was maculopapular rash, which was judged as severe in one patient treated with phenytoin.
“The absence of regular, early outcome assessments around one to two months after initiation of treatment makes it hard to interpret the results because they would have helped to rule out a primarily anti-inflammatory effect of the treatment by tracking RNFL swelling and possible optic nerve inflammation, especially given that there was higher baseline RNFL thickness and worse low-contrast visual acuity in the patients who received phenytoin,” said Shiv Saidha, MBBCh, Assistant Professor of Neurology, and Peter A. Calabresi, MD, Director of the Division of Neuroimmunology, both at Johns Hopkins University in Baltimore, in an accompanying editorial. “If the true RNFL thickness at baseline in the affected eye of patients in the phenytoin group was higher than those in the placebo group, it could have accounted for the findings, even though the investigators made a prespecified adjustment for it.
“Although the results of this study are a major advancement and undeniably encouraging, future studies need to include more frequent optical coherence tomography (OCT) sampling, as well as more detailed OCT-segmentation-derived retinal measures such as ganglion cell plus inner plexiform layer thickness, which do not swell during acute optic neuritis, mitigating the need for statistical corrections involving the unaffected eye,” they concluded.
—Bianca Nogrady
Suggested Reading
Raftopoulos R, Hickman SJ, Toosy A, et al. Phenytoin for neuroprotection in patients with acute optic neuritis: a randomised, placebo-controlled, phase 2 trial. Lancet Neurol. 2016 Jan 25 [Epub ahead of print].
Saidha S, Calabresi PA. Phenytoin in acute optic neuritis: neuroprotective or not? Lancet Neurol. 2016 Jan 25 [Epub ahead of print].
Brain Changes Precede Symptoms of Familial Alzheimer’s Disease by Decades
Inflammatory brain changes appear to develop 20 years before the onset of symptoms in people with some familial forms of Alzheimer’s disease, according to a longitudinal analysis of PET imaging biomarkers of astrocyte activation, amyloid-beta accumulation, and glucose metabolism in the brain. The analysis was published online ahead of print January 26 in Brain.
Increase and Decrease in Amyloid Beta
Imaging studies demonstrated a sharp elevation in astrocyte activation, a response to neuronal insult, that arose before amyloid-beta began to accumulate in the brain and subsequently declined. These changes eventually were followed by a steady and spreading decrease in glucose metabolism that reached the hippocampus two years before symptoms appeared, said Elena Rodriguez-Vieitez, PhD, Senior Researcher at the Karolinska Institutet in Stockholm, and her colleagues.
The findings suggest that the disease may begin with an unknown insult that, in familial forms of Alzheimer’s disease, stimulates reactive astrocytosis. The data also suggest that the cells themselves could be a legitimate therapeutic target.
The identity of the initial insult is still unknown, but it may be a reaction to soluble amyloid-beta. “Recent studies have supported the hypothesis that astrocytes have a beneficial role contributing to amyloid-beta clearance, but also that excess amyloid-beta can lead to oxidative stress and damage, and as a consequence to reduced astrocyte functionality, leading to reactive changes and decreased neuronal support, and thereby contributing to neurodegeneration,” said Dr. Rodriguez-Vieitez.
The significance of the decline in astrocyte activation with disease progression also is unknown. It may be “an indication of a reduction in a certain type of astrocyte activation or functionality, a change of astrocyte activation phenotype, or possibly ‘astrodegeneration’ and astrocyte cell loss itself, as has been reported toward the late stages of Alzheimer’s disease.”
An Analysis of Three PET Tracers
The study tracked brain PET imaging changes in 52 people. Of the total population, 27 people came from families with autosomal dominant Alzheimer’s disease (ADAD) mutations, including mutations in the presenilin 1 and amyloid precursor protein genes, and 25 people had sporadic Alzheimer’s disease or mild cognitive impairment (MCI). Researchers used Pittsburgh compound B (PiB) to detect amyloid-beta, 18F-fluorodeoxyglucose (FDG) to measure glucose metabolism, and 11C-deuterium-L-deprenyl (DED) to detect astrocyte activation. Half of the participants had a follow-up visit after a mean of 2.8 years.
The ADAD cohort included 16 noncarriers, four symptomatic carriers, and seven presymptomatic carriers who were a mean of 10 years from expected symptom onset. The sporadic cohort included 13 amyloid-positive patients with MCI, four amyloid-negative patients with MCI, and eight patients with a diagnosis of Alzheimer’s disease. The 16 ADAD noncarriers served as a control group for PiB retention and FDG uptake, and 14 age-matched healthy controls served as controls for DED binding.
At baseline, all subjects underwent PET imaging with all three tracers, as well as CSF biomarker analysis and neuropsychologic assessments. Half of the subjects had additional clinical and imaging studies approximately three years later. The investigators used historical data to estimate the age of symptom onset in the subjects with ADAD, and thus extrapolated the imaging findings to reflect the pathologic course.
Form of Disease May Affect Astrocyte Activation
At baseline, the researchers found significant between-group differences on all three imaging measures. PiB-positive patients with MCI and patients with Alzheimer’s disease from the sporadic and ADAD cohorts had the highest PiB retention. Presymptomatic carriers, however, had significantly higher DED binding than did any of the other groups. The increased astrocytosis indicated by higher DED binding was most pronounced in four of the 12 brain regions surveyed: the anterior cingulate cortex, the thalamus, and the frontal and parietal regions. DED binding in the sporadic Alzheimer’s disease and ADAD groups did not differ significantly from that in the healthy controls, except for a trend toward higher binding in the frontal region.
In most brain regions at baseline, FDG uptake was greater in presymptomatic carriers and healthy controls than in the PiB-positive MCI, sporadic Alzheimer’s disease, and ADAD groups, but there were no significant differences between the presymptomatic carriers and healthy controls, or among the latter three groups. FDG uptake at baseline was lower in the left parietal region of presymptomatic carriers than in the healthy controls. At follow-up, this difference had spread to the left posterior cingulate cortex and other parietal regions, as well as to the left middle frontal gyrus and the bilateral cuneus. Presymptomatic carriers had significantly greater FDG uptake in the frontal and temporal regions and the right thalamus than did PiB-positive patients with MCI, a difference that persisted through follow-up.
The investigators used a linear mixed-effects model to estimate how astrocytosis, amyloid deposition, and glucose metabolism would change over time in the ADAD and sporadic Alzheimer’s disease groups. Comparing ADAD carriers to noncarriers, the model determined that amyloid began to aggregate in the putamen at about 17 years before symptom onset in carriers. Plaques subsequently spread outward, reaching the caudate nucleus and the anterior and posterior cingulate cortices by 15 years before symptom onset, and reaching the frontal cortex and other cortical regions at about 14 years before onset.
Conversely, the model found that DED binding in ADAD presymptomatic carriers steadily declined from its peak at about 20 years before expected symptom onset. By the time symptoms appeared, astrocyte activation was similar to that observed in noncarriers. Among patients with sporadic Alzheimer’s disease or MCI, the researchers found no significant temporal changes in DED binding.
Presymptomatic carriers also showed significant linear changes in glucose utilization. FDG uptake in the parietal and temporal regions began to decline at about seven years before symptom onset. Glucose metabolism was maintained in the caudate nucleus, thalamus, and hippocampus until it began to decline at about two years before symptom onset.
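The trajectory estimates above come from a linear mixed-effects model fit to the repeated PET measurements. For readers unfamiliar with the technique, the following is a minimal Python sketch of such a model using the statsmodels library; the input file and column names (ded_binding, years_to_onset, carrier, subject_id) are hypothetical, and the authors’ actual model specification may differ.

```python
# Minimal sketch of a linear mixed-effects model for longitudinal PET data.
# All file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pet_longitudinal.csv")  # one row per subject per scan

# Fixed effects: estimated years to symptom onset, carrier status, and their
# interaction (does the slope over time differ between carriers and
# noncarriers?); a random intercept per subject accounts for the repeated
# baseline and follow-up scans.
model = smf.mixedlm(
    "ded_binding ~ years_to_onset * carrier",
    data=df,
    groups=df["subject_id"],
)
result = model.fit()
print(result.summary())
```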
Patients with amyloid-positive MCI or sporadic Alzheimer’s disease showed significant temporal increases in amyloid deposition and declines in glucose metabolism in the anterior cingulate cortex, but no significant changes in astrocyte activation.
“Possible explanations for this finding include the heterogeneity of disease stage in MCI patients, the shorter time span investigated in the sporadic patients compared to the ADAD participants, or a possibly different progression of astrocyte activation in the sporadic compared to the autosomal dominant forms,” said the authors.
—Michele G. Sullivan
Suggested Reading
Rodriguez-Vieitez E, Saint-Aubert L, Carter SF, et al. Diverging longitudinal changes in astrocytosis and amyloid PET in autosomal dominant Alzheimer’s disease. Brain. 2016 Jan 26 [Epub ahead of print].
Opicapone May Reduce Off Time for Patients With Parkinson’s Disease
Administering 50 mg of opicapone as an adjunct to levodopa treatment decreases the amount of off time by approximately 61 minutes, compared with placebo, for patients with Parkinson’s disease and end-of-dose motor fluctuations, according to research published in the February issue of Lancet Neurology. Data indicate that the drug is safe, well tolerated, and noninferior to entacapone for this indication.
Opicapone is “the only once-daily catechol-O-methyltransferase (COMT) inhibitor to provide a mean reduction in time in the off state that is clinically relevant,” said Joaquim J. Ferreira, MD, Professor of Neurology and Clinical Pharmacology at the University of Lisbon, and colleagues. Administering the drug once daily could simplify a patient’s drug regimen by permitting the physician to decrease the total daily levodopa dose, increase the dosing interval, and reduce the number of intakes, thereby maximizing the benefit of therapy, he added.
Comparing Opicapone, Entacapone, and Placebo
Oral levodopa has a half-life of 60 to 90 minutes, and this short duration is linked with end-of-dose motor fluctuations. COMT inhibitors prolong the plasma elimination half-life of levodopa and decrease peak–trough variations in its concentration. Entacapone, a COMT inhibitor, provides moderate reductions in daily off time, but must be administered with each dose of levodopa. Neurologists thus have sought a more effective COMT inhibitor that can be used easily in clinical practice.
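The kinetic rationale can be illustrated with a toy first-order elimination model; the numbers below are hypothetical and chosen only to show the direction of the effect, not actual levodopa pharmacokinetics.

```python
# Toy illustration: with first-order decay C(t) = C0 * 2**(-t / t_half) and a
# fixed dosing interval, a longer half-life raises the trough relative to the
# peak, shrinking peak-trough variation. All numbers are hypothetical.
dosing_interval_min = 240  # hypothetical four-hour dosing schedule

for t_half in (60, 90, 135):  # minutes; 135 mimics a COMT-prolonged half-life
    trough_to_peak = 2 ** (-dosing_interval_min / t_half)
    print(f"t_half = {t_half} min -> trough/peak = {trough_to_peak:.2f}")
# 0.06 at 60 min vs 0.29 at 135 min: the longer the half-life, the flatter
# the concentration profile between doses.
```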
Dr. Ferreira and colleagues assessed the safety and efficacy of opicapone as an adjunct to levodopa, compared with placebo and entacapone, in patients with Parkinson’s disease and motor fluctuations. Eligible participants had had a clinical diagnosis of Parkinson’s disease for at least three years, a Hoehn and Yahr stage of 1 to 3 during the on state, and at least one year of clinical improvement with levodopa treatment. People who had used entacapone previously, had significant dyskinesia disability, or had severe or unpredictable off periods were excluded from the study.
Patients were randomized by computer, in equal groups, to once-daily opicapone (5 mg, 25 mg, or 50 mg), placebo, or 200 mg of entacapone with every levodopa intake. Participants and investigators were blinded to treatment allocation throughout the study, and opicapone capsules and entacapone tablets were overencapsulated to maintain blinding.
Doses of study drugs were given concomitantly with each levodopa intake. Patients in the opicapone groups received placebo for the daytime doses and active treatment for the bedtime dose. Patients in the entacapone group took the active treatment during the day and placebo as the bedtime dose. Investigators assessed participants at screening, baseline, and at five subsequent time points.
The study’s primary end point was the change from baseline to the end of study treatment in absolute off time, as assessed by daily patient diaries. Secondary end points included the proportion of patients who had at least a one-hour reduction in absolute off time and the proportion who had at least a one-hour increase in absolute total on time between baseline and the end of study treatment.
For statistical analysis, population sets were defined as the full analysis set, which included all randomly assigned patients who took at least one dose of study drug and had at least one assessment of time in the off state after baseline; the per-protocol set, which included all patients in the full analysis set who did not have any major protocol deviations; and the safety set, which included all patients who received at least one dose of study drug.
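As a rough illustration of how these nested analysis populations might be derived from a trial dataset, the following pandas sketch applies the three definitions in sequence; the file and column names (n_doses, n_off_assessments, major_deviation) are hypothetical placeholders.

```python
# Deriving the nested analysis sets described above. All file and column
# names are hypothetical; major_deviation is assumed to be a boolean column.
import pandas as pd

patients = pd.read_csv("randomized_patients.csv")  # one row per patient

# Safety set: received at least one dose of study drug.
safety_set = patients[patients["n_doses"] >= 1]

# Full analysis set: dosed, with at least one post-baseline assessment of
# time in the off state.
full_analysis_set = safety_set[safety_set["n_off_assessments"] >= 1]

# Per-protocol set: full analysis set minus major protocol deviations.
per_protocol_set = full_analysis_set[~full_analysis_set["major_deviation"]]
```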
Opicapone Was Noninferior to Entacapone
The researchers enrolled 600 patients, of whom 121 received placebo, 122 received 200 mg of entacapone, 122 received 5 mg of opicapone, 119 received 25 mg of opicapone, and 116 received 50 mg of opicapone. The full analysis included 590 patients, and the per-protocol analysis included 537 patients. In all, 542 patients completed the study. Patient demographics, baseline Parkinson’s disease characteristics, and treatment history did not differ between the treatment groups in the safety analysis.
In the full analysis, the adjusted mean change from baseline in absolute off time was –116.8 minutes in the opicapone 50 mg group, compared with –96.3 minutes in the entacapone group, –91.3 minutes in the opicapone 5 mg group, –85.9 minutes in the opicapone 25 mg group, and –56.0 minutes in the placebo group. The per-protocol analysis yielded similar results.
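The roughly 61-minute benefit cited at the start of this article is the placebo-subtracted change for the 50-mg dose, which can be read directly from the figures above:

```python
# Placebo-subtracted changes in off time, using the adjusted means reported
# above (in minutes).
changes = {
    "opicapone 50 mg": -116.8,
    "entacapone 200 mg": -96.3,
    "opicapone 5 mg": -91.3,
    "opicapone 25 mg": -85.9,
    "placebo": -56.0,
}
for arm, change in changes.items():
    print(f"{arm}: {change - changes['placebo']:.1f} min vs placebo")
# opicapone 50 mg comes out to -60.8 min, ie, about 61 minutes less off time.
```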
The investigators tested only the 50-mg dose of opicapone for noninferiority with respect to absolute off time. This dose was superior to placebo and noninferior to entacapone. The researchers found no significant differences between placebo and the 5-mg or 25-mg doses of opicapone. Entacapone was superior to placebo.
Compared with placebo, the proportion of patients with a reduction in off time of at least one hour was significantly higher among patients who received 25 mg or 50 mg of opicapone, and the proportion of patients with an increase in on time of at least one hour was significantly higher among patients who received 50 mg of opicapone. No significant differences in these off- and on-state responder rates were noted for entacapone versus placebo. Results for the other secondary end points supported those of the primary analysis.
Adverse Events Were Uncommon
The percentage of patients who discontinued because of treatment-emergent adverse events was low and similar across the treatment groups. The most common treatment-emergent adverse events leading to discontinuation were diarrhea, visual hallucinations, and dyskinesia. Dyskinesia was the most frequently reported treatment-emergent adverse event possibly related to the study drug, and its incidence was highest in the opicapone groups. Across all groups, approximately 80% of treatment-emergent dyskinesias occurred in patients who already had dyskinesia at baseline. The incidence of serious treatment-emergent adverse events was low (ie, 7% or less) in all groups, and 35% of these events were judged to be unrelated to the study drug.
“The beneficial effects of opicapone 50 mg at reducing the time in the off state were accompanied by a corresponding increase in time in the on state without troublesome dyskinesia, whereas the duration of time in the on state with troublesome dyskinesia did not change,” said Dr. Ferreira. The study results “suggest an overall positive risk-to-benefit ratio for the use of opicapone in patients with Parkinson’s disease with end-of-dose motor fluctuations,” he added. Results of the authors’ open-label extension study will be published in the future.
—Erik Greb
Suggested Reading
Ferreira JJ, Lees A, Rocha JF, et al. Opicapone as an adjunct to levodopa in patients with Parkinson’s disease and end-of-dose motor fluctuations: a randomised, double-blind, controlled trial. Lancet Neurol. 2016;15(2):154-165.
Is Task-Oriented Stroke Rehab Better Than Usual Rehab?
A structured, task-oriented rehabilitation program, compared with usual rehabilitation, did not result in better motor function or recovery after 12 months for patients with moderate upper-extremity impairment following a stroke, according to a study published in the February 9 issue of JAMA. “The findings from this study provide important new guidance to clinicians who must choose the best treatment for patients with stroke,” according to the study authors. “The results suggest that usual and customary community-based therapy, provided during the typical outpatient rehabilitation time window by licensed therapists, improves upper-extremity motor function and that more than doubling the dose of therapy does not lead to meaningful differences in motor outcomes.”
As payers pressure physicians to reduce inpatient rehabilitation, outpatient rehabilitation may assume greater importance for patients recovering from stroke. Clinicians lack evidence for determining the best type and amount of motor therapy during outpatient rehabilitation, however. Earlier clinical trials had suggested that higher doses of task-oriented training were superior to current clinical practice for patients with stroke with upper-extremity motor deficits.
To test stroke rehabilitation programs, Carolee J. Winstein, PhD, Professor of Biokinesiology and Physical Therapy at the University of Southern California in Los Angeles, and colleagues randomly assigned 361 participants with moderate motor impairment following a stroke to structured, task-oriented upper-extremity training (n = 119), dose-equivalent occupational therapy (n = 120), or monitoring-only occupational therapy (n = 122). The dose-equivalent occupational therapy group was prescribed 30 one-hour sessions over 10 weeks; the monitoring-only group received usual occupational therapy without a specified dose. Participants were recruited from seven US hospitals, treated in the outpatient setting, and tested at 12 months on various measures of motor function and recovery.
Among the 361 patients (average age, 60.7 years), 304 (84%) completed the 12-month primary outcome assessment. The researchers found no group differences in upper-extremity motor performance; the structured, task-oriented motor therapy was not superior to usual outpatient occupational therapy delivered for the same number of hours, indicating no additional benefit from the evidence-based, intensive, restorative therapy program. In addition, there was no advantage to providing more than twice the dose of therapy (an average of 27 hours), compared with the average of 11 hours received by the monitoring-only group. Substantially more therapy time thus was not associated with additional motor restoration.
“These findings do not support [the] superiority of this task-oriented rehabilitation program for patients with motor stroke and moderate upper-extremity impairment,” the authors concluded.
Suggested Reading
Winstein CJ, Wolf SL, Dromerick AW, et al. Effect of a task-oriented rehabilitation program on upper extremity recovery following motor stroke: the ICARE randomized clinical trial. JAMA. 2016;315(6):571-581.
A structured, task-oriented rehabilitation program, compared with usual rehabilitation, did not result in better motor function or recovery after 12 months for patients with moderate upper-extremity impairment following a stroke, according to a study published in the February 9 issue of JAMA. “The findings from this study provide important new guidance to clinicians who must choose the best treatment for patients with stroke,” according to the study authors. “The results suggest that usual and customary community-based therapy, provided during the typical outpatient rehabilitation time window by licensed therapists, improves upper-extremity motor function and that more than doubling the dose of therapy does not lead to meaningful differences in motor outcomes.”
As payers pressure physicians to reduce inpatient rehabilitation, outpatient rehabilitation may have greater importance for patients recovering from stroke. Clinicians lack evidence for determining the best type and amount of motor therapy during outpatient rehabilitation, however. Clinical trials suggest that higher doses of task-oriented training are superior to current clinical practice for patients with stroke with upper-extremity motor deficits.
To test stroke rehabilitation programs, Carolee J. Winstein, PhD, Professor of Biokinesiology and Physical Therapy at the University of Southern California in Los Angeles, and colleagues randomly assigned 361 participants with moderate motor impairment following a stroke to structured, task-oriented upper extremity training (n = 119), dose-equivalent occupational therapy (n = 120), or monitoring-only occupational therapy (n = 122). The dose-equivalent occupational therapy group was prescribed 30 one-hour sessions over 10 weeks; the monitoring-only occupational therapy group was only monitored, without specification of dose. Participants were recruited from seven US hospitals, treated in the outpatient setting, and tested at 12 months on various measures of motor function and recovery.
Among the 361 patients (average age, 60.7), 304 (84%) completed the 12-month primary outcome assessment. The researchers found no group differences in upper-extremity motor performance; specifically, the structured, task-oriented motor therapy was not superior to usual outpatient occupational therapy for the same number of hours, showing no additional benefit for an evidence-based, intensive, restorative therapy program. In addition, there was no advantage to providing more than twice the average dose (average, 27 hours) of therapy, compared with the average 11 hours received by the monitoring-only group. Substantially more therapy time thus was not associated with additional motor restoration.
“These findings do not support [the] superiority of this task-oriented rehabilitation program for patients with motor stroke and moderate upper-extremity impairment,” the authors concluded.
A structured, task-oriented rehabilitation program, compared with usual rehabilitation, did not result in better motor function or recovery after 12 months for patients with moderate upper-extremity impairment following a stroke, according to a study published in the February 9 issue of JAMA. “The findings from this study provide important new guidance to clinicians who must choose the best treatment for patients with stroke,” according to the study authors. “The results suggest that usual and customary community-based therapy, provided during the typical outpatient rehabilitation time window by licensed therapists, improves upper-extremity motor function and that more than doubling the dose of therapy does not lead to meaningful differences in motor outcomes.”
As payers pressure physicians to reduce inpatient rehabilitation, outpatient rehabilitation may have greater importance for patients recovering from stroke. Clinicians lack evidence for determining the best type and amount of motor therapy during outpatient rehabilitation, however. Clinical trials suggest that higher doses of task-oriented training are superior to current clinical practice for patients with stroke with upper-extremity motor deficits.
To test stroke rehabilitation programs, Carolee J. Winstein, PhD, Professor of Biokinesiology and Physical Therapy at the University of Southern California in Los Angeles, and colleagues randomly assigned 361 participants with moderate motor impairment following a stroke to structured, task-oriented upper extremity training (n = 119), dose-equivalent occupational therapy (n = 120), or monitoring-only occupational therapy (n = 122). The dose-equivalent occupational therapy group was prescribed 30 one-hour sessions over 10 weeks; the monitoring-only occupational therapy group was only monitored, without specification of dose. Participants were recruited from seven US hospitals, treated in the outpatient setting, and tested at 12 months on various measures of motor function and recovery.
Among the 361 patients (average age, 60.7 years), 304 (84%) completed the 12-month primary outcome assessment. The researchers found no differences between groups in upper-extremity motor performance; the structured, task-oriented therapy was not superior to the same number of hours of usual outpatient occupational therapy, indicating no additional benefit from the evidence-based, intensive, restorative program. In addition, there was no advantage to providing more than twice the dose of therapy (average, 27 hours), compared with the average of 11 hours received by the monitoring-only group. Substantially more therapy time thus was not associated with additional motor restoration.
“These findings do not support [the] superiority of this task-oriented rehabilitation program for patients with motor stroke and moderate upper-extremity impairment,” the authors concluded.
Suggested Reading
Winstein CJ, Wolf SL, Dromerick AW, et al. Effect of a task-oriented rehabilitation program on upper extremity recovery following motor stroke: the ICARE randomized clinical trial. JAMA. 2016;315(6):571-581.
Prestroke Antiplatelet Therapy Confers Benefits and Risks
Among patients with an acute ischemic stroke treated with IV t-PA, those receiving antiplatelet therapy prior to the stroke had a higher risk for symptomatic intracranial hemorrhage but better functional outcomes than those who were not receiving antiplatelet therapy, according to a report published in the January issue of JAMA Neurology.
“This study represents the largest clinical experience of stroke thrombolysis in patients receiving antiplatelet therapy before stroke onset,” the study authors wrote. “The use of IV t-PA in patients receiving antiplatelet therapy was associated with a slightly increased risk for symptomatic intracranial hemorrhage, especially among patients receiving aspirin or dual antiplatelet therapy. However, the relatively small excess risk did not translate into higher mortality and appeared to be associated with favorable functional outcomes at discharge. Therefore, the overall benefits of thrombolytic therapy may outweigh the risks in eligible patients with ischemic stroke receiving antiplatelet therapy before stroke onset.”
A Large, Nationwide Sample
In their observational study, lead author Ying Xian, MD, PhD, a fellow at Duke Clinical Research Institute in Durham, North Carolina, and colleagues used data from the American Heart Association and American Stroke Association’s Get With the Guidelines–Stroke registry to assess the risks and benefits associated with prestroke antiplatelet therapy. The registry included 85,072 adults with ischemic stroke who received IV t-PA in 1,545 registry hospitals from January 1, 2009, through March 31, 2015.
Main study outcomes included symptomatic intracranial hemorrhage, in-hospital mortality, discharge ambulatory status, and modified Rankin Scale score.
Patients receiving antiplatelet therapy (n = 38,844, 45.7% of the total cohort) were older and had a higher prevalence of cardiovascular risk factors than those not receiving antiplatelet therapy before admission for stroke (n = 46,228, 54.3% of the total cohort). The unadjusted rate of symptomatic intracranial hemorrhage was higher in patients receiving prestroke antiplatelet therapy (5.0% vs 3.7%). After risk adjustment, prior use of antiplatelet agents remained associated with higher odds of symptomatic intracranial hemorrhage, compared with no prior therapy (adjusted odds ratio, 1.18).
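As a back-of-the-envelope check on how the reported rates relate to the published odds ratio, the short sketch below recomputes the crude (unadjusted) odds ratio from the 5.0% and 3.7% rates quoted above. This is a minimal illustration, not the authors' method: the helper function and variable names are ours, and the adjusted odds ratio of 1.18 comes from the authors' multivariable risk model, which this sketch does not attempt to reproduce.

```python
# Illustrative check: crude (unadjusted) odds ratio for symptomatic
# intracranial hemorrhage (sICH), reconstructed from the rates reported
# in the article. All names here are ours; the adjusted OR of 1.18 is
# from the authors' risk model, which is not reproduced here.

def odds(rate: float) -> float:
    """Convert a proportion to odds."""
    return rate / (1.0 - rate)

sich_antiplatelet = 0.050  # sICH rate, prestroke antiplatelet therapy
sich_none = 0.037          # sICH rate, no prestroke antiplatelet therapy

crude_or = odds(sich_antiplatelet) / odds(sich_none)
print(f"Crude odds ratio for sICH: {crude_or:.2f}")  # ~1.37
# Risk adjustment attenuates this toward the published value of 1.18,
# consistent with antiplatelet users being older with more risk factors.
```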
Among patients enrolled on October 1, 2012, or later, the highest odds of symptomatic intracranial hemorrhage were found in 15,116 patients receiving aspirin alone (adjusted odds ratio, 1.19) and 2,397 patients receiving aspirin and clopidogrel (adjusted odds ratio, 1.47). The adjusted risk for in-hospital mortality was similar between patients who were receiving prior antiplatelet therapy and those who were not. However, patients receiving antiplatelet therapy had a greater risk-adjusted likelihood of independent ambulation (46.6% vs 42.1%; adjusted odds ratio, 1.13) and better functional outcomes (ie, modified Rankin Scale score of 0 or 1) at discharge (27.8% vs 24.1%; adjusted odds ratio, 1.14).
In their analyses of secondary outcomes, the researchers found that the rates of life-threatening or serious symptomatic intracranial hemorrhage and of any t-PA complications were higher in patients receiving prior antiplatelet therapy. In the unadjusted analysis, fewer patients who received antiplatelet therapy were discharged home (41.4% vs 45.6%). After risk adjustment, however, their odds of being discharged home were higher than those of patients without prior antiplatelet use (adjusted odds ratio, 1.13). No statistically significant differences were seen between the groups in discharge to hospice (unadjusted rates, 6.3% vs 4.6%; adjusted odds ratio, 0.97) or to a skilled nursing facility (unadjusted rates, 17.3% vs 13.9%; adjusted odds ratio, 0.98).
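The reversal between the unadjusted and adjusted discharge-home results is a classic confounding pattern: antiplatelet users were older and sicker, so the crude comparison runs against them. The rough sketch below, using only the rates quoted above (the helper and variable names are ours; the published adjusted odds ratio of 1.13 comes from the authors' model and is not reproduced here), shows how the crude odds ratio points the other way.

```python
# Rough illustration: the crude (unadjusted) odds ratio for discharge
# home, computed from the rates reported in the article. All names are
# ours; the adjusted OR of 1.13 is from the authors' multivariable model.

def odds(rate: float) -> float:
    """Convert a proportion to odds."""
    return rate / (1.0 - rate)

home_antiplatelet = 0.414  # discharged home, prestroke antiplatelet therapy
home_none = 0.456          # discharged home, no prestroke antiplatelet therapy

crude_or = odds(home_antiplatelet) / odds(home_none)
print(f"Crude odds ratio for discharge home: {crude_or:.2f}")  # ~0.84
# After adjustment for age and comorbidities, the published odds ratio
# flips above 1 (1.13), favoring the antiplatelet group.
```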
Risk–Benefit Calculation
“Given the high prevalence of coronary artery disease and the widespread use of percutaneous coronary intervention, many patients are receiving aspirin or dual antiplatelet therapy at the time of stroke onset,” Dr. Xian and colleagues wrote. According to the researchers, the excess risk for symptomatic intracranial hemorrhage among patients who receive antiplatelet therapy before a stroke and are then treated with IV t-PA is relatively small and must be weighed against the potential benefits in terms of improved functional outcomes. They concluded that, in eligible patients receiving prestroke antiplatelet therapy, the functional benefits of thrombolysis appear to outweigh the slightly increased risk of intracranial bleeding.
—Glenn S. Williams
Suggested Reading
Xian Y, Federspiel JJ, Grau-Sepulveda M, et al. Risks and benefits associated with prestroke antiplatelet therapy among patients with acute ischemic stroke treated with intravenous tissue plasminogen activator. JAMA Neurol. 2016;73(1):50-59.