Advance directives for psychiatric care reduce compulsory admissions

Providing peer or community health workers to help psychiatric patients complete psychiatric advance directives (PAD) – which govern care in advance of a mental health crisis – is associated with a significant reduction in compulsory hospital admissions, new research shows.

Results of a randomized trial showed that the peer worker PAD group had a 42% reduction in the odds of compulsory admission over the following 12 months. This group also had lower symptom scores, greater rates of recovery, and increased empowerment, compared with patients assigned to usual care.

In addition to proving that PADs are effective in reducing compulsory admission, the results show that facilitation by peer workers is relevant, study investigator Aurélie Tinland, MD, PhD, Faculté de Médecine Timone, Aix-Marseille University, Marseille, France, told delegates attending the virtual European Psychiatric Association (EPA) 2022 Congress. The study was simultaneously published online in JAMA Psychiatry.

However, Dr. Tinland noted that more research that includes “harder to reach” populations is needed. In addition, greater use of PADs is also key to reducing compulsory admissions.
 

‘Most coercive’ country

The researchers note that respect for patient autonomy is a strong pillar of health care, such that “involuntary treatment should be unusual.” However, they point out that “compulsory psychiatric admissions are far too common in countries of all income levels.”

In France, said Dr. Tinland, 24% of psychiatric hospitalizations are compulsory. The country is ranked the sixth “most coercive” country in the world, and there are concerns about human rights in French psychiatric facilities.

She added that advance care statements are the most efficient tool for reducing coercion, with one study suggesting they could cut rates by 25%, compared with usual care.

However, she noted there is an “asymmetry” between medical professionals and patients and a risk of “undue influence” when clinicians facilitate the completion of care statements.

To examine the impact on clinical outcomes of peer-worker facilitated PADs, the researchers studied adults with a diagnosis of schizophrenia, bipolar I disorder, or schizoaffective disorder who were admitted to a psychiatric hospital within the previous 12 months. Peer workers are individuals who have lived experience with mental illness and help inform and guide current patients about care options in the event of a mental health crisis.

Study participants were randomly assigned 1:1 to an intervention group or a usual care control group. The intervention group received a PAD document and was assigned a peer worker, while the usual care group received comprehensive information about the PAD concept at study entry and was free to complete a PAD but was not connected with a peer worker.

The PAD document included information about future treatment and support preferences, early signs of relapse, and coping strategies. Participants could meet the peer worker in a place of their choice and be supported in drafting the document and in sharing it with health care professionals.

In all, 394 individuals completed the study. The majority (61%) of participants were male and 66% had completed post-secondary education. Schizophrenia was diagnosed in 45%, bipolar I disorder in 36%, and schizoaffective disorder in 19%.

Participants in the intervention group were significantly younger than those in the control group, with a mean age of 37.4 years versus 41 years (P = .003), and were less likely to have one or more somatic comorbidities, at 61.2% versus 69.2%.

A PAD was completed by 54.6% of individuals in the intervention group versus 7.1% of controls (P < .001). The PAD was written with peer worker support by 41.3% of those in the intervention group and by 2% of controls. Of those who completed a PAD, 75.7% met care facilitators, and 27.1% used it during a crisis over the following 12 months.

Results showed that the rate of compulsory admission was significantly lower in the peer worker PAD group than among controls, at 27% versus 39.9% (odds ratio, 0.58; P = .007).

Participants in the intervention group had lower symptom scores on the modified Colorado Symptom Score than usual care patients (effect size, -0.20; P = .03) and higher scores on the Empowerment Scale (effect size, 0.30; P = .003).

Scores on the Recovery Assessment Scale were also significantly higher in the peer worker PAD group versus controls (effect size, 0.44; P < .001). There were no significant differences, however, in overall admission rates, the quality of the therapeutic alliance, or quality of life.

Putting patients in the driver’s seat

Commenting on the findings, Robert Dabney Jr., MA, MDiv, peer apprentice program manager at the Depression and Bipolar Support Alliance, Chicago, said the study “tells us there are many benefits to completing a psychiatric advance directive, but perhaps the most powerful one is putting the person receiving mental health care in the driver’s seat of their own recovery.”

However, he noted that “many people living with mental health conditions don’t know the option exists to decide on their treatment plan in advance of a crisis.”

“This is where peer support specialists can come in. Having a peer who has been through similar experiences and can guide you through the process is as comforting as it is empowering. I have witnessed and experienced firsthand the power of peer support,” he said.

“It’s my personal hope and the goal of the Depression and Bipolar Support Alliance to empower more people to either become peer support specialists or seek out peer support services, because we know it improves and even saves lives,” Mr. Dabney added.

Virginia A. Brown, PhD, department of psychiatry & behavioral sciences, University of Texas at Austin Dell Medical School, noted there are huge differences between the health care systems in France and the United States.

She explained that two of the greatest barriers to PADs in the United States are that, until 2016, filling one out was not billable and that “practitioners don’t know anything about advanced care plans.”

Dr. Brown said her own work shows that individuals who support patients during a crisis believe it would be “really helpful if we had some kind of document that we could share with the health care system that says: ‘Hey, look, I’m the designated person to speak for this patient, they’ve identified me through a document.’ So, people were actually describing a need for this document but didn’t know that it existed.”

Another problem is that in the United States, hospitals operate in a “closed system” and cannot talk to an unrelated hospital or to the police department “to get information to those first responders during an emergency about who to talk to about their wishes and preferences.”

“There are a lot of hurdles that we’ve got to get over to make a more robust system that protects the autonomy of people who live with serious mental illness,” Dr. Brown said, as “losing capacity during a crisis is time-limited, and it requires us to respond to it as a medical emergency.”

The study was supported by an institutional grant from the French 2017 National Program of Health Services Research. The Clinical Research Direction of Assistance Publique Hôpitaux de Marseille sponsored the trial. Dr. Tinland declares grants from the French Ministry of Health Directorate General of Health Care Services during the conduct of the study.

A version of this article first appeared on Medscape.com.

FROM EPA 2022

Bone density loss in lean male runners parallels similar issue in women

Similar to a phenomenon already well documented in women, inadequate nutrition appears to be linked to hormonal abnormalities and potentially preventable tibial cortical bone density loss in athletic men, according to results of a small, prospective study.

Based on these findings, “we suspect that a subset of male runners might not be fueling their bodies with enough nutrition and calories for their physical activity,” reported Melanie S. Haines, MD, at the annual meeting of the Endocrine Society.

This is not the first study to suggest that male athletes are at risk of a condition equivalent to what has commonly been referred to as the female athlete triad, but it adds to the objective evidence that the phenomenon is real and points to insufficient energy availability as the likely cause.

In women, the triad is described as a lack of adequate stored energy, irregular menses, and bone density loss. Menstrual cycles are not relevant in men, of course, but this study, like others, suggests a link between the failure to maintain adequate stores of energy, disturbances in hormone function, and decreased bone density in both sexes, Dr. Haines explained.
 

RED-S vs. male or female athlete triad

“There is now a move away from the term female athlete triad or male athlete triad,” Dr. Haines reported. Rather, the factors of failing to maintain adequate energy for metabolic demands, hormonal disturbances, and bone density loss appear to be relevant to both sexes, according to Dr. Haines, an endocrinologist at Massachusetts General Hospital and assistant professor of medicine at Harvard Medical School, both in Boston. She said several groups, including the International Olympic Committee (IOC), have transitioned to the term RED-S to apply to both sexes.

“RED-S is an acronym for relative energy deficiency in sport, and it appears to be gaining traction,” Dr. Haines said in an interview.

According to her study and others, excessive leanness resulting from failure to supply sufficient energy for physiological needs “negatively affects hormones and bone,” Dr. Haines explained. In men and women, endocrine disturbances are triggered when insufficient calories lead to inadequate macro- and micronutrients.

In this study, 31 men aged 16-30 years were evaluated. Fifteen were in the athlete group, defined by running at least 30 miles per week for at least the previous 6 months. There were 16 control subjects; all exercised less than 2 hours per week and did not participate in team sports, but they were not permitted in the study if their body mass index exceeded 27.5 kg/m2.
 

Athletes vs. otherwise healthy controls

Conditions that affect bone health were exclusion criteria in both groups, and neither group was permitted to take medications affecting bone health other than dietary calcium or vitamin D supplements for 2 months prior to the study.

Tibial cortical porosity was significantly greater – signaling deterioration in microarchitecture – in athletes, compared with control subjects (P = .003), according to quantitative computed tomography measurements. There was also significantly lower tibial cortical bone mineral density (P = .008) among athletes relative to controls.

Conversely, tibial trabecular measures of bone density and architecture were better among athletes than controls, but this was expected and did not contradict the hypothesis of the study.

“Trabecular bone refers to the inner part of the bone, which increases with weight-bearing exercise, but cortical bone is the outer shell, and the source of stress fractures,” Dr. Haines explained.

The median age of both the athletes and the controls was 24 years, and baseline measurements were similar. Body mass index, fat mass, estradiol, and leptin were all numerically lower in the athletes than in the controls, but none of the differences reached significance, although there was a trend for leptin (P = .085).

Hormones correlated with tibial failure load

When these characteristics were evaluated in the context of mean tibial failure load, a metric related to bone strength, there were strongly significant positive associations with lean body mass (R = 0.85; P < .001) and estradiol level (R = 0.66; P = .007). The relationship with leptin also reached significance (R = 0.59; P = .046).

Unexpectedly, there was no relationship between testosterone and tibial failure load. The reason is unclear, but Dr. Haines’s interpretation is that the relationship between specific hormonal disturbances and bone density loss “might not be as simple” as once hypothesized.

The next step is a longitudinal evaluation of the same group of athletes to follow changes in the relationship between these variables over time, according to Dr. Haines.

Eventually, if evidence confirms a causal relationship between nutrition, hormonal changes, and bone loss, research in this area will focus on better detection of risk and on prophylactic strategies.

“Intervention trials to show that we can prevent stress fractures will be difficult to perform,” Dr. Haines acknowledged, but she said that preventing adverse changes in bone at relatively young ages could have implications for long-term bone health, including protection from osteoporosis later in life.

The research presented by Dr. Haines is consistent with an area of research that is several decades old, at least in females, according to Siobhan M. Statuta, MD, a sports medicine primary care specialist at the University of Virginia, Charlottesville. The evidence that the same phenomenon occurs in men is more recent, but she said it is now well accepted that there is a parallel hormonal issue in men and women.

“It is not a question of not eating enough. Often, athletes continue to consume the same diet, but their activity increases,” Dr. Statuta explained. “The problem is that they are not supplying enough of the calories they need to sustain the energy they are expending. You might say they are not fueling their engines appropriately.”

In 2014, the International Olympic Committee published a consensus statement on RED-S. The committee described this as a condition in which a state of energy deficiency leads to numerous complications in athletes, not just osteoporosis. Rather, complications were described across a host of physiological systems, ranging from gastrointestinal complaints to cardiovascular events.
 

RED-S addresses health beyond bones

“The RED-S theory is better described as a spoke-and-wheel concept rather than a triad. While inadequate energy availability is important to both, RED-S places this at the center of the wheel with spokes leading to all the possible complications rather than as a first event in a limited triad,” Dr. Statuta said in an interview.

However, she noted that the term RED-S is not yet appropriate to replace that of the male and female athlete triad.

“More research is required to hash out the relationship of a body in a state of energy deficiency and how it affects the entire body, which is the principle of RED-S,” Dr. Statuta said. “There likely are scientific effects, and we are currently investigating these relationships more.”

“These are really quite similar entities but have different foci,” she added. Based on data collected over several decades, “the triad narrows in on two body systems affected by low energy – the reproductive system and bones. RED-S incorporates these same systems yet adds on many more organ systems.”

The original group of researchers has remained loyal to the concept of the triad, which involves inadequate availability of energy followed by hormonal irregularities and osteoporosis. This group, the Female and Male Athlete Triad Coalition, has issued publications on the topic several times, most recently updating its consensus statements last year.

“The premise is that the triad leading to bone loss is shared by both men and women, even if the clinical manifestations differ,” said Dr. Statuta. The most notable difference is that men do not experience menstrual irregularities, but Dr. Statuta suggested that the clinical consequences are not necessarily any less.

“Males do not have menstrual cycles as an outward marker of an endocrine disturbance, so it is harder to recognize clinically, but I think there is agreement that not having enough energy available is the trigger of endocrine changes and then bone loss is relevant to both sexes,” she said. She said this is supported by a growing body of evidence, including the data presented by Dr. Haines at the Endocrine Society meeting.

Dr. Haines and Dr. Statuta report no potential conflicts of interest.

FROM ENDO 2022

Debated: Nonfactor versus gene therapy for hemophilia

Whether patients with hemophilia A should stick with effective nonfactor therapy or join a clinical trial of potentially curative gene therapy – this question was debated at the annual meeting of the European Hematology Association.

Ultimately, results of a very informal poll of the online audience suggested a strong leaning toward the known benefits of nonfactor therapy over as-yet unapproved gene therapy. Although Benjamin Samelson-Jones, MD, PhD, argued for gene therapy, he also saluted the progress that has made such choices possible.

“Our patients and the field have greatly benefited from this broad spectrum of different therapies and how they’ve been implemented, and it’s a truly exciting time because there will continue to be advancements in both these therapeutic modalities in the next 5-10 years,” said Dr. Samelson-Jones, an assistant professor of pediatrics in the division of hematology at the Children’s Hospital of Philadelphia.
 

Game changers emerge

Hemophilia A, characterized by a hereditary deficiency of clotting factor VIII, has long been managed with prophylactic procoagulant factor replacement therapy that requires intravenous injection as often as several times a week. This can cause problems with venous access that are particularly burdensome for pediatric patients.

Nonfactor therapy, which currently consists of the approved agent emicizumab, with more agents in development, promotes coagulation without replacement of factor VIII. Importantly, this treatment requires only subcutaneous injection, which, after a loading-dose period, may be needed weekly or even just once a month.

However, in 2018, at approximately the same time that emicizumab was approved, patients with hemophilia A became eligible to enroll in clinical trials of the far more revolutionary concept of gene therapy, with the chance to become free of regular infusions after just a single administration.

There are caveats aplenty. Four of the therapies now in phase 3 development are liver-directed adeno-associated viral vectors, meaning that patients need to be followed closely in the first months after infusion, with regular blood tests and other monitoring.

Notably, once patients receive an infusion, they cannot receive another, because of the buildup of antibodies.

“I think [this is] most important when considering current gene therapy – a patient can only receive it once, based on current technology,” Dr. Samelson-Jones said in an interview. “That means if a patient received gene therapy in 2023, and something better is developed in 2025, they are unlikely to be able to receive it.”

Nevertheless, with favorable phase 3 data reported in March 2022 in the New England Journal of Medicine, the first gene therapy for hemophilia A, valoctocogene roxaparvovec (BioMarin), appears poised for possible regulatory approval very soon.

“I expect this product to be approved in the next year, though I have been surprised before by delays in this product’s clinical development,” Dr. Samelson-Jones said.
 

Pros of nonfactor therapy

Arguing on the side of nonfactor therapy in the debate, Roseline d’Oiron, MD, underscored the extent to which nonfactor therapy has dramatically transformed lives.

With intravenous injections, “the burden of the stress and anxiety of the injections is underestimated, even when you don’t have venous access problems,” said Dr. d’Oiron, a clinician investigator at the University Paris XI.

The heavy toll that these therapeutic challenges have had on patients’ lives and identities has been documented in patient advocacy reports, underscoring that “the availability of subcutaneous therapies through the nonfactor therapies for hemophilia A has really been a game changer,” said Dr. d’Oiron, who is also the associate director of the Reference Centre for Hemophilia and Other Congenital Rare Bleeding Disorders, Congenital Platelets Disorders, and von Willebrand Disease at Bicêtre (France) Hospital AP-HP.

She noted that newer therapies in development show the potential to offer longer half-lives, providing “even more improvement with wider intervals between the subcutaneous injections.”

The efficacy of nonfactor therapies also translates to lower rates of joint bleeding, the most common complication of hemophilia and a potential cause of acute or chronic pain.

“These therapies allow a life that is much closer to what would be considered a normal life, and especially allowing some physical activities with the prevention of bleeding episodes,” Dr. d’Oiron said. “The drugs have a good safety profile and are completely changing the picture of this disease.”

Dr. d’Oiron noted that, in the real-world clinical setting, there is no debate over nonfactor versus gene therapy. Most prefer to stick with what is already working well for them.

“In my clinical practice, only a very limited number of patients are really willing and considering the switch to gene therapy,” she said. “They feel that the nonfactor therapy is filling their previous unmet needs quite well, and the impression is that we don’t necessarily need look for something different.”
 

Limitations of nonfactor therapy

While noting that he has had the same favorable experiences with patients on emicizumab as those described by Dr. d’Oiron, Dr. Samelson-Jones pointed out key caveats that significantly differentiate it from gene therapy, not the least of which is the basic requirement for ongoing injections.

“Even with longer half-lives, approximately monthly injections are still required with nonfactor therapy,” which can be – and have been – compromised by a range of societal disruptions, including a pandemic or supply issues.

Furthermore, nonfactor therapies provide hemostatic activity outside of the normal regulation of factor VIII, with “no easy ‘off’ switch,” he explained.

“The balance that nonfactor agents provide between pro- and anticoagulant forces is inherently more fragile – more like a knife’s edge, and has resulted in the risk for thrombotic complications in most examples of nonfactor therapies,” he said.

In addition, the therapies have unknown immunogenicity, and there is a theoretical risk of developing antidrug antibodies, called inhibitors, if factor VIII is administered only in the setting of bleeds or perioperatively, Dr. Samelson-Jones said.

That being said, “nonfactor agents are not for all patients with hemophilia A in the future – but rather gene therapy is,” he noted.
 

Normal hemostasis ‘only achievable with gene therapy’

In contrast to nonfactor therapy, just one infusion of gene therapy “ideally offers many years of potentially curative hemostatic protection,” Dr. Samelson-Jones said. “The ultimate goal, I believe, is to achieve normal hemostasis and health equity, and I contend this goal is only really achievable with gene therapy.”

He noted that, while gene therapies will require initial monitoring, “once the gene therapy recipient is 3 or 12 months out, the monitoring really de-escalates, and the patient is free from all drug delivery or needing to be in close contact with their treatment center.”

Regarding concerns about not being able to receive gene therapy more than once, Dr. Samelson-Jones said that work is underway to develop alternative viral vectors and nonviral vectors that may overcome those challenges.

Overall, he underscored that challenges are par for the course in the development of any novel therapeutic approach.

For instance, similar challenges were experienced 10 years ago in the development of gene therapy for hemophilia B. However, with advances, “they’ve now been able to achieve long-term sustained levels in the normal ordinary curative range. And I’m optimistic that similar advances may be able to be achieved for factor VIII gene transfer,” he said.
 

Nonfactor therapies as bridge?

That being said, nonfactor therapies are going to be essential in treating patients until such advances come to fruition, Dr. Samelson-Jones noted.

“I would agree that nonfactor therapies in 2022 have really simplified and improved the convenience of prophylaxis,” he said, “but I would view them as a bridging therapy until gene therapy goes through clinical development and is licensed for all patients with hemophilia.”

While Dr. d’Oiron agreed with that possibility, she countered that, when it comes to crossing over to gene therapy, some very long bridges might be needed.

“I would love to have a therapy that would be both extremely safe and effective and offering a cure and normalization of hemostasis,” she said. “But I’m afraid that the currently available gene therapy that might be arriving soon still does not fulfill all of these criteria. I think there are a lot of questions so far.”

Ultimately, Dr. Samelson-Jones conceded that the success of emicizumab has set a high bar in the minds of clinicians and patients alike, which will strongly influence perceptions of any alternative approaches – and of participation in clinical trials.

“I think that, unequivocally, emicizumab has changed the risk-benefit discussion about enrolling in clinical trials, and in gene therapy in particular,” he said. “And I think it also has set the threshold for efficacy – and if a gene therapy product in development can’t achieve bleeding control that is similar to that provided with emicizumab, then that is not a product that is going to be able to continue in clinical development.”

Importantly, both debaters underscored the need for ongoing efforts to make these novel – and therefore costly – therapies accessible to all, through organizations including the World Federation of Hemophilia Humanitarian Aid Program.

“It would be my hope that we can then extend all of these great therapies to the majority of undertreated patients with hemophilia around the world,” Dr. Samelson-Jones said. “I think that’s an issue that must be addressed with all of these novel therapies.”

Commenting on these issues, Riitta Lassila, MD, professor of coagulation medicine at the Comprehensive Cancer Center at Helsinki University Hospital, who moderated the debate, said it has also been her experience that some patients express reluctance to enter gene therapy trials.

“There are two groups of patients, just as in the healthy population as well,” she said in an interview. “Some [are] more ready to take risks and some are very hesitant [regarding] anything new. We do have the saying: If something is not broken, don’t fix it.”

She noted the additional concern that, while gene therapy has been successful in hemophilia B, factor VIII requires a larger gene construct, which may limit its success in hemophilia A.

Furthermore, “the sustainability of factor VIII production may decrease in a couple of years, and the treatment duration could remain suboptimal,” Dr. Lassila said. “However, hemostasis seems to still [be achieved] with gene therapy, so maybe there will be more efficient solutions in the future.”

Dr. Samelson-Jones has been a consultant for Pfizer, Bayer, Genentech, Frontera, and Cabaletta and serves on the scientific advisory board of GeneVentiv. Dr. d’Oiron has reported relationships with Baxalta/Shire, Bayer, BioMarin, CSL Behring, LFB, Novo Nordisk, Octapharma, Pfizer, Roche, and Sobi. Dr. Lassila has been an adviser for Roche (emicizumab) and for BioMarin and CSL on gene therapy.

COVID-19 pandemic stress affected ovulation, not menstruation

Article Type
Changed
Tue, 06/21/2022 - 14:50

ATLANTA – Disturbances in ovulation that didn’t produce any actual changes in the menstrual cycle of women were extremely common during the first year of the COVID-19 pandemic and were linked to emotional stress, according to the findings of an “experiment of nature” that allowed for comparison with women a decade earlier.

Findings from two studies of reproductive-age women, one conducted in 2006-2008 and the other in 2020-2021, were presented by Jerilynn C. Prior, MD, at the annual meeting of the Endocrine Society.

The comparison of the two time periods yielded several novel findings. “I was taught in medical school that when women don’t eat enough they lose their period. But what we now understand is there’s a graded response to various stressors, acting through the hypothalamus in a common pathway. There is a gradation of disturbances, some of which are subclinical or not obvious,” said Dr. Prior, professor of endocrinology and metabolism at the University of British Columbia, Vancouver.

Moreover, women’s menstrual cycle lengths didn’t differ across the two time periods, despite a dramatic 63% decrement in normal ovulatory function related to increased depression, anxiety, and outside stresses that the women reported in diaries.

“Assuming that regular cycles need normal ovulation is something we should just get out of our minds. It changes our concept about what’s normal if we only know about the cycle length,” she observed.

It will be critical going forward to see whether the ovulatory disturbances have resolved as the pandemic has shifted “because there’s strong evidence that ovulatory disturbances, even with normal cycle length, are related to bone loss and some evidence it’s related to early heart attacks, breast and endometrial cancers,” Dr. Prior said during a press conference.

Asked to comment, session moderator Genevieve Neal-Perry, MD, PhD, told this news organization: “I think what we can take away is that stress itself is a modifier of the way the brain and the gonads communicate with each other, and that then has an impact on ovulatory function.”

Dr. Neal-Perry noted that the association of stress and ovulatory disruption has been reported in various ways previously, but “clearly it doesn’t affect everyone. What we don’t know is who is most susceptible. There have been some studies showing a genetic predisposition and a genetic anomaly that actually makes them more susceptible to the impact of stress on the reproductive system.”

But the lack of data on weight change in the study cohorts is a limitation. “To me one of the more important questions was what was going on with weight. Just looking at a static number doesn’t tell you whether there were changes. We know that weight gain or weight loss can stress the reproductive axis,” noted Dr. Neal-Perry, of the department of obstetrics and gynecology at the University of North Carolina at Chapel Hill.
 

‘Experiment of nature’ revealed invisible effect of pandemic stress

The women in both cohorts of the Menstruation Ovulation Study (MOS) were healthy volunteers aged 19-35 years recruited from the metropolitan Vancouver region. All were menstruating monthly and none were taking hormonal birth control. Recruitment for the second cohort had begun just prior to the March 2020 COVID-19 pandemic lockdown.

Interviewer-administered questionnaires (CaMos) covering demographics, socioeconomic status, and reproductive history, and daily diaries kept by the women (menstrual cycle diary) were identical for both cohorts.

Assessments of ovulation differed for the two studies but were cross-validated. For the earlier cohort, ovulation was confirmed by a threefold increase from the follicular to the luteal phase in the urinary progesterone metabolite pregnanediol glucuronide (PdG). For the pandemic-era cohort, the validated quantitative basal temperature (QBT) method was used.
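
To make that criterion concrete, here is a minimal sketch – with hypothetical function and variable names and made-up values, not the investigators’ actual algorithm (the QBT method is not shown) – of how a single cycle could be labeled from follicular- and luteal-phase PdG measurements:

```python
def classify_cycle(follicular_pdg: float, luteal_pdg: float, rise_threshold: float = 3.0) -> str:
    """Label one menstrual cycle from urinary PdG levels (arbitrary units).

    A cycle is called 'ovulatory' when luteal-phase PdG is at least
    `rise_threshold` times the follicular-phase level, mirroring the
    threefold criterion described for the 2006-2008 cohort. Illustrative only.
    """
    if follicular_pdg <= 0:
        raise ValueError("follicular-phase PdG must be positive")
    ratio = luteal_pdg / follicular_pdg
    return "ovulatory" if ratio >= rise_threshold else "ovulatory disturbance"

# Hypothetical example values:
print(classify_cycle(follicular_pdg=1.2, luteal_pdg=4.5))  # 3.75-fold rise -> ovulatory
print(classify_cycle(follicular_pdg=1.2, luteal_pdg=2.0))  # 1.7-fold rise -> ovulatory disturbance
```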

There were 301 women in the earlier cohort and 125 during the pandemic. Both groups had an average age of about 29 years and a body mass index of about 24.3 kg/m2 (within the normal range). The pandemic cohort was more racially/ethnically diverse than the earlier one and more in line with recent census data.

More of the women were nulliparous during the pandemic than earlier (92.7% vs. 80.4%; P = .002).

The distribution of menstrual cycle lengths didn’t differ, with both cohorts averaging about 30 days (P = .893). However, while 90% of the women in the earlier cohort ovulated normally, only 37% did during the pandemic, a highly significant difference (P < .0001).
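
As an informal consistency check on that P value – the study’s actual statistical test is not described here – a two-proportion z-test on counts back-calculated from the reported percentages (roughly 271 of 301 women in the earlier cohort and 46 of 125 in the pandemic cohort ovulating normally) gives a result far below .0001:

```python
from statsmodels.stats.proportion import proportions_ztest

# Approximate counts back-calculated from the reported rates:
# ~90% of 301 women (earlier cohort) vs. ~37% of 125 women (pandemic cohort) ovulated normally.
ovulated_normally = [271, 46]
cohort_sizes = [301, 125]

z_stat, p_value = proportions_ztest(count=ovulated_normally, nobs=cohort_sizes)
print(f"z = {z_stat:.1f}, p = {p_value:.1e}")  # p is many orders of magnitude below .0001
```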

Thus, during the pandemic, 63% of women had “silent ovulatory disturbances,” either with short luteal phases after ovulation or no ovulation, compared with just 10% in the earlier cohort, “which is remarkable, unbelievable actually,” Dr. Prior remarked.  

The difference wasn’t explained by any of the demographic information collected either, including socioeconomic status, lifestyle, or reproductive history variables.

And it wasn’t because of COVID-19 vaccination, as the vaccine wasn’t available when most of the women were recruited, and of the 79 who were recruited during vaccine availability, only two received a COVID-19 vaccine during the study (and both had normal ovulation).

Employment changes, caring responsibilities, and worry likely causes

The information from the diaries was more revealing. Several diary components differed markedly during the pandemic, including negative mood (feeling depressed or anxious, sleep problems, and outside stresses), self-worth, interest in sex, energy level, and appetite. All were significantly different between the two cohorts (P < .001) and between those with and without ovulatory disturbances.

“So menstrual cycle lengths and long cycles didn’t differ, but there was a much higher prevalence of silent or subclinical ovulatory disturbances, and these were related to the increased stresses that women recorded in their diaries. This means that the estrogen levels were pretty close to normal but the progesterone levels were remarkably decreased,” Dr. Prior said.

Interestingly, reported menstrual cramps were also significantly more common during the pandemic and associated with ovulatory disruption.

“That is a new observation because previously we’ve always thought that you needed to ovulate in order to even have cramps,” she commented.

Asked whether COVID-19 itself might have played a role, Dr. Prior said no woman in the study tested positive for the virus or had long COVID.

“As far as I’m aware, it was the changes in employment … and caring for elders and worry about illness in somebody you loved that was related,” she said.

Asked what she thinks the result would be if the study were conducted now, she said: “I don’t know. We’re still in a stressful time with inflation and not complete recovery, so probably the issue is still very present.”

Dr. Prior and Dr. Neal-Perry have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Osteoporosis risk rises with air pollution levels

Article Type
Changed
Tue, 07/05/2022 - 13:58

COPENHAGEN – Chronic exposure to high levels of particulate matter (PM) air pollution – particles no larger than 2.5 mcm (PM2.5) or 10 mcm (PM10) in diameter – is associated with a significantly higher likelihood of having osteoporosis, according to research presented at the annual European Congress of Rheumatology.

The results of the 7-year longitudinal study carried out across Italy from 2013 to 2019 dovetail with other recently published reports from the same team of Italian researchers, led by Giovanni Adami, MD, of the rheumatology unit at the University of Verona (Italy). In addition to the current report presented at EULAR 2022, Dr. Adami and associates have reported an increased risk of flares of both rheumatoid arthritis and psoriasis following periods of elevated pollution, as well as an overall elevated risk for autoimmune diseases with higher concentrations of PM2.5 and PM10.



The pathogenesis of osteoporosis is thought to involve both genetic and environmental inputs, such as smoking, which is itself a form of air pollution, Dr. Adami said. The biological rationale for why air pollution might contribute to osteoporosis risk comes from studies showing that exposure to indoor air pollution from biomass combustion raises serum levels of RANKL (receptor activator of nuclear factor-kappa B ligand) but lowers serum osteoprotegerin – suggesting an increased risk of bone resorption – and that toxic metals such as lead, cadmium, mercury, and aluminum accumulate in the skeleton and negatively affect bone health.

In their study, Dr. Adami and colleagues found that, overall, the average exposure during the period 2013-2019 across Italy was 16.0 mcg/m3 for PM2.5 and 25.0 mcg/m3 for PM10.

“I can tell you that [25.0 mcg/m3 for PM10] is a very high exposure. It’s not very good for your health,” Dr. Adami said.

Data on more than 59,000 Italian women

Dr. Adami and colleagues used clinical characteristics and densitometric data from Italy’s osteoporosis fracture risk and osteoporosis screening reimbursement tool known as DeFRAcalc79, which has amassed variables from more than 59,000 women across the country. They used long-term average PM concentrations across Italy during 2013-2019 that were obtained from the Italian Institute for Environmental Protection and Research’s 617 air quality stations in 110 Italian provinces. The researchers linked individuals to a PM exposure value determined from the average concentration of urban, rural, and near-traffic stations in each person’s province of residence.
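
A minimal sketch of that linkage step is shown below, using pandas with invented province names, station readings, and column labels rather than the investigators’ actual data or code: station-level long-term averages are pooled within each province and then merged onto individual records by province of residence.

```python
import pandas as pd

# Hypothetical inputs: one row per monitoring station, one row per woman.
stations = pd.DataFrame({
    "province": ["Verona", "Verona", "Milano"],
    "pm25": [17.1, 15.3, 21.4],   # long-term average PM2.5, mcg/m3
    "pm10": [26.0, 24.2, 33.8],   # long-term average PM10, mcg/m3
})
women = pd.DataFrame({
    "id": [1, 2, 3],
    "province": ["Verona", "Milano", "Milano"],
})

# Average all stations (urban, rural, near-traffic) within each province...
province_exposure = stations.groupby("province", as_index=False)[["pm25", "pm10"]].mean()

# ...then assign each woman the average exposure for her province of residence.
linked = women.merge(province_exposure, on="province", how="left")
print(linked)
```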

For 59,950 women across Italy who were at high risk for fracture, the researchers found that 64.5% had bone mineral density in the osteoporotic range. At PM10 concentrations of 30 mcg/m3 or greater, there was a significantly higher likelihood of osteoporosis at both the femoral neck (odds ratio, 1.15) and lumbar spine (OR, 1.17).

The likelihood of osteoporosis was slightly greater with PM2.5 at concentrations of 25 mcg/m3 or more at the femoral neck (OR, 1.22) and lumbar spine (OR, 1.18). These comparisons were adjusted for age, body mass index (BMI), presence of prevalent fragility fractures, family history of osteoporosis, menopause, glucocorticoid use, comorbidities, and for residency in northern, central, or southern Italy.
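
Adjusted odds ratios of this kind typically come from a multivariable logistic model. The sketch below shows, on a small synthetic data set with invented variable names and only a subset of the listed covariates, how such an estimate could be obtained with statsmodels; it illustrates the general approach, not the study’s actual model specification, and the exposure effect is simply set to a magnitude similar to the reported ORs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: binary high-exposure indicator plus two illustrative covariates.
df = pd.DataFrame({
    "high_pm10": rng.integers(0, 2, n),   # 1 = chronic PM10 exposure of 30 mcg/m3 or greater
    "age": rng.normal(65, 8, n),
    "bmi": rng.normal(25, 4, n),
})

# Generate an outcome with a built-in exposure log-odds of 0.15 (true OR ~1.16).
log_odds = -6.0 + 0.15 * df["high_pm10"] + 0.08 * df["age"] - 0.05 * df["bmi"]
df["osteoporosis"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Covariate-adjusted odds ratio for the high-exposure indicator.
model = smf.logit("osteoporosis ~ high_pm10 + age + bmi", data=df).fit(disp=False)
print(np.exp(model.params["high_pm10"]))  # estimated adjusted OR for high PM10 exposure
```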

Both thresholds of PM10 > 30 mcg/m3 and PM2.5 > 25 mcg/m3 “are considered safe … by the World Health Organization,” Dr. Adami pointed out.

“If you live in a place where the chronic exposure is less than 30 mcg/m3, you probably have slightly lower risk of osteoporosis as compared to those who live in a highly industrialized, polluted zone,” he explained.

“The cortical bone – femoral neck – seemed to be more susceptible, compared to trabecular bone, which is the lumbar spine. We have no idea why this is true, but we might speculate that somehow chronic inflammation like the [kind] seen in rheumatoid arthritis might be responsible for cortical bone impairment and not trabecular bone impairment,” Dr. Adami said.

One audience member, Kenneth Poole, BM, PhD, senior lecturer and honorary consultant in Metabolic Bone Disease and Rheumatology at the University of Cambridge (England), asked whether it was possible to account for the possibility of confounding caused by areas with dense housing in places where the particulate matter would be highest, and where residents may be less active and use stairs less often.

Dr. Adami noted that confounding is indeed a possibility, but he said Italy is unique in that its most polluted area – the Po River valley – is also its most wealthy area and in general has less crowded living situations with a healthier population, which could have attenuated, rather than reinforced, the results.

Does air pollution have an immunologic effect?

In interviews with this news organization, session comoderators Filipe Araújo, MD, and Irene Bultink, MD, PhD, said that the growth in evidence for the impact of air pollution on risk for, and severity of, various diseases suggests air pollution might have an immunologic effect.

“I think it’s very important to point this out. I also think it’s very hard to rule out confounding, because when you’re living in a city with crowded housing you may not walk or ride your bike but instead go by car or metro, and [the lifestyle is different],” said Dr. Bultink of Amsterdam University Medical Centers.

“It stresses that these diseases [that are associated with air pollution], although they are different in their pathophysiology, point toward the systemic nature of rheumatic diseases, including osteoporosis,” said Dr. Araújo of Hospital Cuf Cascais (Portugal) and Hospital Ortopédico de Sant’Ana, Parede, Portugal.

The study was independently supported. Dr. Adami disclosed being a shareholder of Galapagos and Theramex.

A version of this article first appeared on Medscape.com.


Could a type 2 diabetes drug tackle kidney stones?

Article Type
Changed
Tue, 07/05/2022 - 13:59

Patients with type 2 diabetes who received empagliflozin, a sodium glucose cotransporter-2 (SGLT2) inhibitor, were almost 40% less likely than patients who received placebo to develop a kidney stone during a median 1.5 years of treatment.

These findings are from an analysis of pooled data from phase 1-4 clinical trials of empagliflozin for blood glucose control in 15,081 patients with type 2 diabetes.  

Priyadarshini Balasubramanian, MD, presented the study as a poster at the annual meeting of the Endocrine Society; the study also was published online in the Journal of Clinical Endocrinology & Metabolism.

The researchers acknowledge this was a retrospective, post-hoc analysis and that urolithiasis – a stone in the urinary tract, which includes nephrolithiasis, a kidney stone – was an adverse event, not a primary or secondary outcome.

Also, the stone composition, which might help explain how the drug may affect stone formation, is unknown.

Therefore, “dedicated randomized prospective clinical trials are needed to confirm these initial observations in patients both with and without type 2 diabetes,” said Dr. Balasubramanian, a clinical fellow in the section of endocrinology & metabolism, department of internal medicine at Yale University, New Haven, Conn.

However, “if this association is proven, empagliflozin may be used to decrease the risk of kidney stones at least in those with type 2 diabetes, but maybe also in those without diabetes,” Dr. Balasubramanian said in an interview.

Further trials are also needed to determine if this is a class effect, which is likely, she speculated, and to unravel the potential mechanism.

This is important because of the prevalence of kidney stones, which affect up to 15% of the general population and 15%-20% of patients with diabetes, she explained.
 

‘Provocative’ earlier findings

The current study was prompted by a recent observational study by Kasper B. Kristensen, MD, PhD, and colleagues.

Because SGLT2 inhibitors increase urinary glucose excretion through reduced renal reabsorption of glucose leading to osmotic diuresis and increased urinary flow, they hypothesized that these therapies “may reduce the risk of upper urinary tract stones (nephrolithiasis) by reducing the concentration of lithogenic substances in urine.” 

Using data from Danish health registries, they matched 12,325 individuals newly prescribed an SGLT2 inhibitor 1:1 with patients newly prescribed a glucagonlike peptide-1 (GLP-1) agonist, another new class of drugs for treating type 2 diabetes.

They found a hazard ratio of 0.51 (95% confidence interval, 0.37-0.71) for incident nephrolithiasis and a hazard ratio of 0.68 (95% CI, 0.48-0.97) for recurrent nephrolithiasis for patients taking SGLT2 inhibitors versus GLP-1 agonists.

These findings are “striking,” according to Dr. Balasubramanian and colleagues. However, “these data, while provocative, were entirely retrospective and therefore possibly prone to bias,” they add.
 

Pooled data from 20 trials

The current study analyzed data from 20 randomized controlled trials of glycemic control in type 2 diabetes, in which 10,177 patients had received empagliflozin 10 mg or 25 mg and 4,904 patients had received placebo.

The largest share of patients (46.5%) had participated in the EMPA-REG OUTCOME trial, which also had the longest follow-up (2.6 years).

The researchers identified patients who developed a new stone anywhere in the urinary tract (including the kidney, ureter, and urethra). Patients had received the study drug for a median of 543 days or placebo for a median of 549 days.

During treatment, 104 of 10,177 patients in the pooled empagliflozin groups and 79 of 4,904 patients in the pooled placebo groups developed a stone in the urinary tract.

This was equivalent to 0.63 new urinary-tract stones per 100 patient-years in the pooled empagliflozin groups versus 1.01 new urinary-tract stones per 100 patient-years in the pooled placebo groups.

The incidence rate ratio was 0.64 (95% CI, 0.48-0.86), in favor of empagliflozin.
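
For readers who want to see how this figure is arrived at, here is a minimal Python sketch that recomputes the incidence rate ratio and an approximate 95% confidence interval from the rounded numbers reported above (event counts and rates per 100 patient-years). It is an illustration only, not the study's own analysis, and because the published calculation used exact patient-years, the result differs slightly from the reported 0.64 (95% CI, 0.48-0.86).

import math

# Rounded figures reported above (stones anywhere in the urinary tract)
events_empa, rate_empa = 104, 0.63 / 100   # events, rate per patient-year
events_plac, rate_plac = 79, 1.01 / 100

# Back out approximate exposure time (patient-years) from events / rate
py_empa = events_empa / rate_empa          # roughly 16,500 patient-years
py_plac = events_plac / rate_plac          # roughly 7,800 patient-years

# Incidence rate ratio and 95% CI via the usual log-scale approximation
irr = (events_empa / py_empa) / (events_plac / py_plac)
se_log = math.sqrt(1 / events_empa + 1 / events_plac)
lo = math.exp(math.log(irr) - 1.96 * se_log)
hi = math.exp(math.log(irr) + 1.96 * se_log)

print(f"IRR ~ {irr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
# Prints roughly 0.62 (0.47-0.84); the published 0.64 (0.48-0.86) is based
# on exact patient-years rather than the rounded rates used here.

The same arithmetic, applied to the kidney-stone-only counts in the next paragraph, reproduces the second incidence rate ratio to within rounding.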

When the analysis was restricted to new kidney stones, the results were similar: 75 of 10,177 patients in the pooled empagliflozin groups and 57 of 4,904 patients in the pooled placebo groups developed a kidney stone.

This was equivalent to 0.45 new kidney stones per 100 patient-years in the pooled empagliflozin groups versus 0.72 new kidney stones per 100 patient-years in the pooled placebo groups.

The IRR was 0.65 (95% CI, 0.46-0.92), in favor of empagliflozin.

Upcoming small RCT in adults without diabetes

Invited to comment on the new study, Dr. Kristensen said: “The reduced risk of SGLT2 inhibitors towards nephrolithiasis is now reported in at least two studies with different methodology, different populations, and different exposure and outcome definitions.”

“I agree that randomized clinical trials designed specifically to confirm these findings appear warranted,” added Dr. Kristensen, from the Institute of Public Health, Clinical Pharmacology, Pharmacy, and Environmental Medicine, University of Southern Denmark in Odense.

There is a need for studies in patients with and without diabetes, he added, especially ones that focus on prevention of nephrolithiasis in patients with kidney stone disease.

A new trial should shed further light on this.

Results are expected by the end of 2022 for SWEETSTONE (Impact of the SGLT2 Inhibitor Empagliflozin on Urinary Supersaturations in Kidney Stone Formers), a randomized, double-blind crossover exploratory study in 46 patients without diabetes.

This should provide preliminary data to “establish the relevance for larger trials assessing the prophylactic potential of empagliflozin in kidney stone disease,” according to an article on the trial protocol recently published in BMJ.

The trials included in the pooled dataset were funded by Boehringer Ingelheim or the Boehringer Ingelheim and Eli Lilly Diabetes Alliance. Dr. Balasubramanian has reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


This breast tumor subtype disproportionately affects Black women

Article Type
Changed
Wed, 01/04/2023 - 16:57

Hormone receptor-positive (HR+) basal tumors are biologically analogous to triple negative breast cancer (TNBC), independent of race. That finding, drawn from transcriptomic analyses of a racially diverse cohort that also identified racial disparities in the proportion of HR-positive basal tumors, underscores the need for diverse racial representation in clinical trials, researchers recently reported at the annual meeting of the American Society of Clinical Oncology.

The leading cause of cancer-associated death among Black women is breast cancer, and compared with White women, Black women are 41% more likely to die from breast cancer, said Sonya A. Reid, MD, MPH, a medical oncologist with the Vanderbilt-Ingram Cancer Center, Nashville, Tenn., and the study author.

Few studies, Dr. Reid said, have evaluated whether differences in tumor biology contribute to the racial disparity in outcomes. Hormone receptor-positive tumors classified as basal type on Blueprint genomic analysis (HR+/Basal) are overrepresented among Black women. These tumors are thought to be similar to TNBC tumors, which are more aggressive and tend to have worse outcomes.

TNBC, Dr. Reid said, is associated with low ACKR1 expression, which encodes the Duffy antigen and correlates with worse breast cancer outcomes. Given the overrepresentation and worse outcomes among Black women with HR-positive basal tumors, Dr. Reid and colleagues compared differentially expressed genes (DEGs) by race and subtype.

Their analysis of data from 2,657 women with stage 1, 2, or 3 breast cancer showed that, among 455 Black women, 315 had HR-positive luminal tumors and 140 had basal tumors (66 HR-positive basal and 74 HR-negative basal). Among the 2,202 White women included as a reference group, tumors were HR-positive luminal in 1,825 and HR-positive basal or HR-negative basal in 158 and 219, respectively. The proportion of Black women with HR-positive basal tumors was significantly higher than that of White women (15% versus 7%; P < 0.001), as was the proportion with HR-negative basal tumors (16% versus 10%; P < 0.001).
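
To illustrate the arithmetic behind comparisons of this kind, the following minimal Python sketch runs a simple two-proportion z-test on the HR-positive basal counts reported above (66 of 455 Black women versus 158 of 2,202 White women). It is not necessarily the test the investigators used, which is not described in detail here, but it yields proportions of roughly 15% and 7% and a P value well below 0.001, in line with the reported figures.

import math

# HR-positive basal tumors, from the counts reported above
x1, n1 = 66, 455      # Black women
x2, n2 = 158, 2202    # White women (reference group)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)

# Two-proportion z-test (normal approximation)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"{p1:.0%} vs {p2:.0%}, z = {z:.2f}, P = {p_value:.1e}")
# Prints approximately 15% vs 7%, z of about 5.1, and a P value far below
# 0.001 -- consistent with the significance reported above.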

Women included in the study were participants in the ongoing BEST study (5R01CA204819) at Vanderbilt University Medical Center, Nashville, Tenn., or the FLEX study (NCT03053193). In a multidimensional scaling analysis, HR-positive basal tumors clustered with TNBC rather than with HR-positive luminal tumors. A differential gene expression analysis comparing HR-positive basal with HR-positive luminal tumors identified more than 700 differentially expressed genes in Black women, whereas no such genes were identified when HR-positive basal tumors were compared with TNBC. ACKR1 expression in HR-positive basal tumors was comparable to that in TNBC in Black women (P = 0.81) and White women (P = 0.46). In contrast, HR-positive basal tumors had significantly lower ACKR1 expression than HR-positive luminal tumors in both Black women (P < 0.01) and White women (P < 0.01).

The findings highlight the importance of further genomic classification for patients with HR-positive tumors, Dr. Reid said.

“Molecular subtype classification is not standard of care for patients with localized breast cancer. However, the current analysis suggests that genomic classification could have important clinical implications. Women with HR-positive basal tumors should not be treated uniformly with HR-positive luminal tumors. Our data suggest that HR-positive basal tumors are transcriptomically similar to TNBC tumors and should potentially be treated similar to TNBC,” she said.

There are several genomic tests that are widely available clinically to guide treatment decisions and are covered by insurance, Dr. Reid said. Prior studies have shown racial disparity in the omission of genomic tests to guide treatment decisions, however. “Increasing access [to] and awareness of genomic testing will improve guideline-adherent care for all patients. We must intentionally recruit minority patients into clinical trials, knowing that Black women are more likely to die of their breast cancer,” she said.

A further impediment lies in the fact that while most minority patients receive their care in the community, most clinical trials are offered at large academic centers, Dr. Reid said. Future trials, she urged, should include a predetermined percentage of racial/ethnic groups in the clinical trial design to reflect the breast cancer population.

Limitations of the study included that race was self-reported. She noted further that the data for survival are not yet mature. She added, “We will also be evaluating the association of different systemic treatment options across the different molecular subtypes.”

Dr. Reid reported no relevant disclosures.


Venetoclax combos prolong progression-free CLL survival

Article Type
Changed
Thu, 01/12/2023 - 10:44

Use of the targeted therapy combination of venetoclax plus obinutuzumab for fit patients with chronic lymphocytic leukemia (CLL) significantly improved progression-free survival (PFS) at 3 years, compared with standard chemoimmunotherapy, new phase 3 data show.

Adding the Bruton tyrosine kinase inhibitor ibrutinib to the two-drug combination pushed the 3-year PFS even higher, but the risk of severe adverse events may outweigh the benefits of the triple combination for some higher-risk patients.

“Time-limited targeted therapy with venetoclax plus obinutuzumab, with or without ibrutinib, is superior to chemoimmunotherapy with respect to progression-free survival,” said first author Barbara Eichhorst, MD, of the University of Cologne (Germany).

However, given higher rates of infection and other adverse events observed when adding ibrutinib, “I would say, based on this data, not to use the triple combination in clinical practice,” Dr. Eichhorst cautioned.

Dr. Eichhorst presented these late-breaking results at the European Hematology Association annual congress.

For patients considered unfit for chemoimmunotherapy, the fixed-duration therapy of venetoclax plus obinutuzumab has become standard treatment for CLL. For those deemed fit to withstand chemoimmunotherapy, this option remains the standard of care.

However, no studies have compared the targeted combination with chemoimmunotherapy for fit patients with CLL.

Dr. Eichhorst and colleagues conducted the GAIA/CLL13 trial to determine how the two- or three-drug targeted combinations stack up against standard chemoimmunotherapy for fit patients.

In the phase 3 study, 920 treatment-naive, fit patients with CLL and no TP53 aberrations were randomly assigned to one of four treatment groups of 230 patients each: standard chemoimmunotherapy or one of three time-limited venetoclax-based regimens.

The regimen for the chemoimmunotherapy group included fludarabine, cyclophosphamide, and rituximab for those aged 65 and younger, and bendamustine and rituximab for those over 65. Patients in the venetoclax arms received venetoclax plus rituximab, venetoclax plus obinutuzumab, or triple therapy with venetoclax, obinutuzumab, and ibrutinib.

The median age was 61, and follow-up was just over 3 years (38.8 months). Nearly 40% of patients were in advanced Binet stages, and more than half (56%) were of unmutated immunoglobulin heavy chain gene (IgVH) status, which is associated with worse outcomes in CLL.

Compared with chemoimmunotherapy, the two-drug combination demonstrated significantly better PFS (hazard ratio, 0.32; P < .000001), as did the triple therapy (HR, 0.42; P < .001), though the venetoclax-rituximab combination did not (HR, 0.79; P = .183).

The 3-year PFS rates were highest in the triple-therapy group (90.5%), followed by the venetoclax and obinutuzumab group (87.7%). The chemoimmunotherapy (75.5%) and venetoclax plus rituximab groups (80.8%) had the lowest 3-year PFS rates.
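
As a rough consistency check only, and not part of the study's analysis, the proportional-hazards assumption implies that an arm's survival probability at a given time is approximately the control arm's survival raised to the power of the hazard ratio. The minimal Python sketch below applies that relation to the chemoimmunotherapy group's 75.5% 3-year PFS; with the venetoclax-rituximab hazard ratio of 0.79, for example, it returns about 80%, close to the reported 80.8%. Actual Kaplan-Meier estimates will not match these approximations exactly.

# Rough consistency check: under proportional hazards,
# S_arm(t) is approximately S_control(t) ** HR at any time t.
control_pfs_3y = 0.755            # chemoimmunotherapy arm, reported above

for label, hr in [("venetoclax-rituximab (HR 0.79)", 0.79),
                  ("arm with HR 0.42", 0.42),
                  ("arm with HR 0.32", 0.32)]:
    est = control_pfs_3y ** hr
    print(f"{label}: estimated 3-year PFS ~ {est:.1%}")
# HR 0.79 gives about 80%, in line with the reported 80.8%; the lower
# hazard ratios give estimates in the high 80s to low 90s, broadly
# consistent with the 87.7% and 90.5% reported above.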

Overall, 3-year PFS rates for patients with unmutated IgVH were slightly lower, compared with those who had mutated IgVH.

The best PFS rate was among patients who received the triple therapy, although one interesting caveat emerged in the under-65 subset of patients with mutated IgVH: the chemoimmunotherapy arm achieved a slightly better PFS rate (95%) than the triple-therapy arm (93.6%).

Notably, overall survival was similar among all groups; about 96% of patients were alive at 3 years.

Several adverse events were more pronounced in the triple-therapy group. The highest rate of grade 3-4 infections was among those who received ibrutinib (22.1% vs. 20.4% for chemotherapy, 11.4% for venetoclax/rituximab, and 14.9% for venetoclax/obinutuzumab). The triple-therapy group also had the highest rate of hypertension (5.6% vs. 1.4% for chemotherapy, 2.1% for venetoclax/rituximab, and 1.8% for venetoclax/obinutuzumab).

Rates of febrile neutropenia and secondary primary malignancies, however, were highest in the chemoimmunotherapy group. More than 11% of patients in the chemoimmunotherapy group had febrile neutropenia, compared with 7.8% of those who received triple therapy, 4.2% in the venetoclax/rituximab group, and 3.1% of those who received venetoclax/obinutuzumab. Almost half of patients in the chemoimmunotherapy group had secondary primary malignancies versus fewer than 30% in the other arms.

EHA President-Elect António Almeida, MD, noted that the research sheds important light on evolving treatment options for CLL.

“The first is that the triple combination appears better than the double combinations, and I think that’s an important message because of longer treatment-free remission and progression-free remissions,” Dr. Almeida, of the Hospital da Luz, Lisbon, said in an interview.

The second important message: Given the time-limited administration of the venetoclax combinations, the data show that “we can stop ibrutinib and that is safe,” he added. “That’s quite important.”

Third, the findings can help guide treatment choices. “We’ve already had an indication that obinutuzumab is better than rituximab in the CLL setting, but this again solidifies that notion,” Dr. Almeida added.

Dr. Eichhorst has relationships with Janssen, Gilead, F. Hoffmann–La Roche, AbbVie, BeiGene, AstraZeneca, MSD, Adaptive Biotechnologies, and Hexal. Dr. Almeida disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Use of the targeted therapy combination of venetoclax plus obinutuzumab for fit patients with chronic lymphocytic leukemia (CLL) significantly improved progression-free survival (PFS) at 3 years, compared with standard chemoimmunotherapy, new phase 3 data show.

Adding the Bruton tyrosine kinase inhibitor ibrutinib to the two-drug combination pushed the 3-year PFS even higher, but the risk of severe adverse events may outweigh the benefits of the triple combination for some higher-risk patients.

“Time-limited targeted therapy with venetoclax plus obinutuzumab, with or without ibrutinib, is superior to chemoimmunotherapy with respect to progression-free survival,” said first author Barbara Eichhorst, MD, of the University of Cologne (Germany).

However, given higher rates of infection and other adverse events observed when adding ibrutinib, “I would say, based on this data, not to use the triple combination in clinical practice,” Dr. Eichhorst cautioned.

Dr. Eichhorst presented these late-breaking results at the European Hematology Association annual congress.

For patients considered unfit for chemoimmunotherapy, the fixed-duration therapy of venetoclax plus obinutuzumab has become standard treatment for CLL. For those deemed fit to withstand chemoimmunotherapy, this option remains the standard of care.

However, no studies have compared the targeted combination with chemoimmunotherapy for fit patients with CLL.

Dr. Eichhorst and colleagues conducted the GAIA/CLL13 trial to determine how the two- or three-drug targeted combinations stack up against standard chemoimmunotherapy for fit patients.

In the phase 3 study, 920 treatment-naive, fit patients with CLL in which there were no TP53 aberrations were randomly assigned to one of four treatment groups that each had 230 patients – standard chemoimmunotherapy or one of three time-limited venetoclax arms.

The regimen for the chemoimmunotherapy group included fludarabine, cyclophosphamide, and rituximab for those aged 65 and younger, and bendamustine and rituximab for those over 65. The patients who received venetoclax were divided into groups that received either venetoclax plus rituximab, venetoclax plus obinutuzumab, or triple therapy of venetoclax, obinutuzumab, and ibrutinib.

The median age was 61, and follow-up was just over 3 years (38.8 months). Nearly 40% of patients were in advanced Binet stages, and more than half (56%) were of unmutated immunoglobulin heavy chain gene (IgVH) status, which is associated with worse outcomes in CLL.

Compared with chemotherapy, the two-drug combination demonstrated significantly better PFS (hazard ratio, 0.32; P < .000001), as did the triple therapy (HR, 0.42; P < .001), though the venetoclax-rituximab combination did not (HR, 0.79; P = .183).

The 3-year PFS rates were highest in the triple-therapy group (90.5%), followed by the venetoclax and obinutuzumab group (87.7%). The chemoimmunotherapy (75.5%) and venetoclax plus rituximab groups (80.8%) had the lowest 3-year PFS rates.

Overall, 3-year PFS rates for patients with unmutated IgVH were slightly lower, compared with those who had mutated IgVH.

The best PFS rate was among patients who received the 3-drug combination, although one interesting caveat emerged among the under-65 subset of patients in the mutated IgVH group: the chemotherapy arm achieved a slightly better PFS rate (95%) compared with the triple-therapy arm (93.6%).

Notably, overall survival was similar among all groups; about 96% of patients were alive at 3 years.

Several adverse events were more pronounced in the triple-therapy group. The highest rate of grade 3-4 infections was among those who received ibrutinib (22.1% vs. 20.4% for chemotherapy, 11.4% for venetoclax/rituximab, and 14.9% for venetoclax/obinutuzumab). The triple-therapy group also had the highest rate of hypertension (5.6% vs. 1.4% for chemotherapy, 2.1% for venetoclax/rituximab, and 1.8% for venetoclax/obinutuzumab).

Rates of febrile neutropenia and secondary primary malignancies, however, were highest in the chemoimmunotherapy group. More than 11% of patients in the chemoimmunotherapy group had febrile neutropenia, compared with 7.8% of those who received triple therapy, 4.2% in the venetoclax/rituximab group, and 3.1% of those who received venetoclax/obinutuzumab. Almost half of patients in the chemoimmunotherapy group had secondary primary malignancies versus fewer than 30% in the other arms.

EHA President-Elect António Almeida, MD, noted that the research sheds important light on evolving treatment options for CLL.

“The first is that the triple combination appears better than the double combinations, and I think that’s an important message because of longer treatment-free remission and progression-free remissions,” Dr. Almeida, of the Hospital da Luz, Lisbon, said in an interview.

The second important message: Given the time-limited administration of the venetoclax combinations, the data show that “we can stop ibrutinib and that is safe,” he added. “That’s quite important.”

Third, the findings can help guide treatment choices. “We’ve already had an indication that obinutuzumab is better than rituximab in the CLL setting, but this again solidifies that notion,” Dr. Almeida added.

Dr. Eichhorst has relationships with Janssen, Gilead, F. Hoffmann–La Roche, AbbVie, BeiGene, AstraZeneca, MSD, Adaptive Biotechnologies, and Hexal. Dr. Almeida disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Precision medicine vs. antibiotic resistance

Article Type
Changed
Tue, 06/21/2022 - 08:49

Diversity is an omnipresent element in clinical practice: in the genome, in the environment, and in patients’ lifestyles and habits. Precision medicine addresses the variability of the individual to improve diagnosis and treatment, and it is increasingly used in specialties such as oncology, neurology, and cardiology. A personalized approach has many objectives, including optimizing treatment, minimizing the risk of adverse effects, facilitating early diagnosis, and determining predisposition to disease. Genomic technologies, such as high-throughput sequencing, and tools such as CRISPR-Cas9 are key to the future of personalized medicine.

Jesús Oteo Iglesias, MD, PhD, a specialist in microbiology and director of Spain’s National Center for Microbiology, spoke at the Spanish Association of Infectious Diseases and Clinical Microbiology’s recent conference. He discussed various precision medicine projects aimed at reinforcing the fight against antibiotic resistance.

Infectious diseases are complex because the diversity of the pathogenic microorganism combines with the patient’s own diversity, which influences the interaction between the two, said Dr. Oteo. Thus, the antibiogram and targeted antibiotic treatments (which are chosen according to the species, sensitivity to antimicrobials, type of infection, and patient characteristics) have been established applications of precision medicine for decades. However, multiple tools could further strengthen personalized medicine against multiresistant pathogens.

Therapeutic drug monitoring, in which multiple pharmacokinetic and pharmacodynamic factors are considered, is a strategy with great potential to increase the effectiveness of antibiotics and minimize toxicity. Because of its cost and the need for trained staff, this tool would be especially indicated for patients with more complex conditions, such as those with obesity, complex infections, or infections with multiresistant bacteria, as well as those in critical condition. Multiple computer programs are available to help determine antibiotic dosages by estimating drug exposure and providing recommendations. However, clinical trials are needed to weigh the pros and cons of extending therapeutic monitoring beyond the antibiotic classes for which it is already routinely used (for example, aminoglycosides and glycopeptides).
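
To make the exposure-estimation idea concrete, the minimal sketch below shows the kind of arithmetic such dosing programs perform: under a simple steady-state assumption, the 24-hour area under the concentration-time curve (AUC) equals the daily dose divided by the drug’s clearance, so a dose can be back-calculated from a target exposure. The drug, clearance value, target AUC, and current dose are illustrative assumptions, not clinical recommendations.

```python
# Minimal sketch of exposure-based dose individualization (illustrative only).
# Assumes a simple steady-state relationship: AUC over 24 hours = daily dose / clearance.

def auc_24h(daily_dose_mg: float, clearance_l_per_h: float) -> float:
    """Estimated steady-state 24-hour area under the curve, in mg*h/L."""
    return daily_dose_mg / clearance_l_per_h

def dose_for_target_auc(target_auc_mg_h_l: float, clearance_l_per_h: float) -> float:
    """Daily dose (mg) expected to reach the target AUC for a given clearance."""
    return target_auc_mg_h_l * clearance_l_per_h

# Hypothetical patient and target (illustrative values, not recommendations).
clearance = 4.5              # L/h, e.g., estimated from renal function and population models
current_daily_dose = 2000.0  # mg/day
target_auc = 500.0           # mg*h/L

print(f"Estimated AUC_24h on current dose: {auc_24h(current_daily_dose, clearance):.0f} mg*h/L")
print(f"Daily dose suggested for target exposure: {dose_for_target_auc(target_auc, clearance):.0f} mg")
```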

One technology that could help in antibiotic use optimization programs is microneedle-based biosensors, which could be implanted in the skin for real-time antibiotic monitoring. This tool “could be the first step in establishing automated antibiotic administration systems, with infusion pumps and feedback systems, like those already used in diabetes for insulin administration,” said Dr. Oteo.
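
As a rough illustration of the feedback principle Dr. Oteo describes, the toy simulation below couples a hypothetical sensor reading of antibiotic concentration to an infusion rate through a simple proportional controller, nudging the rate toward an assumed target level. The kinetic constants, controller gain, and target are invented for illustration; a real closed-loop system would rest on validated pharmacokinetic models and hard safety limits.

```python
# Toy closed-loop infusion simulation; all numbers are invented for illustration.
target_conc = 15.0          # desired plasma concentration (mg/L), assumed
conc = 5.0                  # starting concentration (mg/L)
infusion_rate = 50.0        # mg/h
elimination_fraction = 0.2  # fraction of drug eliminated per hour (crude toy kinetics)
volume = 30.0               # apparent volume of distribution (L), assumed
gain = 3.0                  # proportional controller gain

for hour in range(1, 13):
    # "Sensor" reads the concentration; controller nudges the infusion toward the target.
    error = target_conc - conc
    infusion_rate = max(0.0, infusion_rate + gain * error)
    # Crude hourly update: drug in from the pump, drug out by elimination.
    conc += infusion_rate / volume - elimination_fraction * conc
    print(f"hour {hour:2d}: infusion {infusion_rate:6.1f} mg/h, concentration {conc:5.1f} mg/L")
```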

Artificial intelligence could also be a valuable technology for optimization programs. “We should go a step further in the implementation of artificial intelligence through clinical decision support systems,” said Dr. Oteo. This technology would guide the administration of antimicrobials using data extracted from the electronic medical record. However, there are great challenges to overcome in creating these tools, such as the risk of entering erroneous data; the difficulty in entering complex data, such as data relevant to antibiotic resistance; and the variability at the geographic and institutional levels.
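
At its simplest, a clinical decision support system of this kind is a set of rules applied to structured data pulled from the electronic medical record. The sketch below is a hypothetical, much-simplified example; the field names, rules, and thresholds are invented and are not drawn from any real system mentioned in the article.

```python
# Toy clinical-decision-support rules over a structured EHR extract.
# Field names, rules, and thresholds are hypothetical.
def review_antibiotic_order(record: dict) -> list:
    """Return advisory flags for an antibiotic order."""
    flags = []
    if record["prescribed_antibiotic"] in set(record.get("culture_resistances", [])):
        flags.append("Isolate reported resistant to the prescribed antibiotic; consider alternatives.")
    if record["prescribed_antibiotic"] in record.get("allergies", []):
        flags.append("Documented allergy to the prescribed antibiotic.")
    if record.get("creatinine_clearance_ml_min", 120) < 30 and record.get("renally_cleared", False):
        flags.append("Reduced renal function; dose adjustment may be needed.")
    return flags

order = {
    "prescribed_antibiotic": "ciprofloxacin",
    "culture_resistances": ["ciprofloxacin", "ampicillin"],
    "allergies": ["penicillin"],
    "creatinine_clearance_ml_min": 25,
    "renally_cleared": True,
}
for flag in review_antibiotic_order(order):
    print("-", flag)
```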

Genomics is also a tool with great potential for identifying bacteria’s degree of resistance to antibiotics by studying mutations in chromosomal and acquired genes. A proof-of-concept study evaluated the sensitivity of different Pseudomonas aeruginosa strains to several antibiotics by analyzing genome sequences associated with resistance, said Dr. Oteo. The researchers found that this system was effective at predicting the sensitivity of bacteria from genomic data.

In the United States, the PATRIC bioinformatics center, which is financed by the National Institute of Allergy and Infectious Diseases, works with automated learning models to predict the antimicrobial resistance of different species of bacteria, including Staphylococcus aureus, Streptococcus pneumoniae, and Mycobacterium tuberculosis. These models, which work with genomic data associated with antibiotic resistance phenotypes, are able to identify resistance without prior knowledge of the underlying mechanisms.
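
The sketch below shows, on purely synthetic data, the general shape of such models: each isolate is encoded by the presence or absence of genetic markers (genes, mutations, or k-mers), and a classifier learns to predict a resistant-versus-susceptible phenotype directly from those features, without any mechanism being specified in advance. The data, marker indices, and model choice are illustrative assumptions and do not reproduce the PATRIC pipelines.

```python
# Illustrative sketch: predicting a resistance phenotype from genomic presence/absence features.
# All data are synthetic; this only mirrors the general approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_isolates, n_markers = 500, 200

# Binary matrix: does isolate i carry genetic marker j (gene, mutation, k-mer)?
X = rng.integers(0, 2, size=(n_isolates, n_markers))

# Synthetic ground truth: a handful of markers drive resistance, plus noise.
causal = [3, 17, 42]
logits = X[:, causal].sum(axis=1) * 1.5 - 2.0 + rng.normal(0, 0.5, n_isolates)
y = (logits > 0).astype(int)  # 1 = resistant, 0 = susceptible

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out isolates: {roc_auc_score(y_test, probs):.2f}")

# The fitted model can also be inspected for the markers it leaned on,
# even though no resistance mechanism was supplied in advance.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("Most informative markers (indices):", top.tolist())
```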

Another factor to consider in the use of precision medicine for infectious diseases is the microbiota. Dr. Oteo explained that the pathogenic microorganism interacts not only with the host but also with the host’s microbiota, “which can be diverse, abundant, and very different depending on the circumstances. These interactions can be translated into ecological and evolutionary pressures that may have clinical significance.” One of the best-known examples is the possibility that a beta-lactamase–producing bacterium benefits other bacteria around it by secreting these enzymes. Furthermore, some known forms of bacterial interaction (such as plasmid transfer) are directly related to antibiotic resistance. Metagenomics, the genetic study of communities of microbes, could provide additional information for predicting and avoiding infections by multiresistant pathogens through monitoring of the microbiome.

The CRISPR-Cas9 gene editing tool could also be an ally in the fight against antibiotic resistance by eliminating resistance genes and thus making bacteria sensitive to certain antibiotics. Several published preliminary studies indicate that this is possible in vitro. The main challenge for the clinical application of CRISPR is in introducing it into the target microbial population. Use of conjugative plasmids and bacteriophages could perhaps be an option for overcoming this obstacle in the future.

Exploiting the possibilities of precision medicine through use of the most innovative tools in addressing antibiotic resistance is a great challenge, said Dr. Oteo, but the situation demands it, and it is necessary to take small steps to achieve this goal.

A version of this article appeared on Medscape.com. This article was translated from Univadis Spain.

Metastatic lobular, ductal cancers respond similarly

Article Type
Changed
Wed, 01/04/2023 - 16:57

Metastatic invasive lobular breast cancers (ILC) that are hormone receptor (HR)-positive and HER2-negative have therapeutic outcomes similar to those of invasive ductal cancer (IDC) following treatment with endocrine therapy combined with a CDK4/6 inhibitor, mTOR inhibitor, or PI3K inhibitor, according to a new retrospective analysis of patients treated at MD Anderson Cancer Center.

The two conditions have historically been lumped together when studying treatment outcomes, but more recent research has shown key differences between the two subtypes, according to Jason A. Mouabbi, MD, who presented the results at the annual meeting of the American Society of Clinical Oncology.

“All the studies that were done were driven by ductal patients, so you can never take conclusions for the lobular patients. We have a big database at MD Anderson, so we can really study a large number of patients and get some signals whether or not patients would benefit from that therapy or not,” said Dr. Mouabbi, a lobular breast cancer specialist at MD Anderson Cancer Center.

The results of the study are important since patients often come to physicians with a sophisticated understanding of their disease, he said. Patients with lobular cancer naturally wonder if a therapeutic regimen tested primarily in IDC will benefit them. “For the longest time, we said, ‘we have no data,’ ” said Dr. Mouabbi.

The new study should offer patients and physicians some reassurance. “We found that all of them benefit from it and most importantly, they all benefit from it (with) the same magnitude,” Dr. Mouabbi said.

The researchers analyzed data from 2,971 patients (82% IDC, 14% ILC, 4% mixed) treated between 2010 and 2021. The median age was 50 in all groups. Eighty percent were White, 10% were Hispanic, and 5% were Black. Ninety-nine percent had estrogen receptor (ER)-positive tumors, and 88% had progesterone receptor (PR)-positive tumors.

A total of 1,895 patients received CDK4/6 inhibitors, 1,027 received everolimus, and 49 received alpelisib. There was no statistically significant difference in overall survival or progression-free survival between the two cancer types in any of the treatment groups.
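
For readers curious about what underlies a statement of “no statistically significant difference” in progression-free survival, the sketch below runs a standard log-rank test between two histologic groups on synthetic follow-up data, using the lifelines library. The group sizes, survival distributions, and censoring are invented and do not reproduce the MD Anderson analysis.

```python
# Illustrative log-rank comparison of PFS between two subtypes on synthetic data.
# This does not reproduce the MD Anderson analysis; it only shows the type of test involved.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Synthetic follow-up: PFS times (months) drawn from similar distributions for both groups,
# with 1 = progression observed, 0 = censored.
pfs_idc = rng.exponential(scale=30, size=300)
pfs_ilc = rng.exponential(scale=30, size=60)
event_idc = rng.integers(0, 2, size=300)
event_ilc = rng.integers(0, 2, size=60)

result = logrank_test(pfs_idc, pfs_ilc,
                      event_observed_A=event_idc,
                      event_observed_B=event_ilc)
print(f"log-rank p-value: {result.p_value:.3f}")
# A p-value above the chosen threshold (conventionally 0.05) is read as
# "no statistically significant difference in PFS between the subtypes."
```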

Despite the similar outcomes, the two conditions remain unique. IDC is a disease of cells from the ducts that deliver milk, while ILC arises in cells that produce milk. Nearly 95% of ILC cases are hormone-positive, compared to 50%-55% of IDC.

So, while existing treatments seem to benefit both groups, there are nonetheless plans to develop therapeutic strategies tailored to lobular cancer.

Dr. Mouabbi’s group has compared molecular profiles of ILC and IDC tumors to better understand how to target them individually. Almost all ILC cancers have a mutation in a gene called CDH1, which leads to loss of an anchoring protein. They believe this causes a unique growth pattern of thin tendrils, rather than the onion-like growths of IDC. A therapy targeting this mutation could provide a specific benefit for lobular breast cancer.

There are other differences: PI3 kinases are mutated in about 60% of ILC tumors, versus about 30% of IDC tumors, and other genes mutated at lower frequencies are also different between the two subtypes. “So there are a lot of (approaches) we are trying to initiate in lobular cancer because we have awareness now that they are different,” Dr. Mouabbi said.

The study received no external funding.
