Breathing easier: The growing adoption of indwelling pleural catheters
Thoracic Oncology Network
Interventional Procedures Section
The management of recurrent pleural effusions is challenging. Indwelling pleural catheters (IPCs) have become an important treatment option in patients with malignant pleural effusions (MPE), particularly those with a nonexpandable lung (Feller-Kopman DJ, et al. Am J Respir Crit Care Med. 2018;198[7]:839) and when talc pleurodesis is unsuccessful in patients with an expandable lung (Dresler CM, et al. Chest. 2005;127[3]:909).
Over the last 5 years, studies evaluating the use of IPCs in treating nonmalignant pleural disease have proliferated. These studies have shown successful treatment of pleural effusions due to end-stage renal disease, advanced heart failure (Walker SP, et al. Eur Respir J. 2022;59[2]:2101362), and cirrhosis, especially when a transjugular intrahepatic portosystemic shunt or liver transplant is not an option (Shojaee S, et al. Chest. 2019;155[3]:546). Compared with MPE, the rate of pleurodesis is generally lower, and the time to pleurodesis longer, when an IPC is used to manage nonmalignant pleural disease. Infection is the most common complication; most cases can be managed without catheter removal.
With many cited advantages, the IPC is an essential tool in the armamentarium of the chest physician and interventional radiologist. Indwelling pleural catheters have proven applications beyond MPE. When applied in a multidisciplinary fashion involving subspecialists and considering the patient’s goals, using an IPC can help achieve a crucial patient-centric goal in managing a recurrent nonmalignant pleural effusion.
Samiksha Gupta, MD
2nd Year Fellow
Sameer Kaushik Avasarala, MD
Section Member-at-Large
Early mobility in the ICU: Working with the TEAM
Critical Care Network
Nonrespiratory Critical Care Section
Early mobility is especially important for critically ill patients, in whom weakness is more common and can result in worse outcomes (Kress JP, et al. N Engl J Med. 2014;370:1626). Its use is endorsed by major societies and guidelines, like the ABCDEF bundle (Balas MC, et al. Crit Care Med. 2013;41:S116), in which “E” stands for Early mobility and exercise. In fact, the PADIS guidelines, addressing Pain, Agitation, Delirium, Immobility, and Sleep in the ICU, added Immobility and Sleep (the “I” and “S” in PADIS) to the prior PAD guidelines in the latest update in 2018 to stress the importance of early mobility in the ICU (Devlin JW, et al. Crit Care Med. 2018;46[9]:e825). Multiple studies have shown a positive impact of early mobility in the ICU on patient outcomes (Tipping CJ, et al. Intensive Care Med. 2017;43:171).
The recent TEAM study, however, examined an early mobility approach in mechanically ventilated patients and found no difference in the primary outcome of being alive and out of the hospital at 180 days (N Engl J Med. 2022;387:1747).
Before drawing conclusions, it is worth noting that the usual care arm included mobilization that was otherwise normally provided, whereas the intervention arm protocolized early mobility performed simultaneously with minimization of sedation. Patient assessment occurred in 81% of the usual care arm vs 94% of the intervention arm; both figures are much higher than previously reported ICU data (Jolley SE, et al. Crit Care Med. 2017;45:205).
Revisiting the question of early mobility in the ICU, more data are needed to clarify the best methodology, sedation, timing, amount, and type of patients who will benefit the most. Until then, it should remain a goal for ICUs and part of the daily discussion when caring for critically ill patients.
Mohammed J. Al-Jaghbeer, MBBS, FCCP
Section Member-at-Large
Salim Surani, MD, MPH, FCCP
Lebrikizumab monotherapy for AD found safe, effective during induction
The identically designed, 52-week, randomized, double-blind, placebo-controlled trials enrolled 851 adolescents and adults with moderate to severe AD and included a 16-week induction period followed by a 36-week maintenance period. At week 16, the results “show a rapid onset of action in multiple domains of the disease, such as skin clearance and itch,” wrote lead author Jonathan Silverberg, MD, PhD, director of clinical research and contact dermatitis, at George Washington University, Washington, and colleagues. “Although 16 weeks of treatment with lebrikizumab is not sufficient to assess its long-term safety, the results from the induction period of these two trials suggest a safety profile that is consistent with findings in previous trials,” they added.
Results presented at the European Academy of Dermatology and Venereology 2022 annual meeting, but not yet published, showed similar efficacy maintained through the end of the trial.
Eligible patients were randomly assigned to receive either lebrikizumab 250 mg (with a 500-mg loading dose given at baseline and at week 2) or placebo, administered subcutaneously every 2 weeks, with concomitant topical or systemic treatments prohibited through week 16 except when deemed appropriate as rescue therapy. In such cases, moderate-potency topical glucocorticoids were preferred as first-line rescue therapy, while the study drug was discontinued if systemic therapy was needed.
In both trials, the primary efficacy outcome, a score of 0 or 1 on the Investigator’s Global Assessment (IGA) with a reduction of at least 2 points from baseline at week 16, was met by more patients treated with lebrikizumab than with placebo: 43.1% vs. 12.7%, respectively, in trial 1 (P < .001) and 33.2% vs. 10.8% in trial 2 (P < .001).
Similarly, in both trials, a higher percentage of the lebrikizumab than placebo patients had an EASI-75 response (75% improvement in the Eczema Area and Severity Index score): 58.8% vs. 16.2% (P < .001) in trial 1 and 52.1% vs. 18.1% (P < .001) in trial 2.
Improvement in itch was also significantly better in patients treated with lebrikizumab, compared with placebo. This was measured by a reduction of at least 4 points in the Pruritus NRS from baseline to week 16 and a reduction in the Sleep-Loss Scale score of at least 2 points from baseline to week 16 (P < .001 for both measures in both trials).
A higher percentage of placebo than lebrikizumab patients discontinued the trials during the induction phases (14.9% vs. 7.1% in trial 1 and 11.0% vs. 7.8% in trial 2), and rescue medication use in the placebo groups was approximately three times higher in trial 1 and two times higher in trial 2.
Conjunctivitis was the most common adverse event, occurring consistently more frequently in patients treated with lebrikizumab, compared with placebo (7.4% vs. 2.8% in trial 1 and 7.5% vs. 2.1% in trial 2).
“Although several theories have been proposed for the pathogenesis of conjunctivitis in patients with atopic dermatitis treated with this class of biologic agents, the mechanism remains unclear and warrants further study,” the investigators wrote.
Asked to comment on the new results, Zelma Chiesa Fuxench, MD, who was not involved in the research, said they “continue to demonstrate the superior efficacy and favorable safety profile” of lebrikizumab in adolescents and adults and support the results of earlier phase 2 studies. “The results of these studies thus far continue to offer more hope and the possibility of a better future for our patients with atopic dermatitis who are still struggling to achieve control of their disease.”
Dr. Chiesa Fuxench from the department of dermatology at the University of Pennsylvania, Philadelphia, said she looks forward to reviewing the full study results in which patients who achieved the primary outcomes of interest were then rerandomized to either placebo, or lebrikizumab every 2 weeks or every 4 weeks for the 36-week maintenance period “because we know that there is data for other biologics in atopic dermatitis (such as tralokinumab) that demonstrate that a decrease in the frequency of injections may be possible for patients who achieve disease control after an initial 16 weeks of therapy every 2 weeks.”
The research was supported by Dermira, a wholly owned subsidiary of Eli Lilly. Dr. Silverberg disclosed he is a consultant for Dermira and Eli Lilly, as are other coauthors on the paper who additionally disclosed grants from Dermira and other relationships with Eli Lilly such as advisory board membership and having received lecture fees. Three authors are Eli Lilly employees. Dr. Chiesa Fuxench disclosed that she is a consultant for the Asthma and Allergy Foundation of America, National Eczema Association, Pfizer, Abbvie, and Incyte for which she has received honoraria for work related to AD. Dr. Chiesa Fuxench has also been a recipient of research grants from Regeneron, Sanofi, Tioga, Vanda, Menlo Therapeutics, Leo Pharma, and Eli Lilly for work related to AD as well as honoraria for continuing medical education work related to AD sponsored through educational grants from Regeneron/Sanofi and Pfizer.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Standard-of-care therapy in lung cancer: Be open to new ideas
This transcript has been edited for clarity.
I’ll focus on some important topics related to decision-making and daily practice, and the practitioners’ thoughts from the meeting.
There’s no doubt that our outcomes are better for patients, but it’s much harder to make the best choice, and I think there’s more pressure on us to do so.
Topic one was the need for next-generation sequencing (NGS) testing. I’ll put it before you that every patient needs NGS testing at the time of diagnosis. It really shouldn’t be put off. How to do that is a topic for another day, but you need NGS testing.
Moving along with this, even when you’re thinking you’re going to go down the road of a checkpoint inhibitor with chemotherapy, the recent Food and Drug Administration approval for cemiplimab and chemotherapy says that you have to make sure that patients don’t have EGFR or ALK aberrations. Now, for cemiplimab, you have to make sure they don’t have ROS1 aberrations.
You need NGS testing to find those targets and give patients a targeted therapy. Even if you want to give a checkpoint inhibitor with or without chemotherapy, you need to have NGS testing.
Second, the way to get the most comprehensive analysis of targets for which there are therapeutic avenues is to do more comprehensive NGS testing, including both DNA and RNA. Not all the panels do this right now, and you really need that RNA-based testing to find all the fusions that are druggable by the current medications that we have.
Bottom line: NGS testing should be done for everybody, and you need to do the most comprehensive panel available both for DNA and RNA.
The next topic that there was great agreement on was the emergence of antibody-drug conjugates. I think everybody’s excited. All of them have shown evidence of benefit. There are varying degrees of side effects, and we’ll learn how to deal with those. They’re new drugs, they’re here, and they’re safe.
There are a couple of things to consider, though. Number one, these drugs do carry a chemotherapy payload, and they have side effects from that chemotherapy. I think the consensus is that when you treat patients with an antibody-drug conjugate, at least for trastuzumab deruxtecan and the other deruxtecan drugs, you need to give prophylactic antiemetics as you would for a highly emetogenic chemotherapy regimen. I think that was a consensus thought.
Second, these drugs are making us rethink what it means to have the expression of the protein. I’m totally struck that for trastuzumab deruxtecan, patritumab deruxtecan, and datopotamab deruxtecan, the degree of protein expression is not particularly relevant, and these drugs can work in all patients. There have been cases clearly shown that datopotamab deruxtecan and patritumab deruxtecan both have benefit in patients with EGFR mutations after progression on osimertinib.
This idea of a need for overexpression, and maybe even the idea of testing, is being challenged now. These drugs seem to work as long as some protein is present. They don’t work in every patient, but they work in the vast majority. This thinking about overexpression with the antibody-drug conjugates is probably going to need to be reevaluated.
Last are some thoughts about our targeted therapies. Again, we have more targets. We have EGFR exon 20, for example, and more drugs for MET. I’d like to share a couple of thoughts on what the experts presented at the meeting.
First, although we have a bunch of new targeted agents for patients with EGFR-mutant cancers, probably the thing that’s going to change therapy now is adding chemotherapy to these agents. We may also use circulating tumor DNA (ctDNA) to help identify which patients would be more likely to benefit from chemotherapy with osimertinib. I see that as a trend and as a strategy that we’re likely to see move forward.
Another is in the ALK space. I know we’ve gotten very comfortable giving alectinib and brigatinib, but when you look at all the data, it points to lorlatinib perhaps being a better first-line therapy.
I think the experts thought lorlatinib would be a good drug. Yes, it has a different spectrum of side effects. The central nervous system (CNS) side effects are something we have to learn how to take care of; however, we can do that. Generally, with dose reduction, those side effects are manageable.
If you can get better outcomes in general and in patients with brain metastases, it may make some sense to displace our go-to first-line drugs, brigatinib and alectinib, with lorlatinib.
Changes in practice are happening now. There are drugs available. I urge oncologists to be open to rethinking what your standard of care is and also open to rethinking how these drugs work and to go with the data that we have.
We’re doing much better now, but the best is yet to come.
Mark G. Kris, MD, is chief of the thoracic oncology service and the William and Joy Ruane Chair in Thoracic Oncology at Memorial Sloan Kettering Cancer Center in New York City. His research interests include targeted therapies for lung cancer, multimodality therapy, the development of new anticancer drugs, and symptom management with a focus on preventing emesis. A version of this article first appeared on Medscape.com.
Second, these drugs are making us rethink what it means to have the expression of the protein. I’m totally struck that for trastuzumab deruxtecan, patritumab deruxtecan, and datopotamab deruxtecan, the degree of protein expression is not particularly relevant, and these drugs can work in all patients. There have been cases clearly shown that datopotamab deruxtecan and patritumab deruxtecan both have benefit in patients with EGFR mutations after progression on osimertinib.
This idea of a need for overexpression, and maybe even the idea of testing, is being challenged now. These drugs seem to work as long as some protein is present. They don’t work in every patient, but they work in the vast majority. This thinking about overexpression with the antibody-drug conjugates is probably going to need to be reevaluated.
Last are some thoughts about our targeted therapies. Again, we have more targets. We have EGFR exon 20, for example, and more drugs for MET. I’d like to share a couple of thoughts on what the experts presented at the meeting.
First, although we have a bunch of new targeted agents for patients with EGFR-mutant cancers, probably the thing that’s going to change therapy now is adding chemotherapy to these agents. We may also use circulating tumor DNA (ctDNA) to help guide us in identifying which patients would be more likely to benefit from adding chemotherapy to osimertinib. I see that as a trend and as a strategy that we’re likely to see move forward.
Another is in the ALK space. I know we’ve gotten very comfortable giving alectinib and brigatinib, but when you look at all the data, it points to lorlatinib perhaps being a better first-line therapy.
I think the experts thought lorlatinib would be a good drug. Yes, it has a different spectrum of side effects. The central nervous system (CNS) side effects are something we have to learn how to take care of; however, we can do that. Generally, with dose reduction, those side effects are manageable.
If you can get better outcomes in general and in patients with brain metastases, it may make some sense to displace our go-to first-line drugs, brigatinib and alectinib, with lorlatinib.
Changes in practice are happening now. There are drugs available. I urge oncologists to be open to rethinking what your standard of care is and also open to rethinking how these drugs work and to go with the data that we have.
We’re doing much better now, but the best is yet to come.
Mark G. Kris, MD, is chief of the thoracic oncology service and the William and Joy Ruane Chair in Thoracic Oncology at Memorial Sloan Kettering Cancer Center in New York City. His research interests include targeted therapies for lung cancer, multimodality therapy, the development of new anticancer drugs, and symptom management with a focus on preventing emesis. A version of this article first appeared on Medscape.com.
Solitary abdominal papule
Dermoscopy revealed an 8-mm scaly brown-black papule that lacked melanocytic features (pigment network, globules, streaks, or homogeneous blue or brown color) but had milia-like cysts and so-called “fat fingers” (short, straight to curved radial projections1). These findings were consistent with a diagnosis of seborrheic keratosis (SK).
SKs go by many names and are often confused with nevi. Some patients might know them by such names as “age spots” or “liver spots.” Patients often have many SKs on their body; the back and skin folds are common locations. Patients may be unhappy about the way they look and may describe occasional discomfort when the SKs rub against clothes and inflammation that occurs spontaneously or with trauma.
Classic SKs have a well-demarcated border and waxy, stuck-on appearance. There are times when it is difficult to distinguish between an SK and a melanocytic lesion. Thus, a biopsy may be necessary. In addition, SKs are so common that collision lesions may occur. (Collision lesions result when 2 histologically distinct neoplasms occur adjacent to each other and cause an unusual clinical appearance with features of each lesion.) The atypical clinical features in a collision lesion may prompt a biopsy to exclude malignancy.
Dermoscopic features of SKs include well-demarcated borders, milia-like cysts (white circular inclusions), comedo-like openings (brown/black circular inclusions), fissures and ridges, hairpin vessels, and fat fingers.
Cryotherapy is a quick and efficient treatment when a patient would like the lesions removed. Curettage or light electrodesiccation may be less likely to cause post-inflammatory hypopigmentation in patients with darker skin types. These various destructive therapies are often considered cosmetic and are unlikely to be covered by insurance unless there is documentation of significant inflammation or discomfort. In this case, the lesion was not treated.
Photos and text for Photo Rounds Friday courtesy of Jonathan Karnes, MD (copyright retained). Dr. Karnes is the medical director of MDFMR Dermatology Services, Augusta, ME.
1. Wang S, Rabinovitz H, Oliviero M, et al. Solar lentigines, seborrheic keratoses, and lichen planus-like keratoses. In: Marghoob A, Malvehy J, Braun, R, eds. Atlas of Dermoscopy. 2nd ed. Informa Healthcare; 2012: 58-69.
High caffeine levels may lower body fat, type 2 diabetes risks
Higher plasma caffeine levels may lower body fat and the risk of type 2 diabetes, the results of a new study suggest.
Explaining that caffeine has thermogenic effects, the researchers note that previous short-term studies have linked caffeine intake with reductions in weight and fat mass. And observational data have shown associations between coffee consumption and lower risks of type 2 diabetes and cardiovascular disease.
In an effort to isolate the effects of caffeine from those of other food and drink components, Susanna C. Larsson, PhD, of the Karolinska Institute, Stockholm, and colleagues used data from studies of mainly European populations to examine two specific genetic mutations that have been linked to a slower speed of caffeine metabolism.
The two gene variants resulted in “genetically predicted, lifelong, higher plasma caffeine concentrations,” the researchers note, “and were associated with lower body mass index and fat mass, as well as a lower risk of type 2 diabetes.”
Approximately half of the effect of caffeine on type 2 diabetes was estimated to be mediated through body mass index (BMI) reduction.
The work was published online March 14 in BMJ Medicine.
“This publication supports existing studies suggesting a link between caffeine consumption and increased fat burn,” notes Stephen Lawrence, MBChB, Warwick (England) University. “The big leap of faith that the authors have made is to assume that the weight loss brought about by increased caffeine consumption is sufficient to reduce the risk of developing type 2 diabetes,” he told the UK Science Media Centre.
“It does not, however, prove cause and effect.”
The researchers agree, noting: “Further clinical study is warranted to investigate the translational potential of these findings towards reducing the burden of metabolic disease.”
Katarina Kos, MD, PhD, a senior lecturer in diabetes and obesity at the University of Exeter (England), emphasized that this genetic study “shows links and potential health benefits for people with certain genes attributed to a faster [caffeine] metabolism as a hereditary trait and potentially a better metabolism.”
“It does not study or recommend drinking more coffee, which was not the purpose of this research,” she told the UK Science Media Centre.
Using Mendelian randomization, Dr. Larsson and colleagues examined data that came from a genomewide association meta-analysis of 9,876 individuals of European ancestry from six population-based studies.
Genetically predicted higher plasma caffeine concentrations in those carrying the two gene variants were associated with a lower BMI, with one standard deviation increase in predicted plasma caffeine corresponding to a roughly 4.8 kg/m2 lower BMI (P < .001).
For whole-body fat mass, one standard deviation increase in plasma caffeine equaled a reduction of about 9.5 kg (P < .001). However, there was no significant association with fat-free body mass (P = .17).
Genetically predicted higher plasma caffeine concentrations were also associated with a lower risk for type 2 diabetes in the FinnGen study (odds ratio, 0.77 per standard deviation increase; P < .001) and the DIAMANTE consortia (0.84, P < .001).
Combined, the odds ratio of type 2 diabetes per standard deviation of plasma caffeine increase was 0.81 (P < .001).
Dr. Larsson and colleagues calculated that approximately 43% of the protective effect of plasma caffeine on type 2 diabetes was mediated through BMI.
They did not find any strong associations between genetically predicted plasma caffeine concentrations and risk of any of the studied cardiovascular disease outcomes (ischemic heart disease, atrial fibrillation, heart failure, and stroke).
The thermogenic response to caffeine has been previously quantified as an approximate 100 kcal increase in energy expenditure per 100 mg daily caffeine intake, an amount that could result in reduced obesity risk. Another possible mechanism is enhanced satiety and suppressed energy intake with higher caffeine levels, the researchers say.
“Long-term clinical studies investigating the effect of caffeine intake on fat mass and type 2 diabetes risk are warranted,” they note. “Randomized controlled trials are warranted to assess whether noncaloric caffeine-containing beverages might play a role in reducing the risk of obesity and type 2 diabetes.”
The study was supported by the Swedish Research Council for Health, Working Life and Welfare, Swedish Heart Lung Foundation, and Swedish Research Council. Dr. Larsson, Dr. Lawrence, and Dr. Kos have reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM BMJ MEDICINE
Few women identify breast density as a breast cancer risk
Most women did not feel confident they knew what actions could mitigate breast cancer risk, leading researchers to the conclusion that comprehensive education about breast cancer risks and prevention strategies is needed.
The study was published earlier this year in JAMA Network Open.
“Forty [percent] to 50% of women who undergo mammography fall into the two highest breast density categories,” said the study’s lead author Christine Gunn, PhD, of the Dartmouth Institute for Health Policy and Clinical Practice, N.H. “Breast cancer risk increases 1.2- to 4.0-fold depending on the level of breast density. By comparison, a first-degree family history of breast cancer, particularly in premenopausal women, confers a twofold higher breast cancer risk.”
Dr. Gunn’s study is based on a survey of 2,306 women (between 40 and 76 years old) that was conducted between 2019 and 2020. The goal was to determine how well women understood cancer risks associated with dense breast tissue. The final analysis included 1,858 women (9% Asian, 27% Black, 14% Hispanic, 43% White, and 7% other race or ethnicity).
Breast density was thought to be a greater risk than not having children, drinking daily, and having had a prior breast biopsy, according to 52%, 53%, and 48% of respondents, respectively. Breast density was believed to be a lesser breast cancer risk than having a first-degree relative with breast cancer by 93% of women, and 65% of women felt it was a lesser risk than being overweight or obese.
Of the 61 women who completed follow-up interviews, only 6 described breast density as a contributing factor to breast cancer risk, and 17 did not know whether it was possible to reduce their breast cancer risk.
Doctors must notify patients in writing
Breast tissue falls under one of four categories: fatty tissue, scattered areas of dense fibroglandular tissue, many areas of glandular and connective tissue, or extremely dense tissue. The tissue is considered dense if it falls under heterogeneously dense or extremely dense, and in those cases, follow-up testing with ultrasound or MRI may be necessary. This is important, Dr. Gunn said, because dense tissue can make “it harder to find cancers because connective tissue appears white on the mammogram, potentially masking tumors.”
Prior studies have found that many clinicians are uncomfortable counseling patients on the implications of breast density and cancer risk, the authors wrote.
However, under the Mammography Quality Standards Act, which was updated on March 10, the Food and Drug Administration requires that patients be provided with a mammography report summary that “identifies whether the patient has dense or nondense breast tissue.” The report, which should be written in lay language, should also specify the “significance” of the dense tissue.
While some states mandate notification regardless of the density level, most only notify women if heterogeneously dense or extremely dense tissue has been identified, Dr. Gunn said. But the rules are inconsistent, she said. In some facilities in Massachusetts, for example, women may receive a mammography report letter and a separate breast density letter. “For some, it has been really confusing. They received a letter saying that their mammography was normal and then another one saying that they have dense breasts – resulting in a lot of uncertainty and anxiety. We don’t want to overly alarm people. We want them to understand their risk,” she said.
Breast density can be considered among other risk factors, including alcohol use, obesity, diet, parity, prior breast biopsy, and inherited unfavorable genetic mutations. “If the total lifetime risk is above 20%, that opens up further screening options, such as a breast MRI, which will catch more cancers than a breast mammogram by itself,” Dr. Gunn said.
“The challenges for physicians and patients around collecting and understanding breast density information in the context of other risk factors can potentially lead to disparities in who gets to know their risk and who doesn’t,” Dr. Gunn said. It would be possible, she speculated, to create or use existing risk calculators integrated into medical records and populated with information gathered in premammography visit questionnaires. Ideally, a radiologist could hand the patient results in real time at the end of the mammography visit, integrating risk estimates with mammography findings to make recommendations.
This study was supported by grant RSG-133017-CPHPS from the American Cancer Society.
FROM JAMA NETWORK OPEN
AI helps predict ulcerative colitis remission/activity, flare-ups
The AI tool predicted UC disease activity with 89% accuracy and inflammation at the biopsy site with 80% accuracy. Its ability to stratify risk of UC flare was on par with human pathologists.
“This tool in the near future will speed up, simplify, and standardize histological assessment of ulcerative colitis and provide the clinician with accurate prognostic information in real time,” co–lead author Marietta Iacucci, MD, PhD, from the University of Birmingham (England), and University College Cork (Ireland), said in an interview.
“The tool needs to be refined and further validated before it is ready for daily clinical practice. That work is ongoing now,” Dr. Iacucci said.
The researchers describe their advanced AI-based computer-aided detection tool in a study published online in Gastroenterology.
‘Strong’ performance
They used 535 digitized biopsies from 273 patients with UC (mean age, 48 years; 41% women) to develop and test the tool. They used a subset of 118 to train it to distinguish remission from activity, 42 to calibrate it, and 375 to test it. An additional 154 biopsies from 58 patients with UC were used to externally validate the tool.
The model also was tested to predict the corresponding endoscopic assessment and occurrence of flares at 12 months.
UC disease activity was defined by three different histologic indices: the Robarts Histopathology Index (RHI), the Nancy Histological Index (NHI), and the newly developed PICaSSO Histologic Remission Index (PHRI).
The AI tool had “strong diagnostic performance to detect disease activity” (PHRI > 0) with an overall area under the receiver operating characteristic curve of 0.87 and sensitivity and specificity of 89% and 85%, respectively.
The researchers note that, while the AI tool was trained for the PHRI, its sensitivity for RHI and NHI histologic remission/activity was also high (94% and 89%, respectively).
Despite the different mix of severity grades, the AI model “maintained a good diagnostic performance, proving its applicability outside the original development setting,” they reported.
The AI tool could also predict the presence of endoscopic inflammation in the biopsy area with about 80% accuracy.
“Though imperfect, this result is consistent with human-assessed correlation between endoscopy and histology,” the researchers noted.
The model predicted the corresponding endoscopic remission/activity with 79% and 82% accuracy for UCEIS and PICaSSO, respectively.
The hazard ratios for disease flare-up assessed by PHRI were similar for the AI system and the pathologists (4.64 and 3.56, respectively), "demonstrating the ability of the computer to stratify the risk of flare comparably well to pathologists," they added.
Both histology and outcome prediction were confirmed in the external validation cohort.
The AI system delivered results in an average of 9.8 seconds per slide.
Potential ‘game changer’
UC is a “complex condition to predict, and developing machine learning–derived systems to make this diagnostic job quicker and more accurate could be a game changer,” Dr. Iacucci said in a news release.
With refinement, the AI tool will have an impact on both clinical trials and daily practice, the researchers wrote. In clinical practice, histological reporting remains “largely descriptive and nonstandard, thus would greatly benefit from a quick and objective assessment. Similarly, clinical trials in UC could efficiently overcome costly central readings.”
Assessing and measuring improvement in endoscopy and histology are difficult parts of treating UC, said David Hudesman, MD, codirector of the Inflammatory Bowel Disease Center at New York University Langone Health.
“We do not know how much improvement is associated with improved long-term outcomes,” Dr. Hudesman said in an interview. “For example, does a patient need complete healing or is 50% better enough?” Dr. Hudesman was not involved with the current research.
“This study showed that AI can predict – with good accuracy – endoscopy and histology scores, as well as 1-year patient outcomes. If this is validated in larger studies, AI can help determine if we should adjust/change therapies or continue, which is very important,” he said.
This research was supported by the National Institute for Health Research Birmingham Biomedical Research Centre. Dr. Iacucci and Dr. Hudesman reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY
High school athletes sustaining worse injuries
High school students are injuring themselves more severely even as overall injury rates have declined, according to a new study presented at the annual meeting of the American Academy of Orthopaedic Surgeons.
The study compared injuries from a 4-year period ending in 2019 with data from 2005 and 2006. The overall rate of injuries dropped 9%, from 2.51 injuries per 1,000 athletic games or practices to 2.29 per 1,000; injuries requiring less than 1 week of recovery time fell by 13%. But the number of head and neck injuries increased by 10%, injuries requiring surgery increased by 1%, and injuries leading to medical disqualification jumped by 11%.
“It’s wonderful that the injury rate is declining,” said Jordan Neoma Pizzarro, a medical student at George Washington University, Washington, who led the study. “But the data does suggest that the injuries that are happening are worse.”
The increases may also reflect increased education and awareness of how to detect concussions and other injuries that need medical attention, said Micah Lissy, MD, MS, an orthopedic surgeon specializing in sports medicine at Michigan State University, East Lansing. Dr. Lissy cautioned against physicians and others taking the data at face value.
“We need to be implementing preventive measures wherever possible, but I think we can also consider that there may be some confounding factors in the data,” Dr. Lissy told this news organization.
Ms. Pizzarro and her team analyzed data collected from athletic trainers at 100 high schools across the country for the ongoing National Health School Sports-Related Injury Surveillance Study.
Athletes participating in sports such as football, soccer, basketball, volleyball, and softball were included in the analysis. Trainers report the number of injuries for every competition and practice, also known as "athletic exposures" (AEs).
Boys’ football carried the highest injury rate, with 3.96 injuries per 1,000 AEs, amounting to 44% of all injuries reported. Girls’ soccer and boys’ wrestling followed, with injury rates of 2.65 and 1.56, respectively.
Sprains and strains accounted for 37% of injuries, followed by concussions (21.6%). The head and/or face was the most injured body site, followed by the ankles and/or knees. Most injuries took place during competitions rather than in practices (relative risk, 3.39; 95% confidence interval, 3.28-3.49; P < .05).
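For readers unfamiliar with the rate metric, injuries per 1,000 athletic exposures is a simple ratio, where one AE is one athlete participating in one competition or practice. A minimal sketch, using made-up counts chosen only to reproduce the study's overall figure:

```python
def injury_rate_per_1000(injuries: int, exposures: int) -> float:
    """Return the injury rate per 1,000 athletic exposures (AEs)."""
    return injuries / exposures * 1000

# Hypothetical example: 50 injuries recorded over 21,834 AEs
# works out to roughly the 2.29 per 1,000 overall rate reported above.
rate = injury_rate_per_1000(50, 21_834)
print(round(rate, 2))
```

The counts here are illustrative, not from the study; only the resulting rate matches the reported figure.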
Ms. Pizzarro said that an overall increase in intensity, physical contact, and collisions may account for the spike in more severe injuries.
“Kids are encouraged to specialize in one sport early on and stick with it year-round,” she said. “They’re probably becoming more agile and better athletes, but they’re probably also getting more competitive.”
Dr. Lissy, who has worked with high school athletes as a surgeon, physical therapist, athletic trainer, and coach, said that some of the increases in severity of injuries may reflect trends in sports over the past two decades: Student athletes have become stronger and faster and have put on more muscle mass.
“When you have something that’s much larger, moving much faster and with more force, you’re going to have more force when you bump into things,” he said. “This can lead to more significant injuries.”
The study was independently supported. Study authors report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
High school students are injuring themselves more severely even as overall injury rates have declined, according to a new study presented at the annual meeting of the American Academy of Orthopaedic Surgeons.
High school students are injuring themselves more severely even as overall injury rates have declined, according to a new study presented at the annual meeting of the American Academy of Orthopaedic Surgeons.
The study compared injuries from a 4-year period ending in 2019 to data from 2005 and 2006. The overall rate of injuries dropped 9%, from 2.51 injuries per 1,000 athletic games or practices to 2.29 per 1,000; injuries requiring less than 1 week of recovery time fell by 13%. But the number of head and neck injuries increased by 10%, injuries requiring surgery increased by 1%, and injuries leading to medical disqualification jumped by 11%.
“It’s wonderful that the injury rate is declining,” said Jordan Neoma Pizzarro, a medical student at George Washington University, Washington, who led the study. “But the data does suggest that the injuries that are happening are worse.”
The increases may also reflect improved education and awareness of how to detect concussions and other injuries that need medical attention, said Micah Lissy, MD, MS, an orthopedic surgeon specializing in sports medicine at Michigan State University, East Lansing. Dr. Lissy cautioned against physicians and others taking the data at face value.
“We need to be implementing preventive measures wherever possible, but I think we can also consider that there may be some confounding factors in the data,” Dr. Lissy told this news organization.
Ms. Pizzarro and her team analyzed data collected from athletic trainers at 100 high schools across the country for the ongoing National High School Sports-Related Injury Surveillance Study.
Athletes participating in sports such as football, soccer, basketball, volleyball, and softball were included in the analysis. Trainers report the number of injuries for every competition and practice, known as "athletic exposures" (AEs).
Boys’ football carried the highest injury rate, with 3.96 injuries per 1,000 AEs, amounting to 44% of all injuries reported. Girls’ soccer and boys’ wrestling followed, with injury rates of 2.65 and 1.56, respectively.
Sprains and strains accounted for 37% of injuries, followed by concussions (21.6%). The head and/or face was the most injured body site, followed by the ankles and/or knees. Most injuries took place during competitions rather than in practices (relative risk, 3.39; 95% confidence interval, 3.28-3.49; P < .05).
Ms. Pizzarro said that an overall increase in intensity, physical contact, and collisions may account for the spike in more severe injuries.
“Kids are encouraged to specialize in one sport early on and stick with it year-round,” she said. “They’re probably becoming more agile and better athletes, but they’re probably also getting more competitive.”
Dr. Lissy, who has worked with high school athletes as a surgeon, physical therapist, athletic trainer, and coach, said that some of the increases in severity of injuries may reflect trends in sports over the past two decades: Student athletes have become stronger and faster and have put on more muscle mass.
“When you have something that’s much larger, moving much faster and with more force, you’re going to have more force when you bump into things,” he said. “This can lead to more significant injuries.”
The study was independently supported. Study authors report no relevant financial relationships.
A version of this article originally appeared on Medscape.com.
LGBTQ+ teens in homophobic high schools
I am a psychiatrist now but had another life teaching English in public high school for 17 years. My teaching life, in which I was an openly gay teacher, spanned 2001-2018 and was divided between two urban California schools – in Berkeley and San Leandro. I came out by responding honestly to student questions about whether I had a girlfriend, and what I did over the weekend. At Berkeley High my openness wasn't an issue at all. The school had had a vibrant Gay Straight Alliance (GSA) for years, and there were many openly gay staff and students. No students felt the need to come out to me in search of a gay mentor.
Two years later, I began teaching in San Leandro, 20 miles away, and it was a lesson in how even the San Francisco Bay Area, an LGBTQ+ bastion, could harbor homophobia. When I was hired in 2003, San Leandro High had one openly gay teacher, Q. I quickly realized how much braver his coming out was compared with mine in Berkeley.
In San Leandro, gay slurs were heard nonstop in the hallways, no students were out, and by the end of my first year Q had quit, confiding in me that he couldn't handle the homophobic harassment from students anymore. There was no GSA. A few years earlier, two lesbians had held hands during lunch and inspired the wrath of a group of parents who advocated for their expulsion. In response, a teacher tried to introduce gay sensitivity training into his class, and the same group of parents tried to get him fired. He was reprimanded by the principal; he countersued in a case that went all the way to the California Supreme Court, and won. Comparing these two local high schools reinforced to me how visibility really matters in creating a childhood experience that is nurturing versus traumatizing.1
Two Chinese girls in love
N and T were two Chinese girls who grew up in San Leandro. They went to the same elementary school and had crushes on each other since then. In their junior year, they joined our first student GSA, becoming president and vice-president. They were out. And, of course, they must’ve known that their families, who would not have been supportive, would become aware. I remember sitting at an outdoor concert when I got a text from N warning me her father had found out and blamed me for having corrupted her. He planned on coming to school to demand I be fired. And such was the unrelenting pressure that N and T faced every time they went home from school and sat at their dinner tables. Eventually, they broke up. They didn’t do so tearfully, but more wearily.
This story illustrates how difficult it is for love between two LGBTQ+ teens to be nurtured. Love in youth can already be volatile because of the lack of emotional regulation and experience. The questioning of identity and the threat of family disintegration at a time when these teens do not have the economic means to protect themselves makes love dangerous. It is no wonder that gay teens are at increased risk for homelessness.2
The family incident that led to the girls’ breakup reveals how culture affects homophobic pressure. N resisted her parents’ disapproval for months, but she capitulated when her father had a heart attack and blamed it on her. “And it’s true,” N confided. “After my parents found out, they were continually stressed. I could see it affect their health. And it breaks my heart to see my dad in the hospital.”
For N, she had not capitulated from fear, but perhaps because of filial piety, or one’s obligation to protect one’s parent. It was a choice between two heartbreaks. Double minorities, like N and T, face a double threat and often can find no safe place. One of my patients who is gay and Black put it best: “It’s like being beaten up at school only to come home to another beating.” This double threat is evidenced by the higher suicide risk of ethnicities who are LGBTQ+ relative to their white counterparts.3
The confusion of a gay athlete
R was a star point guard, a senior who had secured an athletic scholarship, and was recognized as the best athlete in our county. A popular boy, he flaunted his physique and flirted with all the girls. And then when he was enrolled in my class, he began flirting with all the boys, too. There was gossip that R was bisexual. Then one day, not unexpectedly, he came out to me as gay. He admitted he only flirted with girls for his reputation.
By this time many students had come out to me, but R flirted with me as he made his revelation. I corrected him and warned him unequivocally that it was inappropriate, but I was worried because I knew he had placed his trust in me. I also knew he came from a homophobic family that was violent – his father had attacked him physically at a school game, and our coaches had to pull him off.
Instinctively, I felt I had to have a witness so I confided in another teacher and documented the situation meticulously. Then, one day, just as I feared, he went too far. He stayed after class and said he wanted to show me something on his phone. And that something turned out to be a picture of himself naked. I immediately confiscated the phone and reported it to the administration. This was not how I wanted him to come out: His family notified by the police that he had sexually harassed his teacher, expulsion pending, and scholarship inevitably revoked. Fortunately, we did find a resolution that restored R’s future.
Let’s examine the circumstances that could’ve informed his transgressive behavior. If we consider sexual harassment a form of bullying, R’s history of having a father who publicly bullied him – and may have bullied others in front of him – is a known risk factor.4 It is also common knowledge that organized team sports were and still are a bastion of homophobia and that gay athletes had to accept a culture of explicit homophobia.5
So, it is not hard to understand the constant public pressures that R faced in addition to those from his family. Let’s also consider that appropriate sexual behaviors are not something we are born with, but something that is learned. Of course, inappropriate sexual behavior also happens in the heterosexual world. But heterosexual sexual behavior often has more accepted paths of trial and error. Children experiment with these behaviors and are corrected by adults and older peers as they mature.
However, for homosexual behaviors, there is usually no such fine-tuning of what is appropriate.
Summary
An educational environment where LGBTQ+ persons are highly visible and accepted is a more nurturing environment for LGBTQ+ teens than one that is not. Specific subcultures within the LGBTQ+ population involving race, culture, gender, and athletics modulate the experience of coming out and the nature of homophobic oppression.
Dr. Nguyen is a first-year psychiatry resident at the University of San Francisco School of Medicine at Fresno.
References
1. Kosciw JG et al. The effect of negative school climate on academic outcomes for LGBT youth and the role of in-school supports. J Sch Violence. 2013;12(1):45-63.
2. Center for American Progress. Gay and Transgender Youth Homelessness by the Numbers. June 21, 2010.
3. O'Donnell S et al. Increased risk of suicide attempts among Black and Latino lesbians, gay men, and bisexuals. Am J Public Health. 2011;101(6):1055-9.
4. Farrington D, Baldry A. Individual risk factors for school bullying. J Aggress Confl Peace Res. 2010 Jan;2(1):4-16.
5. Anderson E. Openly gay athletes: Contesting hegemonic masculinity in a homophobic environment. Gend Soc. 2002 Dec;16(6):860-77.