Joint symposium addresses exocrine pancreatic insufficiency
Due to its complex and individualized nature, exocrine pancreatic insufficiency (EPI) requires multidisciplinary approaches to therapy, as well as better pancreas function tests and biomarkers for diagnosis and treatment, according to a recent report in Gastro Hep Advances based on discussions during PancreasFest 2021. The researchers were led by David C. Whitcomb, MD, PhD, AGAF, emeritus professor of medicine in the division of gastroenterology, hepatology and nutrition at the University of Pittsburgh.
“This condition remains challenging even to define, and serious limitations in diagnostic testing and therapeutic options lead to clinical confusion and frequently less than optimal patient management,” the authors wrote.
EPI is clinically defined as inadequate delivery of pancreatic digestive enzymes to meet nutritional needs, which is typically based on a physician’s assessment of a patient’s maldigestion. However, there’s not a universally accepted definition or a precise threshold of reduced pancreatic digestive enzymes that indicates “pancreatic insufficiency” in an individual patient.
Current guidelines also don’t clearly outline the role of pancreatic function tests, the effects of different metabolic needs and nutrition intake, the timing of pancreatic enzyme replacement therapy (PERT), or the best practices for monitoring or titrating multiple therapies.
In response, Dr. Whitcomb and colleagues proposed a new mechanistic definition of EPI, including the disorder’s physiologic effects and impact on health. First, they said, EPI is a disorder caused by failure of the pancreas to deliver a minimum or threshold level of specific pancreatic digestive enzymes to the intestine in concert with ingested nutrients, followed by enzymatic digestion of individual meals over time to meet certain nutritional and metabolic needs. In addition, the disorder is characterized by variable deficiencies in micronutrients and macronutrients, especially essential fats and fat-soluble vitamins, as well as gastrointestinal symptoms of nutrient maldigestion.
The threshold for EPI should consider the nutritional needs of the patient, dietary intake, residual exocrine pancreas function, and the absorptive capacity of the intestine based on anatomy, mucosal function, motility, inflammation, the microbiome, and physiological adaptation, the authors wrote.
Because EPI is difficult to diagnose and its chronic symptoms, such as abdominal pain, bloating, and diarrhea, are nonspecific, several conditions may mimic EPI, be present concomitantly with EPI, or hinder the response to PERT. These include celiac disease, small intestinal bacterial overgrowth, disaccharidase deficiencies, inflammatory bowel disease (IBD), bile acid diarrhea, giardiasis, diabetes mellitus, and functional conditions such as irritable bowel syndrome. These conditions should be considered both to address underlying pathology and to avoid confounding the diagnosis and the assessment of PERT response.
Although there is consensus that exocrine pancreatic function testing (PFT) is important for diagnosing EPI, no optimal test exists, and pancreatic function is only one aspect of digestion and absorption that should be considered. PFT may be needed to make an objective EPI diagnosis in the setting of acute pancreatitis, pancreatic cancer, pancreatic resection, gastric resection, cystic fibrosis, or IBD. Either direct or indirect PFTs may be used; the choice typically differs by center.
“The medical community still awaits a clinically useful pancreas function test that is easy to perform, well tolerated by patients, and allows personalized dosing of PERT,” the authors wrote.
After diagnosis, a general assessment should include information about symptoms, nutritional status, medications, diet, and lifestyle. This information can be used for a multifaceted treatment approach, with a focus on lifestyle changes, concomitant disease treatment, optimized diet, dietary supplements, and PERT administration.
PERT remains a mainstay of EPI treatment and has shown improvements in steatorrhea, postprandial bloating and pain, nutrition, and unexplained weight loss. The Food and Drug Administration has approved several formulations in different strengths. The typical starting dose is based on age and weight, which is derived from guidelines for EPI treatment in patients with cystic fibrosis. However, the recommendations don’t consider many of the variables discussed above and simply provide an estimate for the average subject with severe EPI, so the dose should be titrated as needed based on age, weight, symptoms, and the holistic management plan.
For optimal results, regular follow-up is necessary to monitor compliance and treatment response. A reduction in symptoms can serve as a reliable indicator of effective EPI management, particularly weight stabilization, improved steatorrhea and diarrhea, and reduced postprandial bloating, pain, and flatulence. Physicians may provide patients with tracking tools to record their PERT compliance, symptom frequency, and lifestyle changes.
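Such tracking tools can be as simple as a structured diary. As a minimal sketch (not from the report; the record fields and function names are hypothetical, chosen only to illustrate the kind of adherence and symptom summaries a clinician might review at follow-up):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DailyEntry:
    """One day of a hypothetical PERT adherence and symptom diary."""
    day: date
    pert_doses_taken: int    # capsules actually taken with meals/snacks
    pert_doses_planned: int  # capsules prescribed for the day
    symptoms: list = field(default_factory=list)  # e.g. ["bloating", "diarrhea"]

def adherence_rate(entries):
    """Fraction of prescribed PERT doses actually taken over the period."""
    taken = sum(e.pert_doses_taken for e in entries)
    planned = sum(e.pert_doses_planned for e in entries)
    return taken / planned if planned else 0.0

def symptom_frequency(entries, symptom):
    """Share of recorded days on which a given symptom appeared."""
    days_with = sum(1 for e in entries if symptom in e.symptoms)
    return days_with / len(entries) if entries else 0.0
```

Summaries like these (adherence below a chosen threshold, or a rising share of symptom days) could flag patients whose dose may need titration or who warrant work-up for the concomitant conditions mentioned above.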
For patients with persistent concerns, the PERT dose can be increased as needed. Although many PERT formulations are enteric coated, adding a proton pump inhibitor or H2 receptor antagonist may improve their effectiveness. If EPI symptoms persist despite increased doses, other causes of malabsorption should be considered, such as the concomitant conditions mentioned above.
“As EPI escalates, a lower fat diet may become necessary to alleviate distressing gastrointestinal symptoms,” the authors wrote. “A close working relationship between the treating provider and the [registered dietician] is crucial so that barriers to optimum nutrient assimilation can be identified, communicated, and overcome. Frequent monitoring of the nutritional state with therapy is also imperative.”
PancreasFest 2021 received no specific funding for this event. The authors declared grant support, adviser roles, and speaking honoraria from several pharmaceutical and medical device companies and health care foundations, including the National Pancreas Foundation.
Recognition of recent advances and unaddressed gaps can clarify key issues around exocrine pancreatic insufficiency (EPI).
The loss of pancreatic digestive enzymes and bicarbonate is caused by exocrine pancreatic and proximal small intestine disease. EPI’s clinical impact has been expanded by reports that 30% of subjects can develop EPI after a bout of acute pancreatitis. Diagnosing and treating EPI challenges clinicians and investigators.
The contribution on EPI by Whitcomb and colleagues provides state-of-the-art content relating to diagnosing EPI, assessing its metabolic impact, enzyme replacement, nutritional considerations, and how to assess the effectiveness of therapy.
Though the diagnosis and treatment of EPI have been examined for over 50 years, consensus on both is still needed. Assessment of EPI with luminal tube tests and endoscopic collection of pancreatic secretions is the most accurate approach, but these tests are invasive, limited in availability, and time-consuming. Indirect assays of intestinal pancreatic enzyme activity, based on hydrolysis of substrates or stool excretion, are frequently used to diagnose EPI. However, they are insufficiently sensitive and specific to meet clinical and investigative needs.
Indeed, all tests of exocrine secretion are surrogates of unclear value for the critical endpoint of EPI, its nutritional impact. An unmet need is the development of nutritional standards for assessing EPI and measures for the adequacy of pancreatic enzyme replacement therapy. In this context, a patient’s diet, and other factors, such as the intestinal microbiome, can affect pancreatic digestive enzyme activity and must be considered in designing the best EPI treatments. The summary concludes with a thoughtful and valuable road map for moving forward.
Fred Sanford Gorelick, MD, is the Henry J. and Joan W. Binder Professor of Medicine (Digestive Diseases) and of Cell Biology at Yale School of Medicine, New Haven, Conn. He also serves as director of the Yale School of Medicine NIH T32-funded research track in gastroenterology and as deputy director of the Yale School of Medicine MD-PhD program.
Potential conflicts: Dr. Gorelick serves as chair of the NIH NIDDK data safety monitoring board for the Stent vs. Indomethacin for Preventing Post-ERCP Pancreatitis (SVI) study. He also holds grants for research on mechanisms of acute pancreatitis from the U.S. Department of Veterans Affairs and the Department of Defense.
FROM GASTRO HEP ADVANCES
Study of environmental impact of GI endoscopy finds room for improvement
CHICAGO – Gastrointestinal endoscopy units generate substantial environmental waste, according to a prospective audit presented by Madhav Desai, MD, MPH, assistant professor of medicine at the University of Minnesota, Minneapolis. About 20% of the waste, most of which went to landfills, was potentially recyclable, he said in a presentation given at the annual Digestive Disease Week® meeting.
Gastrointestinal endoscopies are critical for the screening, diagnosis, and treatment of a variety of gastrointestinal conditions. But like other medical procedures, endoscopies are a source of environmental waste, including plastic, sharps, personal protective equipment (PPE), and cleaning supplies, and they also consume substantial energy.
“This all goes back to the damage that mankind is inflicting on the environment in general, with the health care sector as one of the top contributors to plastic waste generation, landfills and water wastage,” Dr. Desai said. “Endoscopies, with their numerous benefits, substantially increase waste generation through landfill waste and liquid consumption and waste through the cleaning of endoscopes. We have a responsibility to look into this topic.”
To prospectively assess total waste generation at their institution, Dr. Desai, who was with the Kansas City (Mo.) Veterans Administration Medical Center when the research was conducted, and colleagues collected data on the items used in 450 consecutive procedures from May to June 2022. The data included procedure type, accessory use, intravenous tubing, numbers of biopsy jars, linens, PPE, and more, beginning at the point of patient entry to the endoscopy unit until discharge. They also collected data on waste generated by reprocessing after each procedure and on daily energy use (including endoscopy equipment, lights, and computers). With an eye toward finding opportunities to improve and maximize recycling, they stratified waste into three categories: biohazardous, nonbiohazardous, or potentially recyclable.
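The stratification step amounts to tallying each discarded item's mass under one of the three categories. A sketch of that bookkeeping (the category names come from the study; the record format and function names are assumptions for illustration):

```python
from collections import defaultdict

CATEGORIES = {"biohazardous", "nonbiohazardous", "potentially_recyclable"}

def tally_waste(records):
    """Sum per-category mass (kg) from (category, kg) records logged per procedure."""
    totals = defaultdict(float)
    for category, kg in records:
        if category not in CATEGORIES:
            raise ValueError(f"unknown waste category: {category}")
        totals[category] += kg
    return dict(totals)

def category_share(totals, category):
    """Fraction of total mass falling in one category."""
    total = sum(totals.values())
    return totals.get(category, 0.0) / total if total else 0.0
```

With such per-category totals in hand, an endoscopy unit can report the landfill and recyclable shares the way the study does below.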
“We found that the total waste generated during the time period was 1,398.6 kg, with more than half of it, 61.6%, going directly to landfill,” Dr. Desai said in an interview. “That’s an amount that an average family in the U.S. would use for 2 months. That’s a huge amount.”
Most waste goes to landfill
Exactly one-third was biohazard waste and 5.1% was sharps, they found. A single procedure, on average, sent 2.19 kg of waste to landfill. Extrapolated to 1 year, the waste total amounts to 9,189 kg (equivalent to just over 10 U.S. tons) and per 100 procedures to 219 kg (about 483 pounds).
They estimated that 20% of the landfill waste was potentially recyclable (such as plastic CO2 tubing, O2 connectors, and syringes), which could reduce the total landfill burden by 8.6 kg per day, or 2,580 kg per year (61 kg per 100 procedures). Reprocessing endoscopes generated 194 gallons of liquid waste (735.26 kg) per day, or 1,385 gallons per 100 procedures.
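The weight extrapolations above follow directly from the per-procedure figure and standard unit conversions. A quick check of the arithmetic (all input numbers are from the presentation; only the conversion factors are supplied here):

```python
KG_PER_LB = 0.453592    # 1 pound = 0.453592 kg
LB_PER_US_TON = 2000    # 1 US (short) ton = 2,000 lb

landfill_per_procedure_kg = 2.19
per_100_procedures_kg = landfill_per_procedure_kg * 100        # 219 kg
per_100_procedures_lb = per_100_procedures_kg / KG_PER_LB      # ~483 lb

annual_landfill_kg = 9189
annual_tons = annual_landfill_kg / KG_PER_LB / LB_PER_US_TON   # just over 10 US tons

print(round(per_100_procedures_lb), round(annual_tons, 1))  # prints: 483 10.1
```

Both reported equivalents, about 483 pounds per 100 procedures and just over 10 U.S. tons per year, check out against the kilogram figures.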
Turning to energy consumption, Dr. Desai reported that daily use in the endoscopy unit was 277.1 kWh (equivalent to the energy in 8.2 gallons of gasoline), adding up to about 1,980 kWh per 100 procedures. “That 100-procedure amount is the equivalent of the energy used for an average fuel efficiency car to travel 1,200 miles, the distance from Seattle to San Diego,” he said.
“One next step,” Dr. Desai said, “is getting help from GI societies to come together and have endoscopy units track their own performance. You need benchmarks so that you can determine how good an endoscopist you are with respect to waste.”
He commented further: “We all owe it to the environment. And, we have all witnessed what Mother Nature can do to you.”
Working on the potentially recyclable materials that account for 20% of the total waste would be a simple initial step to reduce waste going to landfills, Dr. Desai and colleagues concluded in the meeting abstract. “These data could serve as an actionable model for health systems to reduce total waste generation and move toward environmentally sustainable endoscopy units,” they wrote.
The authors reported no disclosures.
DDW is sponsored by the American Association for the Study of Liver Diseases, the American Gastroenterological Association, the American Society for Gastrointestinal Endoscopy, and The Society for Surgery of the Alimentary Tract.
CHICAGO – Madhav Desai, MD, MPH, assistant professor of medicine at the University of Minnesota, Minneapolis. About 20% of the waste, most of which went to landfills, was potentially recyclable, he said in a presentation given at the annual Digestive Disease Week® meeting.
CHICAGO – Endoscopy units generate substantial amounts of landfill, biohazard, and liquid waste, along with considerable energy use, according to Madhav Desai, MD, MPH, assistant professor of medicine at the University of Minnesota, Minneapolis. About 20% of the waste, most of which went to landfills, was potentially recyclable, he said in a presentation given at the annual Digestive Disease Week® meeting.
Gastrointestinal endoscopies are critical for the screening, diagnosis, and treatment of a variety of gastrointestinal conditions. But like other medical procedures, endoscopies are a source of environmental waste, including plastic, sharps, personal protective equipment (PPE), and cleaning supplies, and also energy waste.
“This all goes back to the damage that mankind is inflicting on the environment in general, with the health care sector as one of the top contributors to plastic waste generation, landfills and water wastage,” Dr. Desai said. “Endoscopies, with their numerous benefits, substantially increase waste generation through landfill waste and liquid consumption and waste through the cleaning of endoscopes. We have a responsibility to look into this topic.”
To prospectively assess total waste generation at their institution, Dr. Desai, who was with the Kansas City (Mo.) Veterans Administration Medical Center when the research was conducted, and his colleagues collected data on the items used in 450 consecutive procedures from May to June 2022. The data included procedure type, accessory use, intravenous tubing, numbers of biopsy jars, linens, PPE, and more, beginning at the point of patient entry to the endoscopy unit until discharge. They also collected data on waste generated by reprocessing after each procedure and on daily energy use (including endoscopy equipment, lights, and computers). With an eye toward finding opportunities to improve and maximize recycling, they stratified waste into three categories: biohazardous, nonbiohazardous, and potentially recyclable.
“We found that the total waste generated during the time period was 1,398.6 kg, with more than half of it, 61.6%, going directly to landfill,” Dr. Desai said in an interview. “That’s an amount that an average family in the U.S. would use for 2 months. That’s a huge amount.”
Most waste goes to landfill
One-third was biohazard waste and 5.1% was sharps, they found. A single procedure, on average, sent 2.19 kg of waste to landfill. Extrapolated, that amounts to 219 kg (about 483 lb) per 100 procedures, or 9,189 kg per year (just over 10 U.S. tons).
They estimated that 20% of the total waste was potentially recyclable (plastic CO2 tubing, O2 connectors, syringes, etc.), which could reduce the landfill burden by 8.6 kg per day, or 2,580 kg per year (61 kg per 100 procedures). Reprocessing endoscopes generated 194 gallons (735.26 kg) of liquid waste per day, or 1,385 gallons per 100 procedures.
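As a rough sanity check, the per-procedure landfill figure can be extrapolated with simple unit conversions. Note that the annual procedure volume used here (about 4,200) is an assumption inferred from the reported yearly total; it is not stated in the abstract.

```python
# Back-of-envelope extrapolation of the reported landfill-waste figures.
# PROCEDURES_PER_YEAR is an assumption inferred from the reported annual
# total (9,189 kg / 2.19 kg per procedure), not a figure from the abstract.

KG_PER_PROCEDURE = 2.19        # landfill waste per procedure, as reported
PROCEDURES_PER_YEAR = 4_196    # assumed annual volume

per_100_kg = KG_PER_PROCEDURE * 100                 # 219 kg per 100 procedures
per_100_lb = per_100_kg * 2.20462                   # ~483 lb
annual_kg = KG_PER_PROCEDURE * PROCEDURES_PER_YEAR  # ~9,189 kg
annual_us_tons = annual_kg / 907.185                # ~10.1 U.S. tons

print(round(per_100_kg), round(per_100_lb), round(annual_kg), round(annual_us_tons, 1))
```

The results line up with the figures reported above: 219 kg (about 483 lb) per 100 procedures and just over 10 U.S. tons per year.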
Turning to energy consumption, Dr. Desai reported that daily use in the endoscopy unit was 277.1 kWh (the energy equivalent of 8.2 gallons of gasoline), adding up to about 1,980 kWh per 100 procedures. “That 100-procedure amount is the equivalent of the energy used for an average fuel efficiency car to travel 1,200 miles, the distance from Seattle to San Diego,” he said.
“One next step,” Dr. Desai said, “is getting help from GI societies to come together and have endoscopy units track their own performance. You need benchmarks so that you can determine how good an endoscopist you are with respect to waste.”
He commented further: “We all owe it to the environment. And, we have all witnessed what Mother Nature can do to you.”
Working on the potentially recyclable materials that account for 20% of the total waste would be a simple initial step to reduce waste going to landfills, Dr. Desai and colleagues concluded in the meeting abstract. “These data could serve as an actionable model for health systems to reduce total waste generation and move toward environmentally sustainable endoscopy units,” they wrote.
The authors reported no disclosures.
DDW is sponsored by the American Association for the Study of Liver Diseases, the American Gastroenterological Association, the American Society for Gastrointestinal Endoscopy, and The Society for Surgery of the Alimentary Tract.
AT DDW 2023
The 30th-birthday gift that could save a life
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.
Milestone birthdays are always memorable – those ages when your life seems to fundamentally change somehow. Age 16: A license to drive. Age 18: You can vote to determine your own future and serve in the military. At 21, 3 years after adulthood, you are finally allowed to drink alcohol, for some reason. And then ... nothing much happens. At least until you turn 65 and become eligible for Medicare.
But imagine a future when turning 30 might be the biggest milestone birthday of all. Imagine a future when, at 30, you get your genome sequenced and doctors tell you what needs to be done to save your life.
That future may not be far off, as a new study suggests that universal genetic screening at age 30 could be a cost-effective strategy.
Getting your genome sequenced is a double-edged sword. Of course, there is the potential for substantial benefit; finding certain mutations allows for definitive therapy before it’s too late. That said, there are genetic diseases without a cure and without a treatment. Knowing about that destiny may do more harm than good.
Three conditions are described by the CDC as “Tier 1” conditions, genetic syndromes with a significant impact on life expectancy that also have definitive, effective therapies.
These include mutations like BRCA1/2, associated with a high risk for breast and ovarian cancer; mutations associated with Lynch syndrome, which confer an elevated risk for colon cancer; and mutations associated with familial hypercholesterolemia, which confer elevated risk for cardiovascular events.
In each of these cases, there is clear evidence that early intervention can save lives. Individuals at high risk for breast and ovarian cancer can get prophylactic mastectomy and salpingo-oophorectomy. Those with Lynch syndrome can get more frequent screening for colon cancer and polypectomy, and those with familial hypercholesterolemia can get aggressive lipid-lowering therapy.
I think most of us would probably want to know if we had one of these conditions. Most of us would use that information to take concrete steps to decrease our risk. But just because a rational person would choose to do something doesn’t mean it’s feasible. After all, we’re talking about tests and treatments that have significant costs.
In a recent issue of Annals of Internal Medicine, Josh Peterson and David Veenstra present a detailed accounting of the cost and benefit of a hypothetical nationwide, universal screening program for Tier 1 conditions. And in the end, it may actually be worth it.
Cost-benefit analyses work by comparing two independent policy choices: the status quo – in this case, a world in which some people get tested for these conditions, but generally only if they are at high risk based on strong family history; and an alternative policy – in this case, universal screening for these conditions starting at some age.
After that, it’s time to play the assumption game. Using the best available data, the authors estimated the percentage of the population that will have each condition, the percentage of those individuals who will definitively act on the information, and how effective those actions would be if taken.
The authors provide an example. First, they assume that the prevalence of mutations leading to a high risk for breast and ovarian cancer is around 0.7%, and that up to 40% of people who learn that they have one of these mutations would undergo prophylactic mastectomy, which would reduce the risk for breast cancer by around 94%. (I ran these numbers past my wife, a breast surgical oncologist, who agreed that they seem reasonable.)
Assumptions in place, it’s time to consider costs. The cost of the screening test itself: The authors use $250 as their average per-person cost. But we also have the cost of treatment – around $22,000 per person for a bilateral prophylactic mastectomy; the cost of statin therapy for those with familial hypercholesterolemia; or the cost of all of those colonoscopies for those with Lynch syndrome.
Finally, we assess quality of life. Obviously, living longer is generally considered better than living shorter, but marginal increases in life expectancy at the cost of quality of life might not be a rational choice.
You then churn these assumptions through a computer and see what comes out. How many dollars does it take to save one quality-adjusted life-year (QALY)? I’ll tell you right now that $50,000 per QALY used to be the unofficial standard for a “cost-effective” intervention in the United States. Researchers have more recently used $100,000 as that threshold.
Let’s look at some hard numbers.
If you screened 100,000 people at age 30 years, 1,500 would get news that something in their genetics was, more or less, a ticking time bomb. Some would choose to get definitive treatment and the authors estimate that the strategy would prevent 85 cases of cancer. You’d prevent nine heart attacks and five strokes by lowering cholesterol levels among those with familial hypercholesterolemia. Obviously, these aren’t huge numbers, but of course most people don’t have these hereditary risk factors. For your average 30-year-old, the genetic screening test will be completely uneventful, but for those 1,500 it will be life-changing, and potentially life-saving.
But is it worth it? The authors estimate that, at the midpoint of all their assumptions, the cost of this program would be $68,000 per QALY saved.
Of course, that depends on all those assumptions we talked about. Interestingly, the single factor that changes the cost-effectiveness the most in this analysis is the cost of the genetic test itself, which I guess makes sense, considering we’d be talking about testing a huge segment of the population. If the test cost $100 instead of $250, the cost per QALY would be $39,700 – well within the range that most policymakers would support. And given the rate at which the cost of genetic testing is decreasing, and the obvious economies of scale here, I think $100 per test is totally feasible.
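The sensitivity to test price can be sketched by interpolating between the two scenarios reported above ($250 per test yielding $68,000 per QALY; $100 per test yielding $39,700 per QALY). Treating the relationship as linear is an assumption for illustration, not a feature stated in the published model.

```python
# Illustrative linear interpolation between the two reported scenarios.
# Assumes cost per QALY scales linearly with test price, which is a
# simplification of the published cost-effectiveness model.

def cost_per_qaly(test_cost: float) -> float:
    """Estimated cost per QALY (USD) for a given per-person test cost."""
    slope = (68_000 - 39_700) / (250 - 100)  # ~$188.67 per QALY per $1 of test cost
    return 39_700 + slope * (test_cost - 100)

print(round(cost_per_qaly(250)))  # 68000: the reported base case
print(round(cost_per_qaly(175)))  # 53850: still above the old $50,000 threshold
```

Under this simplified reading, the test price would need to fall to roughly $150 or below before the program clears the older $50,000-per-QALY bar, which is consistent with the author's point that the test cost dominates the calculation.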
The future will bring other changes as well. Right now, there are only three hereditary conditions designated as Tier 1 by the CDC. If conditions are added, that might also swing the calculation more heavily toward benefit.
This will represent a stark change from how we think about genetic testing currently, focusing on those whose pretest probability of an abnormal result is high due to family history or other risk factors. But for the 20-year-olds out there, I wouldn’t be surprised if your 30th birthday is a bit more significant than you have been anticipating.
Dr. Wilson is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He disclosed no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Veterans Will Benefit if the VA Includes Telehealth in its Access Standards
The VA MISSION Act of 2018 expanded options for veterans to receive government-paid health care from private sector community health care practitioners. The act tasked the US Department of Veterans Affairs (VA) with developing rules that determine eligibility for outside care based on appointment wait times or distance to the nearest VA facility. As part of those standards, the VA opted not to include the availability of VA telehealth in its wait time calculations—a decision that we believe was a gross misjudgment with far-reaching consequences for veterans. Excluding telehealth from the guidelines has unnecessarily restricted veterans’ access to high-quality health care and has squandered large sums of taxpayer dollars.
The VA has reviewed its initial MISSION Act eligibility standards and proposed a correction that recognizes telehealth as a valid means of providing health care to veterans who prefer that option.1 Telehealth may not have been an essential component of health care before the COVID-19 pandemic, but now it is clear that the best action the VA can take is to swiftly enact its recommended change, stipulating that both VA telehealth and in-person health care constitute access to treatment. If implemented, this correction would save taxpayers an astronomical sum—according to a VA report to Congress, about $1.1 billion in fiscal year 2021 alone.2 The cost savings from this proposed correction is reason enough to implement it. But just as importantly, increased use of VA telehealth also means higher quality, quicker, and more convenient care for veterans.
The VA is the recognized world leader in providing telehealth that is effective, timely, and veteran centric. Veterans across the country have access to telehealth services in more than 30 specialties.3 To ensure accessibility, the VA has established partnerships with major mobile broadband carriers so that veterans can receive telehealth at home without additional charges.4 The VA project Accessing Telehealth through Local Area Stations (ATLAS) brings VA telehealth to areas where existing internet infrastructure may not be adequate to support video telehealth. ATLAS is a collaboration with private organizations, including Veterans of Foreign Wars, The American Legion, and Walmart.4 The agency also provides tablets to veterans who might not otherwise have access to telehealth, fostering higher access and patient satisfaction.4
The VA can initiate telehealth care rapidly. The “Anywhere to Anywhere” VA Health Care initiative and telecare hubs eliminate geographic constraints, allowing clinicians to provide team-based services across county and state lines to veterans’ homes and communities.
VA’s telehealth effort maximizes convenience for veterans. It reduces travel time, travel expenses, depletion of sick leave, and the need for childcare. Veterans with posttraumatic stress disorder or military sexual trauma who are triggered by traffic and waiting rooms, those with mobility issues, or those facing the stigma of mental health treatment often prefer to receive care in the familiarity of their home. Nonetheless, any veteran who desires an in-person appointment would continue to have that option under the proposed VA rule change.
VA telehealth is often used for mental health care, using the same evidence-based psychotherapies that the VA has championed and that are superior to those available in the private sector.5,6 This advantage is largely due to the VA’s rigorous training, consultation, case review, care delivery, measurement standards, and integrated care model. In a recent survey of veterans engaged in mental health care, 80% reported that VA virtual care via video and/or telephone is as helpful or more helpful than in-person services.7 And yet, because of existing regulations, VA telemental health (TMH) does not qualify as access, resulting in hundreds of thousands of TMH visits being outsourced yearly to community practitioners that could be quickly and beneficially furnished by VA clinicians.
Telehealth has been shown to be as clinically effective as in-person care. A recent review of 38 meta-analyses covering telehealth across 10 medical disciplines found that, for all disciplines, telehealth was as effective, if not more so, than conventional care.8 And because no-show rates are lower for telehealth appointments than for in-person appointments, continuity of care is maintained and health care outcomes are improved.
Telehealth is health care. The VA must end the double standard that has handicapped it from including telehealth availability in determinations of eligibility for community care. The VA has voiced its intention to seek stakeholder input before implementing its proposed correction. The change is long overdue. It will save the VA a billion dollars annually while ensuring that veterans have quicker access to better treatment.
1 McDonough D. Statement of the honorable Denis McDonough Secretary of Veterans Affairs Department of Veterans Affairs (VA) before the Committee on Veterans’ Affairs United States Senate on veterans access to care. 117th Cong, 2nd Sess. September 21, 2022. Accessed May 8, 2023. https://www.veterans.senate.gov/2022/9/ensuring-veterans-timely-access-to-care-in-va-and-the-community/63b521ff-d308-449a-b3a3-918f4badb805
2 US Department of Veterans Affairs, Congressionally mandated report: access to care standards. September 2022.
3 US Department of Veterans Affairs. VA Secretary Press Conference, Thursday March 2, 2023. Accessed May 8, 2023. https://www.youtube.com/watch?v=WnkNl2whPoQ
4 US Department of Veterans Affairs, VA Telehealth: bridging the digital divide. Accessed May 8, 2023. https://telehealth.va.gov/digital-divide
5 Rand Corporation. Improving the Quality of Mental Health Care for Veterans: Lessons from RAND Research. Santa Monica, CA: RAND Corporation, 2019. https://www.rand.org/pubs/research_briefs/RB10087.html.
6 Lemle, R. Choice program expansion jeopardizes high-quality VHA mental health services. Federal Pract. 2018:35(3):18-24. [link to: https://www.mdedge.com/fedprac/article/159219/mental-health/choice-program-expansion-jeopardizes-high-quality-vha-mental
7 Campbell TM. Overview of the state of mental health care services in the VHA health care system. Presentation to the National Academies’ improving access to high-quality mental health care for veterans: a workshop. April 20, 2023. Accessed May 8, 2023. https://www.nationalacademies.org/documents/embed/link/LF2255DA3DD1C41C0A42D3BEF0989ACAECE3053A6A9B/file/D2C4B73BA6FFCAA81E6C4FC7C57020A5BA54376245AD?noSaveAs=1
8 Snoswell CL, Chelberg G, De Guzman KR, et al. The clinical effectiveness of telehealth: A systematic review of meta-analyses from 2010 to 2019. J Telemed Telecare. 2021;1357633X211022907. doi:10.1177/1357633X211022907
The VA MISSION Act of 2018 expanded options for veterans to receive government-paid health care from private sector community health care practitioners. The act tasked the US Department of Veterans Affairs (VA) with developing rules that determine eligibility for outside care based on appointment wait times or distance to the nearest VA facility. As part of those standards, VA opted not to include the availability of VA telehealth in its wait time calculations, a decision that we believe was a gross misjudgment with far-reaching consequences for veterans. Excluding telehealth from the guidelines has unnecessarily restricted veterans’ access to high-quality health care and has squandered large sums of taxpayer dollars.
The VA has reviewed its initial MISSION Act eligibility standards and proposed a correction that recognizes telehealth as a valid means of providing health care to veterans who prefer that option.1 Telehealth may not have been an essential component of health care before the COVID-19 pandemic, but now it is clear that the best action VA can take is to swiftly enact its recommended change, stipulating that both VA telehealth and in-person health care constitute access to treatment. If implemented, this correction would save taxpayers an astronomical sum: according to a VA report to Congress, about $1.1 billion in fiscal year 2021 alone.2 The cost savings from this proposed correction are reason enough to implement it. But just as importantly, increased use of VA telehealth also means higher quality, quicker, and more convenient care for veterans.
The VA is the recognized world leader in providing telehealth that is effective, timely, and veteran centric. Veterans across the country have access to telehealth services in more than 30 specialties.3 To ensure accessibility, the VA has established partnerships with major mobile broadband carriers so that veterans can receive telehealth at home without additional charges.4 The VA project Accessing Telehealth through Local Area Stations (ATLAS) brings VA telehealth to areas where existing internet infrastructure may not be adequate to support video telehealth. ATLAS is a collaboration with private organizations, including Veterans of Foreign Wars, The American Legion, and Walmart.4 The agency also provides tablets to veterans who might not otherwise have access to telehealth, improving both access and patient satisfaction.4
The VA can initiate telehealth care rapidly. The “Anywhere to Anywhere” VA Health Care initiative and telecare hubs eliminate geographic constraints, allowing clinicians to provide team-based services across county and state lines to veterans’ homes and communities.
VA’s telehealth effort maximizes convenience for veterans. It reduces travel time, travel expenses, depletion of sick leave, and the need for childcare. Veterans with posttraumatic stress disorder or military sexual trauma who are triggered by traffic and waiting rooms, those with mobility issues, or those facing the stigma of mental health treatment often prefer to receive care in the familiarity of their home. Nonetheless, any veteran who desires an in-person appointment would continue to have that option under the proposed VA rule change.
VA telehealth is often used for mental health care, employing the same evidence-based psychotherapies that VA has championed and that are superior to those available in the private sector.5,6 This advantage is largely due to VA’s rigorous training, consultation, case review, care delivery, measurement standards, and integrated care model. In a recent survey of veterans engaged in mental health care, 80% reported that VA virtual care via video and/or telephone is as helpful or more helpful than in-person services.7 And yet, because of existing regulations, VA telemental health (TMH) does not qualify as access, so hundreds of thousands of TMH visits that VA clinicians could quickly and beneficially furnish are instead outsourced yearly to community practitioners.
Telehealth has been shown to be as clinically effective as in-person care. A recent review of 38 meta-analyses spanning 10 medical disciplines found that, in every discipline, telehealth was as effective as, if not more effective than, conventional care.8 And because patients are less likely to miss telehealth appointments than in-person appointments, continuity of care is better maintained and health care outcomes improve.
Telehealth is health care. The VA must end the double standard that has barred it from counting telehealth availability in determinations of eligibility for community care. The VA has voiced its intention to seek stakeholder input before implementing its proposed correction. The change is long overdue. It will save the VA roughly a billion dollars annually while ensuring that veterans have quicker access to better treatment.
1 McDonough D. Statement of the Honorable Denis McDonough, Secretary of Veterans Affairs, before the Committee on Veterans’ Affairs, United States Senate, on veterans’ access to care. 117th Cong, 2nd Sess. September 21, 2022. Accessed May 8, 2023. https://www.veterans.senate.gov/2022/9/ensuring-veterans-timely-access-to-care-in-va-and-the-community/63b521ff-d308-449a-b3a3-918f4badb805
2 US Department of Veterans Affairs. Congressionally mandated report: access to care standards. September 2022.
3 US Department of Veterans Affairs. VA Secretary press conference. March 2, 2023. Accessed May 8, 2023. https://www.youtube.com/watch?v=WnkNl2whPoQ
4 US Department of Veterans Affairs. VA telehealth: bridging the digital divide. Accessed May 8, 2023. https://telehealth.va.gov/digital-divide
5 RAND Corporation. Improving the quality of mental health care for veterans: lessons from RAND research. Santa Monica, CA: RAND Corporation; 2019. https://www.rand.org/pubs/research_briefs/RB10087.html
6 Lemle R. Choice program expansion jeopardizes high-quality VHA mental health services. Fed Pract. 2018;35(3):18-24. https://www.mdedge.com/fedprac/article/159219/mental-health/choice-program-expansion-jeopardizes-high-quality-vha-mental
7 Campbell TM. Overview of the state of mental health care services in the VHA health care system. Presentation to the National Academies’ workshop, Improving Access to High-Quality Mental Health Care for Veterans. April 20, 2023. Accessed May 8, 2023. https://www.nationalacademies.org/documents/embed/link/LF2255DA3DD1C41C0A42D3BEF0989ACAECE3053A6A9B/file/D2C4B73BA6FFCAA81E6C4FC7C57020A5BA54376245AD?noSaveAs=1
8 Snoswell CL, Chelberg G, De Guzman KR, et al. The clinical effectiveness of telehealth: a systematic review of meta-analyses from 2010 to 2019. J Telemed Telecare. 2021;1357633X211022907. doi:10.1177/1357633X211022907
AHA flags differing CVD risk in Asian American subgroups
Asian Americans have significant differences in genetics, socioeconomic factors, culture, diet, lifestyle, and acculturation levels based on the Asian region of their ancestry that likely have unique effects on their risk for type 2 diabetes and heart disease, the statement noted.
“Examining Asian subgroups separately is crucial to better understand the distinctions among them, how these differences translate into their risk of type 2 diabetes and atherosclerotic disease, and how health care professionals may provide care and support in a culturally appropriate manner,” writing group chair Tak W. Kwan, MD, chief of cardiology, Lenox Health Greenwich Village, and clinical professor of medicine, Northwell Health, New York City, said in a news release.
The statement was published online in the journal Circulation.
Impact on health outcomes
Asian American subgroups are broadly categorized by the geographic region of Asian descent and include South Asia (India, Pakistan, Sri Lanka, Bangladesh, Nepal, or Bhutan); East Asia (Japan, China, or Korea); Southeast Asia (Philippines, Vietnam, Thailand, Cambodia, Laos, Indonesia, Malaysia, Singapore, Hmong); and Native Hawaiian/Pacific Islander (Hawaii, Guam, Samoa, or other Pacific islands).
Asian Americans make up the fastest growing racial and ethnic group in the United States. Together, type 2 diabetes (T2D) and atherosclerotic cardiovascular disease (ASCVD) are the leading causes of illness and death among Asian American adults.
Yet, there is significant variability in prevalence and risk factors within the different subgroups, the writing group pointed out.
For example, based on available data, rates of coronary artery disease (CAD) among Asian Americans indicate an overall prevalence of 8% in men and about 3% in women.
However, available data for subgroups suggest higher CAD rates among Asian Indian Americans (13% for men and 4.4% for women) and Filipino Americans (about 9% and 4%, respectively).
Available data on T2D among Asian American subgroups also show varied prevalence and risk.
A study from California found that, overall, Asian American adults had higher rates of T2D (range, 15.6%-34.5%) than non-Hispanic White adults (12.8%). Among Chinese Americans, the rate was 15.8%; among Korean and Japanese Americans, rates were about 18%; and among Americans with Filipino ancestry, the rate was nearly 32%.
Yet most studies to date aggregate Asian Americans in a single group and do not examine the subgroups individually, which is a challenge to providing evidence-based recommendations, the writing group said.
“Particular attention should focus on the T2D and ASCVD risk differences among the different Asian American subgroups because they may affect the precision in clinical and health outcomes,” the group suggested.
“Culturally specific recommendations and interventions across the different Asian American subgroups related to T2D and ASCVD will help improve primary and secondary prevention and health outcomes in this population,” they added.
The writing group noted that existing CVD risk calculators, which are based on data validated in non-Hispanic Black adults and non-Hispanic White adults and less extensively studied in Asian Americans, may underestimate the risk of T2D and heart disease in South Asian adults, those of lower socioeconomic status, or those with chronic inflammatory diseases.
On the other hand, these tools may overestimate CVD risk among East Asian adults, those with higher socioeconomic status, or those already participating in preventive health care services.
Advances in epidemiology and data analysis and the availability of larger, representative cohorts will allow for refinement of pooled cohort equations to better gauge ASCVD risk in Asian American subgroups, the group said.
Filling in the gaps
The writing group outlined several key areas to consider for strengthening the data about Asian American adults. Chief among them is the need to include disaggregated data on Asian American subgroups in clinical trials and government-sponsored studies.
Another is to standardize ways of collecting ethnic and subgroup data for Asian Americans for national health systems, surveys, and registries. National surveillance surveys should consider oversampling Asian Americans to increase representation for the various subgroups, the writing group suggested.
“All of us – health care professionals, policymakers, community leaders and patients – must advocate for more health research funding for Asian Americans and demand inclusion of Asian American subgroup information in clinical trials and government-sponsored research,” Dr. Kwan said.
“Having a platform to share and disseminate data on Asian Americans for the scientific and research community would also be an asset for the health care professionals who care for this population,” Dr. Kwan added.
The new scientific statement is a follow-up to a 2010 AHA “call to action” to seek data on health disparities among Asian American subgroups and a 2018 scientific statement addressing CVD risk in South Asians (Asian Indian, Pakistani, Sri Lankan, Bangladeshi, Nepali, or Bhutanese).
This scientific statement was prepared by the volunteer writing group on behalf of the AHA Council on Epidemiology and Prevention; the Council on Lifestyle and Cardiometabolic Health; the Council on Arteriosclerosis, Thrombosis and Vascular Biology; the Council on Clinical Cardiology; the Council on Cardiovascular and Stroke Nursing; and the Council on Genomic and Precision Medicine.
A version of this article first appeared on Medscape.com.
FROM CIRCULATION
Medications that scare me
An 85-year-old woman is brought to the emergency department after a syncopal episode. Her caregivers report a similar episode 2 weeks ago, but she recovered so quickly they did not seek evaluation for her.
Medications: Omeprazole 20 mg, pravastatin 40 mg, citalopram 10 mg, albuterol, donepezil 10 mg, isosorbide mononitrate 60 mg, and calcium. On exam, blood pressure is 100/60 mm Hg, pulse 55. ECG indicates bradycardia with normal intervals. What drug most likely caused her syncope?
A. Citalopram
B. Pravastatin
C. Donepezil
D. Isosorbide
E. Calcium
This woman’s syncope is likely caused by donepezil. Citalopram can lengthen the QT interval, especially in elderly patients, but the normal intervals on ECG eliminate this possibility. Donepezil can cause bradycardia, which can contribute to syncope.
Hernandez and colleagues evaluated a cohort of veterans with dementia over an 8-year period.1 They found a 1.4-fold increased risk of bradycardia in patients with dementia treated with an acetylcholinesterase inhibitor, compared with patients not taking these medications, and a dose-dependent increase in risk for patients on donepezil.
Park-Wyllie et al. found in a study of 1.4 million older adults a greater than twofold risk of hospitalization for bradycardia in patients treated with a cholinesterase inhibitor.2 Gill and colleagues performed a population-based cohort study of 19,803 elderly patients with dementia who were prescribed cholinesterase inhibitors, and compared them with age-matched controls.3 They found increased hospital visits for syncope in people receiving cholinesterase inhibitors (hazard ratio, 1.76; 95% confidence interval, 1.57-1.98). Other syncope-related events were also more common in people receiving cholinesterase inhibitors, compared with controls: hospital visits for bradycardia (HR, 1.69; 95% CI, 1.32-2.15), permanent pacemaker insertion (HR, 1.49; 95% CI, 1.12-2.00), and hip fracture (HR, 1.18; 95% CI, 1.04-1.34).
Nausea, vomiting, and weight loss are much more common than the rarer side effects of bradycardia and syncope; the frequency of gastrointestinal side effects is up to 25%. Cholinesterase inhibitors have modest effects on cognitive function, with a high number needed to treat (NNT) of 10, and an NNT as high as 100 for global function. The number needed to harm (NNH) is 4 when gastrointestinal symptoms are included.4 Another important, problematic side effect of cholinesterase inhibitors is urinary incontinence, which often leads to patients receiving additional medications to combat it, medications that may themselves worsen cognitive function.
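The NNT and NNH figures above are simply reciprocals of absolute risk differences. A minimal sketch of that arithmetic, using hypothetical event rates rather than figures from the cited trials:

```python
# NNT and NNH are reciprocals of the absolute risk difference between
# treated and control groups. All rates below are hypothetical.

def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

def nnh(control_harm_rate: float, treated_harm_rate: float) -> float:
    """Number needed to harm: reciprocal of the absolute risk increase."""
    ari = treated_harm_rate - control_harm_rate
    if ari <= 0:
        raise ValueError("no absolute risk increase")
    return 1.0 / ari

# An NNT of 10 corresponds to a 10-percentage-point absolute benefit;
# an NNH of 4 corresponds to a 25-percentage-point absolute increase in harms.
print(round(nnt(0.50, 0.40)))  # hypothetical rates; NNT of about 10
print(round(nnh(0.05, 0.30)))  # hypothetical rates; NNH of about 4
```

Put plainly, an NNT of 10 means roughly one additional patient benefits for every 10 treated, while an NNH of 4 means roughly one additional patient is harmed for every 4 treated.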
Another commonly used medication that scares me in certain circumstances is trimethoprim-sulfamethoxazole. My main concern is when it is used in patients who are elderly, have chronic kidney disease, or are taking other medications that can cause hyperkalemia (ACEIs, ARBs, potassium-sparing diuretics including spironolactone). Hyperkalemia is a real concern in these patient populations. Trimethoprim reduces renal potassium excretion through the competitive inhibition of sodium channels in the distal nephron, in a manner similar to the potassium-sparing diuretic amiloride. Hospitalizations for hyperkalemia are more common in patients who take ACEIs and ARBs and are prescribed trimethoprim-sulfamethoxazole, compared with other antibiotics.5
Sudden cardiac death is also more common in patients who are taking ACEIs or ARBs and receive trimethoprim-sulfamethoxazole.6 Trimethoprim-sulfamethoxazole also has a powerful interaction with warfarin, both displacing warfarin from albumin and inhibiting its metabolism. It raises the INR (international normalized ratio) in warfarin-treated patients far more than other antibiotics do.7
Pearls
- Think carefully about the use of cholinesterase inhibitors because of the unfavorable NNH vs. NNT.
- Use caution prescribing trimethoprim for patients who are elderly, especially if they are on an ACEI, an ARB, or spironolactone, and in patients with chronic kidney disease.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at dpaauw@uw.edu.
References
1. Hernandez RK et al. J Am Geriatr Soc. 2009;57:1997-2003.
2. Park-Wyllie LY et al. PLoS Med. 2009;6:e1000157.
3. Gill SS et al. Arch Intern Med. 2009;169:867-73.
4. Peters KR. J Am Geriatr Soc. 2013 Jul;61(7):1170-4.
5. Antoniou TN et al. Arch Intern Med. 2010;170(12):1045-9.
6. Fralick M et al. BMJ. 2014 Oct 30;349:g6196.
7. Glasheen JJ et al. J Gen Intern Med. 2005 Jul;20(7):653-6.
Young men at highest schizophrenia risk from cannabis abuse
A new study confirms the robust link between cannabis use and schizophrenia among men and women but suggests that young men may be especially susceptible to schizophrenia from cannabis abuse.
“The entanglement of substance use disorders and mental illnesses is a major public health issue, requiring urgent action and support for people who need it,” study coauthor Nora Volkow, MD, director of the National Institute on Drug Abuse, said in a news release.
“As access to potent cannabis products continues to expand, it is crucial that we also expand prevention, screening, and treatment for people who may experience mental illnesses associated with cannabis use,” Dr. Volkow added.
The study was published online in Psychological Medicine.
A modifiable risk factor
The researchers analyzed Danish registry data spanning 5 decades and representing more than 6.9 million people in Denmark to estimate the population-level percentage of schizophrenia cases attributable to cannabis use disorder (CUD).
A total of 60,563 participants were diagnosed with CUD. Three-quarters of cases were in men; there were 45,327 incident cases of schizophrenia during the study period.
The overall adjusted hazard ratio for CUD on schizophrenia was slightly higher among males than females (aHR, 2.42 vs. 2.02); however, among those aged 16-20 years, the adjusted incidence rate ratio for males was more than twice that for females (aIRR, 3.84 vs. 1.81).
The researchers estimate that, in 2021, about 15% of schizophrenia cases among males aged 16-49 could have been avoided by preventing CUD, compared with 4% among females in this age range.
For young men aged 21-30, the proportion of preventable schizophrenia cases related to CUD may be as high as 30%, the authors reported.
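Population-level estimates like these 15% and 30% figures are population attributable fractions, which combine the relative risk of an exposure with its prevalence. A hedged sketch using Levin's classic formula, with illustrative inputs rather than the study's actual registry data:

```python
def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)),
    where p is the prevalence of exposure and RR the relative risk.
    Gives the fraction of cases that would not occur absent the exposure,
    assuming the association is causal."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative only: with ~10% exposure prevalence and a relative risk near 5,
# roughly 29% of cases would be attributable to the exposure.
paf = population_attributable_fraction(0.10, 5.0)
print(f"{paf:.0%}")
```

Note the caveat built into the formula: a PAF only translates into "preventable cases" under the assumption of causality, which is exactly the point Dr. Hjorthøj hedges on below.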
“Alongside the increasing evidence that CUD is a modifiable risk factor for schizophrenia, our findings underscore the importance of evidence-based strategies to regulate cannabis use and to effectively prevent, screen for, and treat CUD as well as schizophrenia,” the researchers wrote.
Legalization sends the wrong message
In a press statement, lead investigator Carsten Hjorthøj, PhD, with the University of Copenhagen, noted that “increases in the legalization of cannabis over the past few decades have made it one of the most frequently used psychoactive substances in the world, while also decreasing the public’s perception of its harm. This study adds to our growing understanding that cannabis use is not harmless, and that risks are not fixed at one point in time.”
In a prior study, Dr. Hjorthøj and colleagues found that the proportion of new schizophrenia cases attributable to CUD has consistently increased over the past 20 years.
“In my view, the association is most likely causative, at least to a large extent,” Dr. Hjorthøj said at the time this research was published.
“It is of course nearly impossible to use epidemiological studies to actually prove causation, but all the numbers behave exactly in the way that would be expected under the theory of causation,” Dr. Hjorthøj added.
The study received no specific funding. The authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM PSYCHOLOGICAL MEDICINE
Helmet interface for ventilation likely superior in acute hypoxemic respiratory failure
For adults with acute hypoxemic respiratory failure (AHRF), treatment using a helmet interface is likely superior to a face mask interface, according to a systematic review of recent randomized controlled trials examining different noninvasive oxygenation strategies for AHRF treatment.
The COVID-19 pandemic has underscored the benefits of optimizing noninvasive strategies to avoid unnecessary intubation. Intubation may be avoided in patients with AHRF through noninvasive oxygenation strategies, including high-flow nasal cannula (HFNC), continuous positive airway pressure (CPAP), and noninvasive bilevel ventilation, noted Tyler Pitre, MD, department of medicine, McMaster University, Hamilton, Ont., and colleagues. CPAP and bilevel ventilation can be delivered through different interfaces, most commonly face mask or helmet.
While research has shown noninvasive strategies to be associated with reductions in risk for invasive mechanical ventilation, mortality assessments and analyses comparing specific modalities (i.e., CPAP vs. bilevel ventilation) have been limited. The incremental reduction in diaphragmatic effort and improved gas exchange demonstrated for bilevel ventilation compared with CPAP in COPD patients suggest that responses in AHRF may differ for CPAP and bilevel ventilation, stated Dr. Pitre and colleagues. On the other hand, the increased drive pressure of bilevel ventilation may compound patient self-induced lung injury, with concomitant lung inflammation and need for prolonged respiratory support.
New evidence from several large, high-quality randomized controlled trials (RCTs) in COVID-19-related AHRF offered an opportunity to reassess comparative efficacies, the researchers noted.
The systematic review encompassed 36 RCTs covering all types of AHRF, including COVID-19-related disease, for a total of 7,046 patients with a median age of 59.4 years (61.4% were male). Thirty of the 36 RCTs reported on mortality (6,114 patients and 1,539 deaths). The analysis showed with moderate certainty that helmet CPAP reduces mortality compared with standard oxygen therapy (SOT) (231 fewer deaths per 1,000 [95% confidence interval (CI), 126-273 fewer]), and with low certainty that HFNC may reduce mortality compared with SOT (63 fewer deaths per 1,000 [95% CI, 15-102 fewer]). The analysis also showed that face mask bilevel ventilation (36 fewer deaths per 1,000 [84 fewer to 24 more]) and helmet bilevel ventilation (129 fewer deaths per 1,000 [195 to 24 fewer]) may reduce death compared with SOT (both low certainty). The mortality benefit of face mask CPAP compared with SOT was uncertain (9 fewer deaths per 1,000 [81 fewer to 84 more]; very low certainty). For helmet CPAP vs. HFNC, the mortality benefit had moderate certainty (198.1 fewer events per 1,000 [95% CI, 69.75-248.31 fewer]).
Mechanical ventilation and ICU duration
The authors found that HFNC probably reduces the need for invasive mechanical ventilation (103.5 fewer events per 1,000 [40.5-157.5 fewer]; moderate certainty). Helmet bilevel ventilation and helmet CPAP may reduce the duration of ICU stay compared with SOT (both low certainty), by 4.84 fewer days (95% CI, 2.33-7.36 fewer) and 1.74 fewer days (95% CI, 4.49 fewer to 1.01 more), respectively. Also, SOT may be more comfortable than face mask noninvasive ventilation (NIV) and no different in comfort compared with HFNC (both low certainty).
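Absolute effect sizes expressed as events per 1,000 patients, as in this meta-analysis, are typically derived by applying a pooled relative effect to an assumed baseline (control-group) risk. A minimal illustration of that conversion, with hypothetical numbers rather than the study's pooled estimates:

```python
def fewer_events_per_1000(baseline_risk: float, relative_risk: float) -> float:
    """Absolute risk difference per 1,000 patients implied by applying a
    relative risk to a baseline (control-group) event risk.
    Positive values mean fewer events with the intervention."""
    return 1000.0 * baseline_risk * (1.0 - relative_risk)

# Hypothetical: a 25% baseline mortality and a relative risk of 0.6 imply
# roughly 100 fewer deaths per 1,000 treated patients.
print(round(fewer_events_per_1000(0.25, 0.6)))
```

This is why the same relative effect can look larger or smaller in absolute terms across analyses: the per-1,000 figure scales with the baseline risk chosen for the calculation.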
“Helmet noninvasive ventilation interfaces are probably effective in acute hypoxic respiratory failure and are superior to face mask interfaces. All modalities including HFNC probably reduce the risk of need for invasive mechanical ventilation,” the researchers wrote.
“This meta-analysis shows that helmet noninvasive ventilation is effective in reducing death, and need for invasive mechanical ventilation based on a moderate certainty of evidence,” Shyamsunder Subramanian, MD, chief, division of pulmonary critical care and sleep medicine, Sutter Health, Tracy, Calif., said in an interview. “It is premature based on the results of this meta-analysis to conclude that guideline changes are needed. Use of helmet based ventilation remains limited in scope. We need appropriately designed prospective trials across multiple centers to get sufficient rigor of scientific evidence before any change in guidelines or practice recommendations can be formulated about the appropriate use of helmet NIV in acute respiratory failure.”
The researchers cited the relative heterogeneity of the population included in this analysis as a study limitation.
Dr. Pitre and Dr. Subramanian disclosed that they have no relevant conflicts of interest.
The systematic review encompassed RCTs with all types of AHRF, including COVID-19 related, with a total of 7,046 patients whose median age was 59.4 years (61.4% were male). Thirty of the 36 RCTs reported on mortality (6,114 patients and 1,539 deaths). The analysis showed with moderate certainty that helmet CPAP reduces mortality compared with standard oxygen therapy (SOT) (231 fewer deaths per 1,000 [95% confidence interval (CI), 126-273 fewer]) and with low certainty that HFNC may reduce mortality (63 fewer deaths per 1,000 [95% CI, 15-102 fewer]). The analysis also showed that face mask bilevel ventilation (36 fewer deaths per 1,000 [84 fewer to 24 more]) and helmet bilevel ventilation (129 fewer deaths per 1,000 [195 to 24 fewer]) may reduce death compared with SOT (both low certainty). The mortality benefit of face mask CPAP compared with SOT was uncertain (9 fewer deaths per 1,000 [81 fewer to 84 more]; very low certainty). For helmet CPAP vs. HFNC, the mortality benefit had moderate certainty (198.1 fewer deaths per 1,000 [95% CI, 69.75-248.31 fewer]).
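The "fewer deaths per 1,000" figures above are absolute risk differences. As a rough, hypothetical illustration only (the study itself pooled trials in a meta-analysis, not this simple pairwise calculation), a risk difference per 1,000 patients with a Wald 95% CI can be computed from two arms' event counts:

```python
import math

def risk_difference_per_1000(events_a, n_a, events_b, n_b, z=1.96):
    """Absolute risk difference (arm A minus arm B) per 1,000 patients,
    with a Wald 95% confidence interval. Counts here are hypothetical."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    lo, hi = rd - z * se, rd + z * se
    return rd * 1000, lo * 1000, hi * 1000

# Hypothetical arms: 40/200 deaths with one strategy vs. 86/200 with another
rd, lo, hi = risk_difference_per_1000(40, 200, 86, 200)  # rd = -230 per 1,000
```

A negative risk difference means fewer deaths in the first arm; the interval conveys the uncertainty that underlies the certainty-of-evidence ratings quoted above.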
Mechanical ventilation and ICU duration
The authors found that HFNC probably reduces the need for invasive mechanical ventilation (103.5 fewer events per 1,000 [40.5-157.5 fewer]; moderate certainty). Helmet bilevel ventilation (4.84 days fewer [95% CI, 2.33-7.36 days fewer]) and helmet CPAP (1.74 days fewer [95% CI, 4.49 fewer to 1.01 more]) may reduce the duration of ICU stay compared with SOT (both low certainty). Also, SOT may be more comfortable than face mask noninvasive ventilation (NIV) and no different in comfort from HFNC (both low certainty).
“Helmet noninvasive ventilation interfaces are probably effective in acute hypoxic respiratory failure and are superior to face mask interfaces. All modalities including HFNC probably reduce the risk of need for invasive mechanical ventilation,” the researchers wrote.
“This meta-analysis shows that helmet noninvasive ventilation is effective in reducing death, and need for invasive mechanical ventilation based on a moderate certainty of evidence,” Shyamsunder Subramanian, MD, chief, division of pulmonary critical care and sleep medicine, Sutter Health, Tracy, Calif., said in an interview. “It is premature based on the results of this meta-analysis to conclude that guideline changes are needed. Use of helmet based ventilation remains limited in scope. We need appropriately designed prospective trials across multiple centers to get sufficient rigor of scientific evidence before any change in guidelines or practice recommendations can be formulated about the appropriate use of helmet NIV in acute respiratory failure.”
The researchers cited the relative heterogeneity of the population included in this analysis as a study limitation.
Dr. Pitre and Dr. Subramanian disclosed that they have no relevant conflicts of interest.
FROM CHEST
1,726-nm lasers poised to revolutionize acne treatment, expert predicts
PHOENIX – When Jeffrey Dover, MD, addressed audience members gathered for a session on cutting-edge technologies at the annual conference of the American Society for Laser Medicine and Surgery, he reflected on a conversation he had with R. Rox Anderson, MD, almost 40 years ago, about eventually finding a cure for acne.
“Despite the fact that we have over-the-counter therapies, prescription therapies, and all kinds of devices available to treat acne, there are still barriers to care that get in the way of treatment,” said Dr. Dover, director of SkinCare Physicians in Chestnut Hill, Mass. “If we had a device based on innovative light science that could meet the needs of the acne patient to get rid of these barriers, wouldn’t that be something wonderful?”
The answer to this question, he said, is now “yes,” because of advances in lasers that target sebaceous glands.
In a seminal paper published in 2012, Fernanda H. Sakamoto, MD, PhD, Dr. Anderson, and colleagues demonstrated the potential for a free electron laser to target sebaceous glands. Several years of refinement followed, leading to the 1,726-nm devices now entering practice, Dr. Dover said.
“With the 1,726-nm laser, there is some selective absorption in sebum in skin, which beats out absorption in the other chromophores,” he said. “But it’s not a big difference like it is, for example, for pulsed-dye lasers and vascular targets. ... This means that the therapeutic window is relatively small and protecting the rest of the epidermis and dermis is crucial to be able to target these lesions or the sebaceous gland without unnecessary damage. If we can protect the epidermis and heat just the sebaceous glands, we should be able to get Accutane-like results if we get durability [by] shrinking sebaceous glands.”
Effective cooling, whether contact cooling, bulk cooling, or air cooling, is crucial to success, he continued. “It’s got to be robust and highly specific to protect the skin, so you don’t end up with side effects that are worse than the disease.”
The AviClear laser delivers seven 3-mm spots, which takes into account the thermal relaxation times of the sebaceous glands. The algorithm delivers a treatment imprint at roughly 0.3 Hz and a 1.5-mm depth of penetration, and the device relies on contact cooling. In pivotal data submitted to the FDA, 104 individuals with moderate to severe acne received three treatments with the AviClear 1 month apart, with follow-up at 1, 3, 6, and 12 months post treatment. They had no other treatment regimens, and the primary endpoint was the percentage of patients who achieved a 50% reduction in inflammatory lesion count 3 months after the final treatment. The secondary endpoint was an Investigator’s Global Assessment (IGA) improvement of 2 or greater.
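The thermal relaxation times mentioned above come from the selective photothermolysis model of Anderson and Parrish, in which a roughly spherical target cools on a timescale of about d²/(16α). As a back-of-the-envelope sketch (the gland diameter and tissue diffusivity below are assumed values, not device engineering data):

```python
# Selective photothermolysis estimate: thermal relaxation time of a
# spherical target, tau ~ d^2 / (16 * alpha) (Anderson-Parrish model).
ALPHA = 1.3e-7  # assumed thermal diffusivity of soft tissue, m^2/s

def thermal_relaxation_time(diameter_m):
    """Approximate cooling time (seconds) for a spherical target."""
    return diameter_m ** 2 / (16 * ALPHA)

# For an assumed ~0.2-mm sebaceous gland:
tau = thermal_relaxation_time(0.2e-3)  # ~0.019 s (about 19 ms)
```

Pulses much shorter than this confine heat to the gland, while the contact cooling protects the more superficial epidermis, whose smaller structures cool faster.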
Dr. Dover, who helped design the study, said that, at 3 months, 80% of those treated achieved a 50% or greater reduction in inflammatory lesion count (P < .001). As for secondary endpoints, 36% of individuals were assessed as having clear or almost clear skin; 47% achieved a 2-point or greater improvement in IGA score, compared with baseline, and 87% achieved a 1-point or greater improvement in IGA score, compared with baseline. By 6 months, 88% of individuals achieved a 50% or greater reduction in inflammatory lesion count; this improved to 92% by 12 months (P < .001).
“All of these procedures were done with no topical anesthetic, no intralesional anesthetic, and they tolerated these quite well,” he said. “There was no down time that required medical intervention after the treatments. All posttreatment erythema and swelling resolved quickly,” and 75% of the patients were “very satisfied” with the treatments.
The Accure Laser System features a proprietary technology that precisely controls thermal gradient depth. “So instead of guessing whether you are delivering the correct amount of heat, it actually tells you,” said Dr. Dover, a past president of the ASLMS and the American Society for Dermatologic Surgery. “It correlates surface and at-depth temperatures, and there’s an infrared camera for real-time accurate temperature monitoring.” The device features highly controlled air cooling and a pulsing pattern that ensures treatment of sebaceous glands of all sizes and at all depths. The clinical end marker is peak epidermal temperature.
In a study supported by Accure, the manufacturer, researchers evaluated the efficacy of the Accure Laser System in 35 subjects with skin types I to VI, who received four monthly treatments of 30-45 minutes each and were followed at 12, 26, 39, and 52 weeks after their last treatment. To date, data out to 52 weeks are available for 17 study participants. According to Dr. Dover, the researchers found 80% clearance at 12 weeks following the last treatment, with continued improvement at 52 weeks. One hundred percent of subjects responded. Side effects included erythema, edema, crusting, blisters, and inflammatory papules. “None of these were medically significant,” he said.
As dermatologists begin to incorporate the AviClear and Accure devices into their practices, Dr. Dover said that he is reminded of the conversation he had some 40 years ago with Dr. Anderson about finding a cure for acne, and he feels a bit awestruck. “These 1,726-nm lasers are effective for treating acne. I personally think they are going to revolutionize the way we treat at least some of our patients with acne. They may both be effective for treating facial acne scars. Time will tell. Further study of both scarring and acne are needed to fully categorize the benefit and to optimize treatments.”
To date, no direct clinical comparisons have been made between the AviClear and Accure devices.
Dr. Dover reported that he is a consultant for Cutera, the manufacturer for AviClear. He also performs research for the company.
AT ASLMS 2023