High-dose melphalan remains most effective option for newly diagnosed multiple myeloma

Among patients with newly diagnosed multiple myeloma who were eligible for stem-cell transplantation, progression-free survival was significantly longer for those who received standard high-dose melphalan than for those who received melphalan/prednisone/lenalidomide as consolidation therapy in an open-label, phase III, randomized trial, investigators reported online Sept. 4 in the New England Journal of Medicine.

This finding confirms that high-dose melphalan "remains the more effective therapeutic option" in this patient population, said Dr. Antonio Palumbo, chief of the myeloma unit at the University of Turin, Italy, and his associates.

In addition, maintenance therapy with lenalidomide, as compared with no maintenance therapy, also significantly extended progression-free survival. The optimal combination of these approaches – high-dose melphalan plus stem-cell transplantation followed by lenalidomide maintenance – yielded a 5-year progression-free survival rate of approximately 48% and a 5-year overall survival rate of 78%, the investigators noted.

Dr. Palumbo and his associates performed this industry-sponsored trial at 62 medical centers in Italy and Israel. A total of 399 patients aged 65 years and younger underwent induction therapy (lenalidomide plus dexamethasone) and received cyclophosphamide and granulocyte colony-stimulating factor to mobilize stem cells. The 273 study participants who then entered the consolidation phase of treatment were randomly assigned to receive either standard high-dose melphalan plus stem-cell transplantation (141 patients) or melphalan/prednisone/lenalidomide (MPR) plus stem-cell transplantation (132 patients).

The 251 participants who remained in the trial after stem-cell transplantation were then randomly assigned to receive no maintenance therapy (125 patients) or lenalidomide maintenance therapy (126 patients) until disease progressed or they developed unacceptable adverse effects. At the end of the study, 237 patients had disease progression or had died, 45 were still receiving lenalidomide maintenance therapy, and 24 were not receiving maintenance therapy.

After a median follow-up of 51 months (range, 1-66 months), median progression-free survival – the primary end point – was 54.7 months for high-dose melphalan plus lenalidomide maintenance, 37.4 months for high-dose melphalan without maintenance therapy, 34.2 months for MPR plus lenalidomide maintenance, and 21.8 months for MPR without maintenance therapy. The respective 5-year overall survival rates were 78.4%, 66.6%, 70.2%, and 58.7%, the investigators said (N. Engl. J. Med. 2014;371:895-905).

Both hematologic and nonhematologic adverse events were more frequent with high-dose melphalan than with MPR. The most concerning were neutropenia and infections. "However, toxic effects were manageable and did not affect the rate of early death or treatment discontinuation, or patients’ ability to proceed to the maintenance phase," Dr. Palumbo and his associates said.

In addition, "the rate of second primary cancers was low, and no between-group differences were reported," they said.

Celgene funded this trial but was not involved in data collection or analysis. Dr. Palumbo reported receiving fees from Celgene, Amgen, and other companies, and his associates reported ties to numerous industry sources.

FROM THE NEW ENGLAND JOURNAL OF MEDICINE

Vitals

Key clinical point: High-dose melphalan remains the more effective therapeutic option in patients with newly diagnosed multiple myeloma.

Major finding: After a median follow-up of 51 months (range, 1-66 months), median progression-free survival – the primary end point – was 54.7 months for high-dose melphalan plus lenalidomide maintenance, 37.4 months for high-dose melphalan without maintenance therapy, 34.2 months for MPR plus lenalidomide maintenance, and 21.8 months for MPR without maintenance therapy.

Data source: An open-label, phase III, randomized trial involving 399 transplantation-eligible patients aged 65 years or younger with newly diagnosed multiple myeloma.

Disclosures: Celgene funded this trial but was not involved in data collection or analysis. Dr. Palumbo reported receiving fees from Celgene, Amgen, and other companies, and his associates reported ties to numerous industry sources.

Metabolic Syndrome, Insulin Resistance Occur Frequently in Psoriatic Arthritis Patients

The metabolic syndrome and insulin resistance are not only common among patients with psoriatic arthritis but also correlate with the severity of the inflammatory musculoskeletal disease, according to a single-center, cross-sectional cohort study.

In the study of 283 consecutive white patients with longstanding psoriatic arthritis who attended a rheumatology clinic during a 1-year period, 44% were found to have the metabolic syndrome and 16% to have insulin resistance. "Our findings are novel and support our pretest hypothesis that the risk of metabolic syndrome and insulin resistance increases with the severity of underlying psoriatic arthritis, probably reflecting the increasing burden of inflammation," said Dr. Muhammad Haroon of the department of rheumatology and his associates at St. Vincent’s University Hospital, Dublin.

Psoriatic arthritis is known to be associated with heightened cardiovascular (CV) risk, and CV diseases are the leading causes of death in patients with psoriatic arthritis. Until now, however, the prevalences of these two major CV risk factors have not been well studied in patients with psoriatic arthritis. "We hypothesized, therefore, that there might be a greater burden of metabolic syndrome and insulin resistance in psoriatic arthritis, and consequently of cardiovascular diseases because of a greater inflammatory load," Dr. Haroon and his colleagues wrote (J. Rheumatol. 2014;41:1357-65).

The patients had had psoriatic arthritis for at least 10 years (mean duration, 19 years); just over half were women, and the mean age was 54.6 years.

A total of 124 patients (44%) were found to have the metabolic syndrome. "Even more alarming was the finding that about 50% of these newly diagnosed patients with metabolic syndrome had a combination of 4 or 5 of these risk features," the investigators said. In particular, elevated blood pressure (74%), greater waist circumference (56%), and elevated triglycerides (44%) were common among these psoriatic arthritis patients.

On multivariate analysis, the metabolic syndrome was significantly associated with more severe disease, more smoking pack-years, and worse EuroQol-5 dimension (EQ-5D) scores. The association with severe disease persisted even after adjustment for the presence of insulin resistance.

A total of 41 (16%) of the 263 patients for whom insulin resistance data were available had insulin resistance, which on multivariate analysis was significantly associated with more severe disease, older age at the onset of psoriasis, and higher body mass index, even after adjustment for the presence of the metabolic syndrome.

Both the metabolic syndrome and insulin resistance were more frequent among patients with the most severe psoriatic arthritis, as measured by current and previous skin assessments; inflammatory markers; measures of disease activity; the number of deformed joints; and the presence of dactylitis, enthesitis, peripheral joint erosions, osteolysis, and sacroiliitis, the investigators said.

The study findings suggest, but do not establish, that "the higher burden of inflammatory arthritis or the combination of severe psoriatic disease features play major roles in the development of the metabolic syndrome and/or insulin resistance," Dr. Haroon and his associates said.

Their observations "can also help inform risk stratification" of patients with psoriatic arthritis, they added.

No disclosures were provided.

FROM THE JOURNAL OF RHEUMATOLOGY

Sleeve gastrectomy replacing other bariatric surgery

Sleeve gastrectomy appears to be replacing other types of bariatric surgery in Michigan, even among subgroups of patients in whom its use is controversial, according to a Research Letter to the Editor published in the Sept. 3 issue of JAMA.

To examine time trends in the use of various bariatric procedures, investigators reviewed the medical records of 43,732 adults who had bariatric surgery and were enrolled in a statewide database between June 2006 and December 2013. They found that the relative use of sleeve gastrectomy rose from 6.0% of all procedures in 2008 to 67.3% 5 years later, an absolute increase of more than 61 percentage points, said Dr. Bradley N. Reames of the department of surgery and the Center for Healthcare Outcomes and Policy, University of Michigan, Ann Arbor, and his associates.

At the same time, the use of Roux-en-Y gastric bypass dropped from 58.0% to 27.4% of all procedures, and the use of laparoscopic adjustable gastric banding decreased from 34.5% to 4.6%, they said (JAMA 2014;312:959-61).

"Moreover, despite controversy regarding the optimal procedure for patients with gastroesophageal reflux disease and type 2 diabetes, sleeve gastrectomy has become the predominant procedure in both groups," Dr. Reames and his associates said.

The long-term outcomes after sleeve gastrectomy are not yet clear, so this trend "may reflect the favorable perioperative safety profile and emerging evidence of successful weight loss at 2-3 years after" the procedure, they added.

Dr. Reames’s work is supported by the National Cancer Institute. He and his associates reported no relevant financial conflicts of interest.

FROM JAMA

Vitals

Major finding: The relative use of sleeve gastrectomy rose from 6.0% of all procedures in 2008 to 67.3% 5 years later, while the use of Roux-en-Y gastric bypass dropped from 58.0% to 27.4% of all procedures, and the use of laparoscopic adjustable gastric banding decreased from 34.5% to 4.6%.

Data source: An analysis of data regarding 43,732 adults who had bariatric surgery and were registered in a statewide database in Michigan between 2006 and 2013.

Disclosures: Dr. Reames’s work is supported by the National Cancer Institute. He and his associates reported no relevant financial conflicts of interest.

Weight loss comparable among ‘named’ diets

All the "named" dietary programs are likely to yield the same 6-kg weight loss at 1 year, so patients should be encouraged to follow whichever one they are most likely to adhere to, according to a report published online Sept. 2 in JAMA.

Despite advertising claims that certain named diets are superior to others, there have been very few head-to-head comparison studies, and none have had adequate numbers to achieve a definitive conclusion. In addition, no studies that have pooled the sparse available data have used rigorous meta-analytic techniques to allow quantitative comparisons among the diets, said Bradley C. Johnston, Ph.D., of the Hospital for Sick Children Research Institute, Toronto, and his associates.

The investigators performed a network meta-analysis that included all available data from 48 randomized clinical trials (7,286 participants) to estimate the relative effects of three low-carbohydrate diets (Atkins, South Beach, and Zone), two low-fat diets (Ornish and Rosemary Conley), and five moderate-macronutrient diets (Biggest Loser, Jenny Craig, Nutrisystem, Volumetrics, and Weight Watchers). "We included dietary programs with recommendations for daily macronutrient, caloric intake, or both for a defined period (12 weeks or longer), with or without exercise (e.g., jogging, strength training) or behavioral support (e.g., counseling, group support). Eligible programs included meal replacement products but had to consist primarily of whole foods and could not include pharmacologic agents," they said.

All the diets were superior to no diet and all showed an approximate 8-kg weight loss at 6 months with a regain of approximately 2 kg at 12 months. "Although statistical differences existed among several of the diets, the differences were small and unlikely to be important to those seeking weight loss," Dr. Johnston and his associates said (JAMA 2014 Sept. 2 [doi:10.1001/jama.2014.10397]).

"Because different diets are variably tolerated by individuals, the ideal diet is one that is best adhered to by individuals so that they can stay on the diet as long as possible," the investigators noted.

Only 5 of the 48 trials reported on adverse events, and only 1 of them directly compared adverse events between low-carbohydrate and low-fat diets. That study found all adverse events to be more common with low-carbohydrate diets, including constipation (68% vs. 35%), headache (60% vs. 40%), halitosis (38% vs. 8%), muscle cramps (35% vs. 7%), diarrhea (23% vs. 7%), general weakness (25% vs. 8%), and rash (13% vs. 0%).

What about outcomes other than weight loss?

It would be helpful to know more about the differences among these diets for outcomes other than weight loss, since they have very different compositions. The low-carbohydrate diets require protein intakes that are double what the other diets provide, which raises questions about long-term effects on kidney function and calcium loss, among other things. The reported adverse effects appear benign, but what happens after months or years of this high-protein intake?

Linda Van Horn, Ph.D., is in the department of preventive medicine at Northwestern University, Chicago. These remarks were taken from an editorial accompanying Dr. Johnston’s report (JAMA 2014 Sept. 2;312:900-1). Dr. Van Horn reported no financial conflicts of interest.

FROM JAMA

Vitals

Key clinical point: All of the named diets are better than no diet and produce roughly equivalent weight loss.

Major finding: All the diets were superior to no diet, and all showed an approximate 8-kg weight loss at 6 months with a regain of approximately 2 kg at 12 months.

Data source: A network meta-analysis of 48 randomized trials (7,286 participants) comparing weight loss at 6 months and 1 year among adults following the Atkins, South Beach, Zone, Ornish, Rosemary Conley, Biggest Loser, Jenny Craig, Nutrisystem, Volumetrics, or Weight Watchers diets.

Disclosures: This study was supported by the Canadian Institutes of Health Research. Dr. Johnston and his associates reported no financial conflicts of interest.

Vary CRC Screening by Age, Sex, Race, Ethnicity?

Among adults at average risk for colorectal cancer who undergo screening colonoscopy, the yield of large polyps and tumors varies widely by patient age, sex, race, and ethnicity. This means that an across-the-board recommendation to initiate screening for all patients at a particular age "may not appear rational and could negatively impact adherence," Dr. David A. Lieberman and his colleagues said in the August issue of Gastroenterology (doi: 10.1053/j.gastro.2014.04.037).

In what they described as "the largest and most comprehensive analysis of average-risk screening with colonoscopy in the United States," the investigators found that the rate of detection of large (and therefore likely advanced) neoplastic lesions at any given age varied considerably with the patient’s sex, race, and ethnicity. So the fact that most practice guidelines uniformly recommend initiating screening colonoscopy at age 50 years doesn’t make sense.

"We believe these data, combined with [those of] prior studies, are compelling enough to consider customization of the initiation age of screening based on the risk of large polyps," said Dr. Lieberman and his associates at Oregon Health & Science University, Portland.

They analyzed information from the Clinical Outcomes Research Initiative database, which includes endoscopy findings from 84 diverse practice settings across the country that are representative of all U.S. community, academic, and Veterans Administration endoscopy centers. They included 327,785 exams of patients aged 40 years and older who were at average risk for colorectal cancer and were screened in 2000 through 2011.

Women accounted for half of the study subjects, except for those examined at VA centers, who were predominately male. The study population was 83.6% white, 5.7% black, and 7.7% Hispanic. More than 95% of the participants were aged 50-79 years.

The outcome of interest was the detection of a polyp or tumor larger than 9 mm. "These large lesions are a surrogate for advanced neoplasia," the investigators said.

The prevalence of large neoplasias rose steadily with increasing age across all races and ethnicities and in both sexes.

Across all age groups, women had a lower prevalence of large polyps than men did, suggesting that the initiation of screening colonoscopy can be safely delayed until age 60 years, at least in white and Hispanic women.

Among men younger than 50 years, the prevalence of large polyps was similar between whites (5.3%) and blacks (5.0%). However, the sample of black men in this age group was small (380 patients), so it is possible that this study was simply underpowered to detect the well-known excess of colorectal neoplasias in younger black men.

The prevalence of large polyps was higher in black men than in white men at all other ages until age 70 years, at which point the rates evened out. Prevalences were 7.1% vs. 6.2% at 50-54 years, 8.5% vs. 7.4% at 55-59 years, 11.5% vs. 8.6% at 60-64 years, and 12.0% vs. 9.7% at 65-69 years.

Black women had a higher prevalence of large polyps than did white women until age 65 years, when the rates evened out. Prevalences were 5.2% vs. 4.2% at 50-54 years, 6.6% vs. 4.5% at 55-59 years, and 6.9% vs. 5.2% at 60-64 years.

These findings "support intensification of screening in black men and women at age 50 years," Dr. Lieberman and his associates said.

Hispanic men and women had a 25% lower rate of polyps than did whites from age 50 through age 79. This finding suggests that initiation of screening colonoscopy can be delayed in Hispanic men and especially in Hispanic women.

The researchers used white men aged 50-54 years as a reference group to compare rates of detection across the study groups. Screening colonoscopy detected large polyps in 6.2% of white men aged 50-54 years. To achieve a similar yield in Hispanic men or black women, they wouldn’t need to be screened until age 55-59 years. To achieve it in Hispanic women, they wouldn’t need to be screened until age 70-74 years.

This study was supported by the National Institute of Digestive and Kidney Diseases. Dr. Lieberman and his associates reported no relevant financial conflicts of interest.

Understanding phenotypic features may enhance identification

In a perfect world, we would be able to identify persons who are going to develop colon cancer. We would screen only these individuals in order to reduce their risk. The next best thing would be to try to identify those at increased risk of colon cancer and target them for appropriate screening. We should aim to minimize unnecessary screening procedures and reduce the risk of avoidable complications. The study by Dr. Lieberman brings us closer to stratifying average-risk individuals by race/ethnicity and sex, based on the prevalence of colonic neoplasia using polyps greater than 9 mm as a surrogate.

The authors studied more than 300,000 average-risk persons undergoing screening colonoscopy from diverse clinical practice settings. In general, the prevalence of large polyps and tumors increased with age, but women had lower prevalence when compared with men. Furthermore, they found that, when compared with whites, blacks had higher prevalence of large polyps while Hispanics had lower risk.

As we move deeper into the era of personalized medicine, understanding the phenotypic features of individuals with a higher risk of colon cancer such as those suggested in the current study may enhance identification of more reliable molecular predictors of higher risk among average-risk persons.

Dr. Adeyinka O. Laiyemo is in the division of gastroenterology, department of medicine, Howard University, Washington. He has no conflicts of interest.

Author and Disclosure Information

Mary Ann Moon, Family Practice News Digital Network

Publications
Topics
Legacy Keywords
colorectal cancer, screening colonoscopy, polyps, tumors, age, sex, race, ethnicity, screening, Dr. David A. Lieberman, Gastroenterology
Author and Disclosure Information

Mary Ann Moon, Family Practice News Digital Network

Author and Disclosure Information

Mary Ann Moon, Family Practice News Digital Network

Body

In a perfect world, we would be able to identify persons who are going to develop colon cancer. We would screen only these individuals in order to reduce their risk. The next best thing would be to try to identify those at increased risk of colon cancer and target them for appropriate screening. We should aim to minimize unnecessary screening procedures and reduce the risk of avoidable complications. The study by Dr. Lieberman brings us closer to stratifying average-risk individuals by race/ethnicity and sex, based on the prevalence of colonic neoplasia using polyps greater than 9 mm as a surrogate.

The authors studied more than 300,000 average-risk persons undergoing screening colonoscopy from diverse clinical practice settings. In general, the prevalence of large polyps and tumors increased with age, but women had lower prevalence when compared with men. Furthermore, they found that, when compared with whites, blacks had higher prevalence of large polyps while Hispanics had lower risk.

As we move deeper into the era of personalized medicine, understanding the phenotypic features of individuals with a higher risk of colon cancer such as those suggested in the current study may enhance identification of more reliable molecular predictors of higher risk among average-risk persons.

Dr. Adeyinka O. Laiyemo is in division of gastroenterology, department of medicine, Howard University, Washington. He has no conflicts of interest.

Body

In a perfect world, we would be able to identify persons who are going to develop colon cancer. We would screen only these individuals in order to reduce their risk. The next best thing would be to try to identify those at increased risk of colon cancer and target them for appropriate screening. We should aim to minimize unnecessary screening procedures and reduce the risk of avoidable complications. The study by Dr. Lieberman brings us closer to stratifying average-risk individuals by race/ethnicity and sex, based on the prevalence of colonic neoplasia using polyps greater than 9 mm as a surrogate.

The authors studied more than 300,000 average-risk persons undergoing screening colonoscopy from diverse clinical practice settings. In general, the prevalence of large polyps and tumors increased with age, but women had lower prevalence when compared with men. Furthermore, they found that, when compared with whites, blacks had higher prevalence of large polyps while Hispanics had lower risk.

As we move deeper into the era of personalized medicine, understanding the phenotypic features of individuals with a higher risk of colon cancer such as those suggested in the current study may enhance identification of more reliable molecular predictors of higher risk among average-risk persons.

Dr. Adeyinka O. Laiyemo is in division of gastroenterology, department of medicine, Howard University, Washington. He has no conflicts of interest.

Title
Understanding phenotypic features may enhance identification
Understanding phenotypic features may enhance identification

Earn 0.25 hours AMA PRA Category 1 credit: Read this article, and click the link at the end to take the posttest. 

Among adults at average risk for colorectal cancer who undergo screening colonoscopy, the yield of large polyps and tumors varies widely by patient age, sex, race, and ethnicity. This means that an across-the-board recommendation to initiate screening for all patients at a particular age "may not appear rational and could negatively impact adherence," Dr. David A. Lieberman and his colleagues said in the August issue of Gastroenterology (doi:org/10.1053/j.gastro.2014.04.037).

In what they described as "the largest and most comprehensive analysis of average-risk screening with colonoscopy in the United States," the investigators found that the rate of detecting large (and therefore likely advanced) neoplastic lesions was much higher or lower at any given age, depending on the patient’s sex, race, and ethnicity. So the fact that most practice guidelines uniformly recommend initiating colonoscopy screens at age 50 years doesn’t make sense.

"We believe these data, combined with [those of] prior studies, are compelling enough to consider customization of the initiation age of screening based on the risk of large polyps," said Dr. Lieberman and his associates at Oregon Health & Science University, Portland.

They analyzed information from the Clinical Outcomes Research Initiative database, which includes endoscopy findings from 84 diverse practice settings across the country that are representative of all U.S. community, academic, and Veterans Administration endoscopy centers. They included 327,785 exams of patients aged 40 years and older who were at average risk for colorectal cancer and were screened in 2000 through 2011.

Women accounted for half of the study subjects, except for those examined at VA centers, who were predominately male. The study population was 83.6% white, 5.7% black, and 7.7% Hispanic. More than 95% of the participants were aged 50-79 years.

The outcome of interest was the detection of a polyp or tumor larger than 9 mm. "These large lesions are a surrogate for advanced neoplasia," the investigators said.

The prevalence of large neoplasias rose steadily with increasing age across all races and ethnicities and in both sexes.

Across all age groups, women had a lower prevalence of large polyps than men did, suggesting that the initiation of screening colonoscopy can be safely delayed until age 60 years, at least in white and Hispanic women.

Among men younger than 50 years, the prevalence of large polyps was similar between whites (5.3%) and blacks (5.0%). However, the sample of black men in this age group was small (380 patients), so it is possible that this study was simply underpowered to detect the well-known excess of colorectal neoplasias in younger black men.

The prevalence of large polyps was higher in black men than did white men at all other ages until the age of 70 years, at which point the rates even out. Prevalences were 7.1% vs. 6.2% at 50-54 years, 8.5% vs. 7.4% at 55-59 years, 11.5% vs. 8.6% at 60-64 years, and 12.0% vs. 9.7% at 65-69 years.

Black women had a higher prevalence of large polyps than did white women until age 65 years, when the rates evened out. Prevalences were 5.2% vs. 4.2% at 50-54 years, 6.6% vs. 4.5% at 55-59 years, and 6.9% vs. 5.2% at 60-64 years.

These findings "support intensification of screening in black men and women at age 50 years," Dr. Lieberman and his associates said.

Hispanic men and women had a 25% lower rate of polyps than did whites from age 50 through age 79. This finding suggests that initiation of screening colonoscopy can be delayed in Hispanic men and especially in Hispanic women.

The researchers used white men aged 50-54 years as a reference group to compare rates of detection across the study groups. Screening colonoscopy detected large polyps in 6.2% of white men aged 50-54 years. To achieve a similar yield in Hispanic men or black women, they wouldn’t need to be screened until age 55-59 years. To achieve it in Hispanic women, they wouldn’t need to be screened until age 70-74 years.

This study was supported by the National Institute of Digestive and Kidney Diseases. Dr. Lieberman and his associates reported no relevant financial conflicts of interest.

Beta-blockers no help for heart failure with atrial fib

Article Type
Changed
Fri, 01/18/2019 - 13:56
Display Headline
Beta-blockers no help for heart failure with atrial fib

Beta-blocker therapy doesn’t reduce all-cause mortality, cardiovascular mortality, cardiovascular hospitalization, or stroke in patients with heart failure who also have atrial fibrillation, Dipak Kotecha, Ph.D., reported at the annual congress of the European Society of Cardiology in Barcelona.

In a meta-analysis that used the "arduous" process of analyzing individual-patient data from all the high-quality randomized controlled trials available, Dr. Kotecha and his associates found no evidence that beta-blockers prevent adverse clinical events in this patient population. "Beta-blockers should no longer be regarded as standard therapy to improve prognosis" in patients with concomitant heart failure (HF) and atrial fibrillation (AF), he said.

In contrast, the drugs are effective and are strongly recommended for patients with HF who are in sinus rhythm, Dr. Kotecha said in a paper presented at the meeting and simultaneously published online Sept. 2 (Lancet 2014 [doi:10.1016/S0140-6736(14)61373-8]).

Heart failure and atrial fibrillation are common disorders, and both are becoming more prevalent. They often coexist, and an estimated 14%-50% of patients who have symptomatic HF also have AF.

At present, both European and American guidelines advise the use of beta-blockers for symptomatic heart failure without regard to AF status, based on trials that predominantly enrolled patients in sinus rhythm. There have been concerns about the drugs’ efficacy in certain subgroups of patients, including those with AF. "Patients with AF are often prescribed beta-blockers for both prognostic benefit in HF and heart-rate control, although there is little and underpowered evidence for efficacy in terms of clinical outcomes," he noted.

Dr. Kotecha and his associates in the Beta-Blockers in Heart Failure Collaborative Group – a multinational organization "formed to provide definitive answers to open questions about HF and beta-blocker therapy" and to provide clinicians with clear guidance – reviewed data from 18,254 study participants who were followed for a mean of 1.5 years, which allowed a robust and adequately powered analysis of this issue. They included "only unconfounded head-to-head trials with recruitment of more than 300 patients and a planned follow-up of more than 6 months."

A total of 3,066 (17%) of the study participants had AF as well as heart failure, and 633 of them died during follow-up. "A consistent benefit of beta-blockers versus placebo was noted for all death or hospital admission outcomes in patients with sinus rhythm, but the differences were not significant in patients with AF," said Dr. Kotecha of the University of Birmingham (England) Centre for Cardiovascular Sciences.

"The substantial benefit identified in patients with sinus rhythm should not be extrapolated to patients with AF," he said.

It is important to note that beta-blockers did appear to be safe, with no increase in mortality or adverse events, compared with placebo. "This should reassure clinicians, particularly for patients with another indication for beta-blockers, such as MI or the need for rate control of rapid AF with ongoing symptoms," he added.

This study was funded by Menarini Farmaceutica Internazionale, and GlaxoSmithKline provided data extraction support. Neither pharmaceutical group had a role in data analysis or manuscript preparation. Dr. Kotecha is supported by the U.K. National Institute for Health Research and reported receiving grants and honoraria from Menarini. His associates reported ties to numerous device and pharmaceutical sources.

Vitals

Key clinical point: The benefit of beta-blockers in patients with HF and sinus rhythm cannot be extended to those with HF and atrial fibrillation.

Major finding: A consistent benefit of beta-blockers versus placebo was noted for all-cause mortality, CV mortality, CV hospitalization, and stroke in patients with sinus rhythm, but the differences were not significant in patients with AF.

Data source: A meta-analysis of all large, high-quality randomized controlled trials assessing mortality and CV outcomes in 18,254 adults with HF, including 3,066 with concomitant AF, who received either beta-blockers or placebo and were followed for a mean of 1.5 years.

Disclosures: This study was funded by Menarini Farmaceutica Internazionale, and GlaxoSmithKline provided data extraction support. Neither pharmaceutical group had a role in data analysis or manuscript preparation. Dr. Kotecha is supported by the U.K. National Institute for Health Research and reported receiving grants and honoraria from Menarini. His associates reported ties to numerous device and pharmaceutical sources.

Prednisolone, immunotherapy ineffective for most tuberculous pericarditis

Article Type
Changed
Fri, 01/18/2019 - 13:56
Display Headline
Prednisolone, immunotherapy ineffective for most tuberculous pericarditis

Neither standard anti-inflammatory therapy using prednisolone nor an experimental immunotherapy using Mycobacterium indicus pranii injections reduced the composite outcome of death, cardiac tamponade requiring pericardiocentesis, or constrictive pericarditis in an international clinical trial of adults with tuberculous pericarditis, a study showed.

However, prednisolone was beneficial for one component of this composite outcome – lowering the rate of constrictive pericarditis – compared with placebo, Dr. Bongani M. Mayosi reported at the annual congress of the European Society of Cardiology in Barcelona. The findings were simultaneously published online Sept. 2 (N. Engl. J. Med. 2014 [doi:10.1056/NEJMoa1407380]).

Glucocorticoids are thought to attenuate the inflammatory response in patients with tuberculous pericarditis, and are recommended as adjunctive therapy in current American and World Health Organization treatment guidelines. But the studies on which these recommendations are based had "very small" numbers of events and patients, and the treatment effect was minimal, leading European expert groups to advise against using the drugs in this patient population. In addition, glucocorticoids may raise the risk of cancer in patients who are coinfected with HIV, which is common in the regions of sub-Saharan Africa and Asia where tuberculous pericarditis is most frequent, said Dr. Mayosi of the department of medicine, Old Groote Schuur Hospital, Cape Town, South Africa.

The 1,400 participants in the Investigation of the Management of Pericarditis (IMPI) trial had pericardial effusion confirmed by echocardiography and evidence of definite or probable tuberculous pericarditis; two-thirds also had concomitant HIV infection. They were treated at 19 hospitals across 8 countries in Africa during a 5-year period. All received background antimicrobial therapy for tuberculosis, and all patients with HIV received antiretrovirals according to WHO guidelines.

The participants first were randomly assigned to receive prednisolone (706 patients) or placebo (694 patients) in tapering doses for 6 weeks. In the second phase of the study, 1,250 of these participants were then randomly assigned to receive five intradermal injections of either heat-killed M. indicus pranii or placebo at intervals over the course of 3 months. This nonpathogenic, rapidly growing mycobacterium species has been shown to suppress inflammation in patients with leprosy.

The primary efficacy outcome was a composite of death, cardiac tamponade requiring pericardiocentesis, or constrictive pericarditis. After a median follow-up of 636 days, there were 14.3 such events per 100 patient-years in the prednisolone group and 14.8 in the placebo group, a nonsignificant difference.
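The event rates are expressed per 100 patient-years so that groups with different amounts of follow-up can be compared directly. As a reminder of how such a rate is calculated, here is a minimal sketch; the counts are hypothetical, because the report quotes the rates rather than the underlying event counts and person-time.

```python
def rate_per_100_patient_years(events: int, patient_years: float) -> float:
    """Incidence rate expressed per 100 patient-years of follow-up."""
    return 100.0 * events / patient_years

# Hypothetical counts for illustration only -- not the actual IMPI data.
events, patient_years = 168, 1175.0
print(f"{rate_per_100_patient_years(events, patient_years):.1f} events per 100 patient-years")
```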

When each component of this composite outcome was considered individually, prednisolone did not improve the death rate or the rate of cardiac tamponade, compared with placebo, but did reduce the rate of constrictive pericarditis (4.4% vs. 7.8% with placebo), and thus the rate of hospitalization (20.7% vs. 25.2% with placebo). "This finding is important because pericardiectomy, the definitive treatment for chronic pericardial constriction, is associated with high perioperative mortality and morbidity, and cardiac surgery is not widely available in Africa," Dr. Mayosi said.

Prednisolone also raised the rate of opportunistic infections, chiefly candidiasis, compared with placebo. And it markedly increased the rate of cancer, primarily Kaposi’s sarcoma, in patients coinfected with HIV (1.05 cases per 100 person-years, vs. 0.32 cases with placebo).

In the M. indicus pranii comparison, the active treatment was no different from placebo with regard to the composite outcome or any secondary outcomes, and that portion of the trial was halted early for futility. Like prednisolone, this experimental agent raised the rate of cancer in HIV-positive patients (0.92 cases per 100 person-years, vs. 0.24 cases with placebo) – an adverse event that has not been reported previously with M. indicus pranii.

In addition, significantly more patients who received M. indicus pranii (41.4%) than placebo (2.9%) developed injection-site reactions. Fifteen percent of patients given the active injections developed abscesses at the injection site, compared with only 1% of those given placebo injections.

The IMPI trial was supported by the Canadian Institutes of Health Research, the Canadian Network and Centre for Trials Internationally, the Population Health Research Institute, the South African Medical Research Council, the Lily and Ernst Hausmann Research Trust, and Cadila Pharma (India). Cadila donated the study drugs but had no role in the design or conduct of the study or in data analysis. The authors had no relevant financial conflicts of interest to disclose.

Stop routine glucocorticoids

These findings clearly suggest that adjunctive glucocorticoids should not be used routinely in patients with tuberculous pericarditis, which may surprise many clinicians. But because they prevent constrictive pericarditis and reduce hospitalizations, the drugs may be appropriate for patients at highest risk for complications, such as those with large effusions, high levels of inflammatory cells or markers in the pericardial fluid, and early signs of constriction.

The use of glucocorticoids should be curtailed in patients coinfected with HIV unless the risk of constrictive pericarditis is high, since these drugs increase the risk of cancer in this patient population.

Dr. Richard E. Chaisson and Dr. Wendy S. Post of Johns Hopkins University, Baltimore, made these remarks in an editorial accompanying Dr. Mayosi’s report (N. Engl. J. Med. 2014 Sept. 2 [doi:10.1056/NEJMe1409356]). Dr. Chaisson reported receiving support from Merck.


Vitals

Key clinical point: Although neither prednisolone nor M. indicus pranii improved overall outcomes in patients with tuberculous pericarditis, prednisolone did reduce the risks of constrictive pericarditis and hospitalization.

Major finding: After a median follow-up of 636 days, the primary efficacy outcome – a composite of death, cardiac tamponade requiring pericardiocentesis, or constrictive pericarditis – occurred in 14.3 cases per 100 patient-years in the prednisolone group, compared with 14.8 in the placebo group.

Data source: IMPI, a randomized controlled trial in 1,400 adults with presumed tuberculous pericarditis who were treated at 19 hospitals in 8 African countries with 6 weeks of either prednisolone or placebo, followed by 3 months of M. indicus pranii or placebo injections.

Disclosures: The IMPI trial was supported by the Canadian Institutes of Health Research, the Canadian Network and Centre for Trials Internationally, the Population Health Research Institute, the South African Medical Research Council, the Lily and Ernst Hausmann Research Trust, and Cadila Pharma (India). Cadila donated the study drugs but had no role in the design or conduct of the study or in data analysis. The authors had no relevant financial conflicts of interest to disclose.

Novel anticoagulants given to 60% of newly diagnosed AF patients

Article Type
Changed
Fri, 01/18/2019 - 13:55
Display Headline
Novel anticoagulants given to 60% of newly diagnosed AF patients

Novel oral anticoagulants introduced since October 2010 have been adopted into clinical practice rapidly, and within 2.5 years were prescribed for more than 60% of patients with newly diagnosed atrial fibrillation, according to a report published online in the American Journal of Medicine.

Further, the new drugs are being prescribed for a different patient population from that indicated by the clinical trials on which Food and Drug Administration (FDA) approval was based. Specifically, dabigatran, rivaroxaban, and apixaban are selectively prescribed for younger, healthier men who have high incomes and reside in wealthier communities, reported Dr. Nihar R. Desai of the division of pharmacoepidemiology and pharmacoeconomics, Brigham and Women’s Hospital and Harvard Medical School, Boston, and his associates.

In what they described as the first study to evaluate real-world use of all novel anticoagulants, researchers found that the rapid uptake of the drugs as first-line therapy for atrial fibrillation (AF) was accompanied by a marked decline in the use of warfarin. The difference in total costs between generic warfarin and the proprietary agents dabigatran, rivaroxaban, and apixaban came to more than $900 per patient during the first 6 months of therapy alone, which "translates into billions of dollars at the national level."

This has important economic implications for patients, payers, and the health care system. The impending FDA approval of new factor Xa inhibitors such as edoxaban and betrixaban will likely further complicate the picture, the researchers said.

The researchers analyzed nationwide medical and prescription claims data for 6,893 adults covered by Aetna who had newly diagnosed nonvalvular AF and were prescribed an oral anticoagulant between October 2010 and June 2013. The direct thrombin inhibitor dabigatran was approved in October 2010, and the factor Xa inhibitors rivaroxaban and apixaban were approved in November 2011 and December 2012.

During the study period, these patients filled 45,472 prescriptions for oral anticoagulants: 57.7% for warfarin, 32.8% for dabigatran, 9.3% for rivaroxaban, and 0.1% for apixaban. However, these figures don’t reflect the trend over time in which prescriptions for the newer agents rapidly displaced those for warfarin. Within 1 year of appearing on the market, dabigatran was as likely as warfarin to be prescribed for newly diagnosed AF patients. Its use as first-line therapy dropped considerably a year later, after reports of excess rates of myocardial infarction and serious and fatal bleeding events in patients taking dabigatran. But by that point rivaroxaban had been introduced, and it soon overtook both dabigatran and warfarin as first-line therapy for AF. (Apixaban accounted for 2% of new anticoagulant prescriptions as of 6 months after it was approved, the most recent date for which such statistics were available.)

Simultaneously, the costs of oral anticoagulants rose dramatically, with the new agents accounting for 98% of that escalation. Insurer spending for all agents combined was estimated at $5.82 million per month, with warfarin accounting for only $0.43 million of that; patient out-of-pocket spending for all oral anticoagulants combined was estimated at $1.3 million per month, with warfarin accounting for $3,844 of that total.

Viewed from another perspective, the average combined patient and insurer spending for anticoagulants during the first 6 months of therapy for warfarin was $122, dabigatran $1,053, and rivaroxaban $1,084. "This represents a difference of more than $900 per patient," said Dr. Desai, who is also at the Center for Outcomes Research and Evaluation, Yale-New Haven Health Services, and his associates.

The greatest benefit from novel anticoagulants is among patients at the highest risk for stroke or systemic embolization, as measured by higher scores on CHADS (Congestive Heart Failure, Hypertension, Age of 75 years or more, Diabetes Mellitus, and Stroke) and HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol) assessments. This is also the patient population targeted in the clinical trials that formed the basis for FDA approval.
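For orientation, the sketch below shows how these instruments are conventionally tallied: roughly one point per risk factor, with two points for prior stroke or transient ischemic attack in the widely used CHADS2 version of the CHADS score, and up to nine points on HAS-BLED because renal and hepatic dysfunction, and drug and alcohol use, are scored separately. These weightings are the standard conventions for the scores, not something specified in this study.

```python
# A sketch of the conventional point assignments, shown for orientation only;
# the weights follow the widely used CHADS2 and HAS-BLED conventions and are
# not taken from the study itself.

def chads2(chf, hypertension, age_75_or_older, diabetes, prior_stroke_or_tia):
    """CHADS2: one point per factor, two points for prior stroke/TIA (maximum 6)."""
    return (int(chf) + int(hypertension) + int(age_75_or_older)
            + int(diabetes) + 2 * int(prior_stroke_or_tia))

def has_bled(hypertension, abnormal_renal, abnormal_liver, stroke,
             bleeding_history, labile_inr, elderly, drugs, alcohol):
    """HAS-BLED: one point per item; renal/liver and drugs/alcohol count separately (maximum 9)."""
    return sum(map(int, (hypertension, abnormal_renal, abnormal_liver, stroke,
                         bleeding_history, labile_inr, elderly, drugs, alcohol)))

# Example: a 78-year-old with hypertension and diabetes, no prior stroke or bleeding history.
print(chads2(chf=False, hypertension=True, age_75_or_older=True,
             diabetes=True, prior_stroke_or_tia=False))   # prints 3
print(has_bled(hypertension=True, abnormal_renal=False, abnormal_liver=False,
               stroke=False, bleeding_history=False, labile_inr=False,
               elderly=True, drugs=False, alcohol=False))  # prints 2
```

If the per-point decreases quoted below act multiplicatively, a patient with a CHADS score of 3 would be roughly 0.8 × 0.8 × 0.8 ≈ 0.5 times as likely to be started on a novel agent as an otherwise similar patient with a score of 0.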

But in this study, 46% of patients with low CHADS and HAS-BLED scores were initially prescribed the novel anticoagulants, compared with 26% of those with high scores. "For every 1-point increase in CHADS, patients were 20% less likely to receive a novel anticoagulant. Similarly, for every 1-point increase in HAS-BLED, patients were 18% less likely to receive a novel anticoagulant.

"In addition, women were 24% less likely to be initiated on a novel oral anticoagulant as compared with men. [And] there was a significant, stepwise increase in the likelihood of receiving a novel agent with progressively increasing neighborhood household income, compared with a median household income of $50,000 or less," the investigators said (Amer. J. Med. 2014 May 20 [doi: 10.1016/j.amjmed.2014.05.013]).

 

 

"These findings point to the need to conduct ongoing surveillance of the adoption of new agents into clinical practice, as well as the need for robust, real-world comparative-effectiveness analyses of these medications, to enable patients and providers to make informed decisions about their relative benefit, safety, and cost-effectiveness," Dr. Desai and his associates said.

This study was funded by an unrestricted research grant from CVS Caremark. Dr. Desai’s associates reported ties to CVS Caremark and Aetna.

Displacement of warfarin not surprising

It is not surprising that the novel oral anticoagulants appear to be supplanting warfarin as first-line therapy for nonvalvular AF. The newer drugs are much easier to use because they don’t require frequent monitoring of clotting parameters, require no dietary restrictions, and have simple and straightforward dosing.

I expect the use of these novel anticoagulants – and others soon to be approved – to increase over time, particularly once they become generic and cost gradually becomes less of an issue.

Dr. Joseph S. Alpert is professor of medicine at the University of Arizona, Tucson, and the editor-in-chief of the American Journal of Medicine. Dr. Alpert made these remarks in an editorial (Amer. J. Med. 2014 Aug. 8 [doi: 10.1016/j.amjmed.2014.07.028]) accompanying Dr. Desai’s report. He reported cochairing the data monitoring committees for two of the large clinical trials that led to FDA approval of rivaroxaban.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
oral, anticoagulants, clinical practice, atrial fibrillation, American Journal of Medicine, Food and Drug Administration, FDA, dabigatran, rivaroxaban, apixaban, Dr. Nihar R. Desai, Brigham and Women’s Hospital, Harvard Medical School, Boston,
Click for Credit Link
Click for Credit Link
Author and Disclosure Information

Author and Disclosure Information

Body

It is not surprising that the novel oral anticoagulants appear to be supplanting warfarin as first-line therapy for nonvalvular AF. The newer drugs are much easier to use because they don’t require frequent monitoring of clotting parameters, require no dietary restrictions, and have simple and straightforward dosing.

I expect the use of these novel anticoagulants – and others soon to be approved – to increase over time, particularly once they become generic and cost gradually becomes less of an issue.

Dr. Joseph S. Alpert is professor of medicine at the University of Arizona, Tucson, and the editor-in-chief of the American Journal of Medicine. Dr. Alpert made these remarks in an editorial (Amer. J. Med. 2014 Aug. 8 [doi: 10.1016/amjmed.2014.07.028]) accompanying Dr. Desai’s report. He reported cochairing the data monitoring committees for two of the large clinical trials that led to FDA approval of rivaroxaban.

Body

It is not surprising that the novel oral anticoagulants appear to be supplanting warfarin as first-line therapy for nonvalvular AF. The newer drugs are much easier to use because they don’t require frequent monitoring of clotting parameters, require no dietary restrictions, and have simple and straightforward dosing.

I expect the use of these novel anticoagulants – and others soon to be approved – to increase over time, particularly once they become generic and cost gradually becomes less of an issue.

Dr. Joseph S. Alpert is professor of medicine at the University of Arizona, Tucson, and the editor-in-chief of the American Journal of Medicine. Dr. Alpert made these remarks in an editorial (Amer. J. Med. 2014 Aug. 8 [doi: 10.1016/amjmed.2014.07.028]) accompanying Dr. Desai’s report. He reported cochairing the data monitoring committees for two of the large clinical trials that led to FDA approval of rivaroxaban.

Title
Displacement of warfarin not surprising
Displacement of warfarin not surprising

Novel oral anticoagulants introduced since October 2010 have been adopted into clinical practice rapidly, and within 2.5 years were prescribed for more than 60% of patients with newly diagnosed atrial fibrillation, according to a report published online in the American Journal of Medicine.

Further, the new drugs are being prescribed for a different patient population from that indicated by the clinical trials on which Food and Drug Administration (FDA) approval was based. Specifically, dabigatran, rivaroxaban, and apixaban are selectively prescribed for younger, healthier men who have high incomes and reside in wealthier communities, reported Dr. Nihar R. Desai of the division of pharmacoepidemiology and pharmacoeconomics, Brigham and Women’s Hospital and Harvard Medical School, Boston, and his associates.

In what they described as the first study to evaluate real-world use of all novel anticoagulants, researchers found that the rapid uptake of the drugs as first-line therapy for atrial fibrillation (AF) was accompanied by a marked decline in the use of warfarin. The difference in total costs between the generic warfarin and the proprietary dabigatran, rivaroxaban, or apixaban totaled $900 per patient during the first 6 months alone, which "translates into billions of dollars at the national level."

This has important economic implications for patients, payers, and the health care system. The impending FDA approval of new factor Xa inhibitors such as edoxaban and betrixaban will likely further complicate the picture, the researchers said.

The researchers analyzed nationwide medical and prescription claims data for 6,893 adults covered by Aetna who had newly diagnosed nonvalvular AF and were prescribed an oral anticoagulant between October 2010 and June 2013. The direct thrombin inhibitor dabigatran was approved in October 2010, and the factor Xa inhibitors rivaroxaban and apixaban were approved in November 2011 and December 2012.

During the study period, these patients filled 45,472 prescriptions for oral anticoagulants: 57.7% for warfarin, 32.8% for dabigatran, 9.3% for rivaroxaban, and 0.1% for apixaban. However, these figures don’t reflect the trend over time in which prescriptions for the newer agents rapidly displaced those for warfarin. Within 1 year of appearing on the market, dabigatran was equally likely to be prescribed as warfarin was for new AF patients. Its use as a first-line therapy dropped considerably a year later, after reports of excess rates of myocardial infarction and serious and fatal bleeding events in patients taking dabigatran. But at that point rivaroxaban had been introduced, and it soon overtook both dabigatran and warfarin as first-line therapy for AF. (Apixaban accounted for 2% of new anticoagulant prescriptions as of 6 months after it was approved, which is the most recent date for which such statistics were available.)

Simultaneously, the costs of oral anticoagulants rose dramatically, with the new agents accounting for 98% of that escalation. It is estimated that insurers spend $5.82 million every month for all agents combined, and that warfarin accounts for only $0.43 million of that. Similarly, patient out-of-pocket spending for all oral anticoagulants combined was estimated to be $1.3 million per month, with warfarin accounting for $3,844 of that total.

Viewed from another perspective, the average combined patient and insurer spending for anticoagulants during the first 6 months of therapy for warfarin was $122, dabigatran $1,053, and rivaroxaban $1,084. "This represents a difference of more than $900 per patient," said Dr. Desai, who is also at the Center for Outcomes Research and Evaluation, Yale-New Haven Health Services, and his associates.

The greatest benefit from novel anticoagulants is among patients at the highest risk for stroke or systemic embolization, as measured by higher scores on CHADS (Congestive Heart Failure, Hypertension, Age of 75 years or more, Diabetes Mellitus, and Stroke) and HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol) assessments. This is also the patient population targeted in the clinical trials that formed the basis for FDA approval.

But in this study, 46% of patients with low CHADS and HAS-BLED scores were initially prescribed the novel anticoagulants, compared with 26% of those with high scores. "For every 1-point increase in CHADS, patients were 20% less likely to receive a novel anticoagulant. Similarly, for every 1-point increase in HAS-BLED, patients were 18% less likely to receive a novel anticoagulant.

"In addition, women were 24% less likely to be initiated on a novel oral anticoagulant as compared with men. [And] there was a significant, stepwise increase in the likelihood of receiving a novel agent with progressively increasing neighborhood household income, compared with a median household income of $50,000 or less," the investigators said (Amer. J. Med. 2014 May 20 [doi: 10.1016/j.amjmed.2014.05.013]).

 

 

"These findings point to the need to conduct ongoing surveillance of the adoption of new agents into clinical practice, as well as the need for robust, real-world comparative-effectiveness analyses of these medications, to enable patients and providers to make informed decisions about their relative benefit, safety, and cost-effectiveness," Dr. Desai and his associates said.

This study was funded by an unrestricted research grant from CVS Caremark. Dr. Desai’s associates reported ties to CVS Caremark and Aetna.


FROM THE AMERICAN JOURNAL OF MEDICINE

Vitals

Key clinical point: Novel anticoagulants are being prescribed for younger men, a different patient population from that indicated by the clinical trials on which FDA approval was based.

Major finding: The average combined patient and insurer spending for oral anticoagulants during the first 6 months of therapy was $122 for warfarin, $1,053 for dabigatran, and $1,084 for rivaroxaban.

Data source: A retrospective, longitudinal analysis of nationwide Aetna prescription claims data for 6,893 adults with nonvalvular AF who initiated oral anticoagulants in 2010-2013.

Disclosures: This study was funded by an unrestricted research grant from CVS Caremark. Dr. Desai’s associates reported ties to CVS Caremark and Aetna.

Surveillance questioned after low-risk colorectal adenoma removal

Article Type
Changed
Wed, 05/26/2021 - 13:59
Display Headline
Surveillance questioned after low-risk colorectal adenoma removal

Patients in Norway who underwent removal of low-risk colorectal adenomas but received no further colonoscopic surveillance had a 25% lower risk of death from colorectal cancer than the general population over the following 8 years, according to a report published online Aug. 28 in the New England Journal of Medicine.

"Thus, any increase in the risk of death from colorectal cancer associated with low-risk adenomas may have been eliminated by the polypectomy," said Dr. Magnus Løberg of the department of health management and health economics, University of Oslo, and his associates.

"Our finding that the removal of low-risk adenomas reduces the risk of death from colorectal cancer over a period of 8 years to a level below that of the general population is consistent with the hypothesis that surveillance every 5 years after removal of low-risk adenomas may confer little benefit over less intensive surveillance strategies. Furthermore, complications associated with colonoscopy are not trivial and might offset the benefit of surveillance" in this patient population, they noted.

Dr. Løberg and his colleagues took advantage of nationwide data in Norway’s cancer registry to "evaluate colorectal cancer mortality in a large, population-based cohort with virtually complete follow-up for death from colorectal cancer." They identified 40,826 patients aged 40 years and older who had at least one colorectal adenoma removed between 1993 and 2007. Surveillance colonoscopy was not recommended for such patients in Norway at that time.

A total of 49.8% had lesions classified as high risk because they showed a villous growth pattern or high-grade dysplasia, or because there were multiple adenomas. The remaining 50.2% of patients had lesions classified as low risk. Detailed information concerning polyp size and the exact number of polyps was not available to the researchers in their record search, so they could not classify the adenomas more definitively.
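
The risk grouping described above amounts to a simple rule: a patient was considered high risk if any of the three features was present, and low risk otherwise. The sketch below is purely illustrative; the field names (villous_growth, high_grade_dysplasia, adenoma_count) are hypothetical and are not drawn from the Norwegian registry data.

# Illustrative sketch of the risk grouping described in the study (hypothetical field names).
def classify_adenoma_risk(villous_growth: bool, high_grade_dysplasia: bool, adenoma_count: int) -> str:
    """Return 'high' if any high-risk feature is present, otherwise 'low'."""
    if villous_growth or high_grade_dysplasia or adenoma_count > 1:
        return "high"
    return "low"

# Example: a single adenoma with no villous growth or high-grade dysplasia is grouped as low risk.
print(classify_adenoma_risk(villous_growth=False, high_grade_dysplasia=False, adenoma_count=1))  # prints "low"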

After a median follow-up of 7.7 years, 1,273 patients were diagnosed as having colorectal cancer, including 383 who died of the disease.

Among patients whose adenomas were low risk, colorectal cancer–specific mortality was reduced by 25%, compared with that of the general population, even though they had not undergone any further colonoscopic surveillance after the initial procedure, the investigators reported (N. Engl. J. Med. 2014 Aug. 28 [doi: 10.1056/NEJMoa1315870]).

In contrast, colorectal cancer–specific mortality was increased by 16% among patients whose adenomas were high risk – an excess of 33 deaths from the disease in this large cohort. "Our study cannot clarify the extent to which the increased risk after polypectomy reflects the underlying increase in the risk of death from colorectal cancer among these patients, but in any case, surveillance might not have been sufficient to lower this increased risk. This question can be answered only by performing comparative randomized trials with different surveillance intervals," Dr. Løberg and his associates said.
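
As a rough back-of-envelope illustration (ours, not a calculation reported by the authors), assuming the 16% increase is measured against the number of deaths expected from general-population rates, the two figures relate as follows:

    expected deaths ≈ 33 / 0.16 ≈ 206        observed deaths ≈ 206 + 33 ≈ 239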

This study was supported by the Norwegian Cancer Society, the U.S.-Norway Fulbright Foundation for Educational Exchange, the Research Council of Norway, and the Karolinska Institute. Dr. Løberg reported no financial conflicts of interest; one of his associates reported ties to Exact Sciences, HJort/Norchip, CCS Pharma, and Fujinon.

Surveillance of low-risk patients unnecessary?

Even though colorectal cancer surveillance practices in Norway during the study period differed considerably from those in the present-day United States, these study results are still informative. It is quite possible that initial colonoscopy and polypectomy reduce the risk of death from colorectal cancer and that further surveillance may have little additional effect on disease-specific mortality, at least in patients with low-risk adenomas.

If future studies confirm that surveillance colonoscopy identifies low-risk patients who can forgo further surveillance, it would be an exciting development both for patients and for health care systems. Colon polyp surveillance accounts for approximately 25% of colonoscopies in the U.S. and is a substantial burden on health resources.

Dr. David Lieberman is in the department of medicine and the division of gastroenterology and hepatology at Oregon Health and Science University, Portland. He made his remarks in an editorial accompanying Dr. Løberg’s report (N. Engl. J. Med. 2014 Aug. 28 [doi: 10.1056/NEJMe1407152]). He reported receiving fees from Exact Sciences and Given Imaging.


FROM THE NEW ENGLAND JOURNAL OF MEDICINE

Vitals

Key clinical point: Surveillance every 5 years after removal of low-risk adenomas may confer little benefit over less intensive surveillance strategies.

Major finding: Among patients whose adenomas were low risk, colorectal cancer–specific mortality was reduced by 25%, compared with that of the general population, even though they had not undergone any further colonoscopic surveillance after the initial procedure.

Data source: A nationwide, population-based cohort study in Norway, involving 40,826 adults who underwent colonoscopy and removal of at least one adenoma during 1993-2007 and were followed for a median of 7.7 years.

Disclosures: This study was supported by the Norwegian Cancer Society, the U.S.-Norway Fulbright Foundation for Educational Exchange, the Research Council of Norway, and the Karolinska Institute. Dr. Løberg reported no financial conflicts of interest.

Lipid screening uncommon before 2011 recommendations

Article Type
Changed
Fri, 01/18/2019 - 13:54
Display Headline
Lipid screening uncommon before 2011 recommendations

Before new guidelines advocating universal screening for dyslipidemia at ages 9-11 years and again at ages 17-21 years, 2.8% of normal-weight children in the younger age group and 22% of those in the older age group underwent such screening, according to a report published online Aug. 26 in Circulation: Cardiovascular Quality and Outcomes.

Lipid screening also was low among obese children in these age groups at that time (30.6% and 34.6%, respectively), even though existing recommendations advised such screening in that patient population, said Dr. Karen L. Margolis of the HealthPartners Institute for Education and Research, Minneapolis, and her associates.

©iStock/thinkstockphotos.com
Before 2011, dyslipidemia screening in children was less common than expected.

Data are scarce regarding lipid screening practices in the pediatric population, and no studies have yet examined these practices after the release of the recommendation for universal screening in 2011. Dr. Margolis and her associates focused on lipid screening practices during the 3-year period before the recommendations were issued, hoping to establish a benchmark for assessing changes in community practice patterns. They performed a secondary analysis of data collected in a retrospective cohort study of pediatric hypertension and obesity conducted in three large health care delivery systems. Their sample included 301,080 individuals in that cohort study, who were aged 3-19 years.

Overall, 9.8% of the children and adolescents had lipid testing with at least one measurement of total cholesterol. The number who underwent lipid screening increased with patient age, body mass index, and blood pressure. Abnormal values were identified in 8.6% of individuals for total cholesterol, 22.5% for HDL-C, 12.0% for non-HDL-C, 8.0% for LDL-C, and 19%-38% for triglycerides, depending on age and sex, the investigators said (Circ. Cardiovasc. Qual. Outcomes 2014 Aug. 26 [doi:10.1161/Circoutcomes.114.000842]).

"Although abnormal lipid values were more likely to be found in children with elevated BMI, we found some normal-weight children with a low level of HDL-C (12.6%) and a high level of non-HDL-C (6.9%). Thus, targeted screening for [overweight or obese] children would clearly miss some normal-weight children with lipid abnormalities, including [some] with LDL-C levels compatible with familial hypercholesterolemia," they noted. This suggests that more children will be newly identified with the recommended universal lipid screening of children.

This study was funded in part by the National Heart, Lung, and Blood Institute. Dr. Margolis reported no financial conflicts of interest; one of her associates reported ties with Sanofi.


FROM CIRCULATION: CARDIOVASCULAR QUALITY AND OUTCOMES

Vitals

Key clinical point: More children and adolescents with lipid abnormalities will likely be identified under the recommended universal lipid screening.

Major finding: Overall, 9.8% of the children and adolescents in the study had lipid testing with at least one measurement of total cholesterol before new guidelines calling for universal screening were published, including 2.8% of normal-weight children aged 9-11 years and 22% of those aged 17-21 years.

Data source: A secondary analysis of data for 301,080 individuals in a retrospective cohort study, in which lipid screening was assessed during the 3-year period before publication of guidelines advocating universal lipid screening at ages 9-11 years and again at ages 17-21 years.

Disclosures: This study was funded in part by the National Heart, Lung, and Blood Institute. Dr. Margolis reported no financial conflicts of interest; one of her associates reported ties with Sanofi.