Managing food allergy in children: An evidence-based update
Food allergy is a complex condition that has become a growing concern for parents and an increasing public health problem in the United States. Food allergy affects social interactions, school attendance, and quality of life, especially when associated with comorbid atopic conditions such as asthma, atopic dermatitis, and allergic rhinitis.1,2 It is the major cause of anaphylaxis in children, accounting for as many as 81% of cases.3 Societal costs of food allergy are substantial and are spread broadly across the health care system and the family. (See “What is the cost of food allergy?”2)
SIDEBAR
What is the cost of food allergy?
Direct costs of food allergy to the health care system include medications, laboratory tests, office visits to primary care physicians and specialists, emergency department visits, and hospitalizations. Indirect costs include family medical and nonmedical expenses, lost work productivity, and job opportunity costs. Overall, the cost of food allergy in the United States is $24.8 billion annually—averaging $4184 for each affected child. Parents bear much of this expense.2
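As a rough consistency check, the two figures above imply the number of affected children assumed in the underlying analysis. This is an inference from the sidebar’s own numbers, not a figure reported there:

% Implied population: aggregate annual cost divided by per-child annual cost
\[
\frac{\$24.8 \times 10^{9} \text{ per year}}{\$4184 \text{ per child per year}} \approx 5.9 \times 10^{6} \text{ affected children}
\]

That is, the estimate corresponds to roughly 6 million food-allergic children in the United States.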
What a food allergy is—and isn’t
The National Institute of Allergy and Infectious Diseases (NIAID) defines food allergy as “an adverse health effect arising from a specific immune response that occurs reproducibly on exposure to a given food.”4 An adverse reaction to food or a food component that lacks an identified immunologic pathophysiology is not considered food allergy but is classified as food intolerance.4
Food allergy is caused by either immunoglobulin E (IgE)-mediated or non-IgE-mediated immunologic dysfunction. IgE antibodies can trigger an intense inflammatory response to certain allergens. Non-IgE-mediated food allergies are less common and not well understood.
This article focuses only on the diagnosis and management of IgE-mediated food allergy.
The culprits
More than 170 foods have been reported to cause an IgE-mediated reaction. Table 1 lists the 8 foods that most commonly cause allergic reactions in the United States5-8; together, these foods account for > 50% of allergies to food.9 Studies vary in their methodology for estimating the prevalence of allergy to individual foods, but cow’s milk and peanuts appear to be the most common, each affecting as many as 2% to 2.5% of children.7,8 In general, allergies to cow’s milk and to eggs are more prevalent in very young and preschool children, whereas allergies to peanuts, tree nuts, fish, and shellfish are more prevalent in older children.10 Labels on all packaged foods regulated by the US Food and Drug Administration must declare whether the product contains even a trace of these 8 allergens.
How common is food allergy?
The Centers for Disease Control and Prevention (CDC) estimates that 4% to 6% of children in the United States have a food allergy.11,12 Almost 40% of food-allergic children have a history of severe food-induced reactions.13 Other developed countries cite similar estimates of overall prevalence.14,15
However, many estimates of the prevalence of food allergy are derived from self-reports, without objective data.9 Accurate evaluation of the prevalence of food allergy is challenging because of many factors, including differences in study methodology and the definition of allergy, geographic variation, racial and ethnic variations, and dietary exposure. Parents and children often confuse nonallergic food reactions, such as food intolerance, with food allergy. Precise determination of the prevalence and natural history of food allergy at the population level requires confirmatory oral food challenges of a representative sample of infants and young children with presumed food allergy.16
The CDC concludes that the prevalence of food allergy in children younger than 18 years increased by 18% from 1997 through 2007.17,18 The cause of this increase is unclear but likely multifactorial; hypotheses include an increase in associated atopic conditions, delayed introduction of allergenic foods, and living in an overly sterile environment with reduced exposure to microbes.19 A recent population-based study of food allergy among children in Olmsted County, Minnesota, found that the incidence of food allergy increased between 2002 and 2007, stabilized subsequently, and appears to be declining among children 1 to 4 years of age, following a peak in 2006-2007.19
What are the risk factors?
Proposed risk factors for food allergy include demographics, genetics, a history of atopic disease, and environmental factors. Food allergy might be more common in boys than in girls, and in African Americans and Asians than in Whites.12,16 A child is 7 times more likely to be allergic to peanuts if a parent or sibling has peanut allergy.20 Infants and children with eczema or asthma are more likely to develop food allergy; the severity of eczema correlates with risk.12,20 Improvements in hygiene in Western societies have decreased the spread of infection, but this has been accompanied by a rise in atopic disease. In countries where health standards are poor and exposure to pathogens is greater, the prevalence of allergy is low.21
Conversely, increased microbial exposure might help protect against atopy via a pathway in which T-helper cells prevent pro-allergic immune development and keep harmless environmental exposures from becoming allergens.22 Attendance at daycare and exposure to farm animals early in life reduce the likelihood of atopic disease.16,21 The presence of a dog in the home lessens the probability of egg allergy in infants.23 Food allergy is less common in younger siblings than in first-born children, possibly due to younger siblings’ increased exposure to infection and alterations in the gut microbiome.23,24
Diagnosis: Established by presentation, positive testing
Symptoms almost always begin within 2 hours of exposure to a suspected food allergen and typically resolve within several hours. Symptoms should occur consistently after ingestion of the food allergen. Subsequent exposures can trigger more severe symptoms, depending on the amount, route, and duration of exposure to the allergen.25 Reactions typically follow ingestion or cutaneous exposure; inhalation rarely triggers a response.26 IgE-mediated release of histamine and other mediators from mast cells and basophils triggers reactions that typically involve one or more organ systems (Table 2).25
Cutaneous symptoms are the most common manifestations of food allergy, occurring in 70% to 80% of childhood reactions. Gastrointestinal symptoms occur in 40% to 50% of allergic reactions to food, and oral or respiratory symptoms in 25%. Cardiovascular symptoms develop in fewer than 10% of allergic reactions.26
Anaphylaxis is a serious allergic reaction that develops rapidly and can cause death; diagnosis is based on specific criteria (Table 3).27 Data for rates of anaphylaxis due to food allergy are limited. The incidence of fatal reaction due to food allergy is estimated to be 1 in every 800,000 children annually.3
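For perspective, the fatality rate can be translated into an approximate national figure. This back-of-the-envelope estimate assumes roughly 73 million US children, a population figure not given in this article:

% Assumption: ~73 million US children younger than 18 years
\[
\frac{73 \times 10^{6} \text{ children}}{800{,}000 \text{ children per fatal reaction per year}} \approx 90 \text{ fatal reactions per year}
\]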
Clinical suspicion. Food allergy should be suspected in infants and children who present with anaphylaxis or other symptoms (Table 2)25 that occur within minutes to hours of ingesting food.4 Parental and self-reports alone are insufficient to diagnose food allergy. NIAID guidelines recommend that patient reports of food allergy be confirmed, because multiple studies demonstrate that 50% to 90% of presumed food allergies are not true allergy.4 Health care providers must obtain a detailed medical history and pertinent family history, plus perform a physical exam and allergy sensitivity testing. Methods to help diagnose food allergies include skin-prick tests, allergen-specific serum IgE tests, and oral food challenges.4
General principles and utility of testing
Before ordering tests, it’s important to distinguish between food sensitization and food allergy and to inform the families of children with suspected food allergy about the limitations of skin-prick tests and serum IgE tests. A child with IgE antibodies specific to a food or with a positive skin-prick test, but without symptoms upon ingestion of the food, is merely sensitized; food allergy indicates the appearance of symptoms following exposure to a specific food, in addition to the detection of specific IgE antibodies or a positive skin-prick test to that same food.28
Skin-prick testing. Skin-prick tests can be performed at any age. The procedure involves pricking or scratching the surface of the skin, usually the volar aspect of the forearm or the back, with a commercial extract. Testing should be performed by a physician or other provider who is properly trained in the technique and in interpreting results. The extract contains specific allergenic proteins that activate mast cells, resulting in a characteristic wheal-and-flare response that is typically measured 15 to 20 minutes after application. Some medications, such as H1- and H2-receptor blockers and tricyclic antidepressants, can interfere with results and need to be held for 3 to 5 days before testing.
A positive skin-prick test result is defined as a wheal ≥ 3 mm larger in diameter than the negative control. The larger the size of the wheal, the higher the likelihood of a reaction to the tested food.29 Patients who exhibit dermatographism might experience a wheal-and-flare response from the action of the skin-prick test, rather than from food-specific IgE antibodies. A negative skin-prick test has > 90% negative predictive value, so the test can rule out suspected food allergy.30 However, the skin-prick test alone cannot be used to diagnose food allergy because it has a high false-positive rate.
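A worked example shows how both statements can hold at once. The operating characteristics below are illustrative assumptions (sensitivity 0.90, specificity 0.50, pretest probability 0.30 for a referred child with a suggestive history), not values taken from the cited studies:

% NPV = spec(1-p) / [spec(1-p) + (1-sens)p]
% PPV = sens*p / [sens*p + (1-spec)(1-p)]
\begin{align*}
\mathrm{NPV} &= \frac{0.50 \times 0.70}{0.50 \times 0.70 + 0.10 \times 0.30} \approx 0.92 \\
\mathrm{PPV} &= \frac{0.90 \times 0.30}{0.90 \times 0.30 + 0.50 \times 0.70} \approx 0.44
\end{align*}

Under these assumptions, a negative result makes allergy unlikely, whereas more than half of positive results are false positives.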
Allergen-specific serum IgE testing. Measurement of food-specific serum IgE levels is routinely available and requires only a blood specimen. The test can be used in patients with skin disease, and results are not affected by concurrent medications. The presence of food-specific IgE indicates that the patient is sensitized to that allergen and might react upon exposure; children with a higher level of antibody are more likely to react.29
Food-specific serum IgE tests are sensitive but nonspecific for food allergy.31 Broad food-allergy test panels often yield false-positive results that can lead to unnecessary dietary elimination, resulting in years of inconvenience, nutrition problems, and needless health care expense.32
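The arithmetic behind such misdiagnosis is straightforward: when a food is included on a panel without a suggestive history, the pretest probability is low, and even a sensitive test then yields mostly false positives. With the same illustrative sensitivity (0.90) and specificity (0.50) but a pretest probability of 5% per panel item (assumed values, not from the cited studies):

% PPV at low pretest probability p = 0.05
\[
\mathrm{PPV} = \frac{0.90 \times 0.05}{0.90 \times 0.05 + 0.50 \times 0.95} \approx 0.09
\]

Roughly 9 of every 10 positive panel results would be false positives under these assumptions.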
Rather than ordering a broad food-allergy test panel, it is appropriate to test for specific serum IgE only to foods ingested within the 2- to 3-hour window before onset of symptoms. As with skin-prick testing, a positive allergen-specific serum IgE test alone cannot diagnose food allergy.
Oral food challenge. The double-blind, placebo-controlled oral food challenge is the gold standard for the diagnosis of food allergy. Because this test is time-consuming and technically difficult, single-blind or open food challenges are more common. Oral food challenges should be performed only by a physician or other provider who can identify and treat anaphylaxis.
The oral challenge starts with a very low dose of suspected food allergen, which is gradually increased every 15 to 30 minutes as vital signs are monitored carefully. Patients are observed for an allergic reaction for 1 hour after the final dose.
A retrospective study showed that, whereas 19% of patients reacted during an open food challenge, only 2% required epinephrine.33 Another study showed that 89% of children whose serum IgE testing was positive for specific foods were able to reintroduce those foods into the diet after a reassuring oral food challenge.34
Other diagnostic tests. The basophil activation assay, measurement of total serum IgE, atopy patch tests, and intradermal tests have been used, but are not recommended, for making the diagnosis of food allergy.4
How can food allergy be managed?
Medical options are few. No approved treatment exists for food allergy. However, it’s important to appropriately manage acute reactions and reduce the risk of subsequent reactions.1 Parents or other caregivers can give an H1 antihistamine, such as diphenhydramine, to infants and children with acute non-life-threatening symptoms. More severe symptoms require rapid administration of epinephrine.1 Auto-injectable epinephrine should be prescribed for parents and caregivers to use as needed for emergency treatment of anaphylaxis.
Team approach. A multidisciplinary approach to managing food allergy—involving physicians, school nurses, dietitians, and teachers, and using educational materials—is ideal. This strategy expands knowledge about food allergies, enhances correct administration of epinephrine, and reduces allergic reactions.1
Avoidance of food allergens can be challenging. Parents and caregivers should be taught to interpret the list of ingredients on food packages. Self-recognition of allergic reactions reduces the likelihood of a subsequent severe allergic reaction.35
Importance of individualized care. Health care providers should develop personalized management plans for their patients.1 (A good place to start is with the “Food Allergy & Anaphylaxis Emergency Care Plan” developed by Food Allergy Research & Education [FARE].) Keep in mind that children with multiple food allergies consume less calcium and protein, and tend to be shorter4; therefore, it’s wise to closely monitor growth in these children and consider referral to a dietitian who is familiar with food allergy.
Potential of immunotherapy. Current research focuses on immunotherapy to induce tolerance to food allergens and protect against life-threatening allergic reactions. The goal of immunotherapy is to lessen adverse reactions to allergenic food proteins; the strategy is to have patients repeatedly ingest small but gradually increasing doses of the food allergen over many months.36 Although immunotherapy has successfully allowed some patients to consume larger quantities of a food without having an allergic reaction, it is unknown whether immunotherapy provides permanent resolution of food allergy. In addition, immunotherapy often causes serious systemic and local reactions.1,36,37
Is prevention possible?
Maternal diet during pregnancy and lactation does not affect development of food allergy in infants.38,39 Breastfeeding might prevent development of atopic disease, but evidence is insufficient to determine whether breastfeeding reduces the likelihood of food allergy.39 In nonbreastfed infants at high risk of food allergy, extensively or partially hydrolyzed formula might help protect against food allergy, compared to standard cow’s milk formula.9,39 Feeding with soy formula rather than cow’s milk formula does not help prevent food allergy.39,40 Pregnant and breastfeeding women should not restrict their diet as a means of preventing food allergy.39
Diet in infancy. Over the years, physicians have debated the proper timing of the introduction of solid foods into the diet of infants. Traditional teaching advocated delaying introduction of potentially allergenic foods to reduce the risk of food allergy; however, this guideline was based on inconsistent evidence,41 and the strategy did not reduce the incidence of food allergy. The prevalence of food allergy is lower in developing countries where caregivers introduce foods to infants at an earlier age.20
A recent large clinical trial (the LEAP trial) indicates that early introduction of peanut-containing foods can help prevent peanut allergy. The study randomized 4- to 11-month-old infants with severe eczema, egg allergy, or both, to eat or avoid peanut products until 5 years of age. Infants assigned to eat peanuts were 81% less likely to develop peanut allergy than infants in the avoidance group; the absolute risk reduction was 14% (number needed to treat = 7).42 Another study showed a nonsignificant (20%) lower relative risk of food allergy in breastfed infants who were fed potentially allergenic foods starting at 3 months of age, compared to being exclusively breastfed.43
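These effect sizes fit together arithmetically. The calculation below uses the approximate peanut-allergy rates from the trial’s intention-to-treat analysis42 (17.2% with avoidance vs 3.2% with early consumption; quoted from the primary report, not stated in this article):

% Event rates: 17.2% (avoidance) vs 3.2% (early consumption), per the LEAP report
\begin{align*}
\mathrm{ARR} &= 17.2\% - 3.2\% = 14.0\% \\
\mathrm{RRR} &= \frac{17.2\% - 3.2\%}{17.2\%} \approx 81\% \\
\mathrm{NNT} &= \frac{1}{0.14} \approx 7
\end{align*}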
Based on these data,42,43 NIAID instituted recommendations in 2017 aimed at preventing peanut allergy44:
- In healthy infants without known food allergy and those with mild or moderate eczema, caregivers can introduce peanut-containing foods at home with other solid foods. Parents who are anxious about a possible allergic reaction can introduce peanut products in a physician’s office.
- Infants at high risk of peanut allergy (those with severe eczema or egg allergy, or both) should undergo peanut-specific IgE or skin-prick testing:
- Negative test: Indicates low risk of a reaction to peanuts; the infant should start consuming peanut-containing foods at 4 to 6 months of age, at home or in a physician’s office, depending on the parents’ preference
- Positive test: Referral to an allergist is recommended.
Do children outgrow food allergy?
Approximately 85% of children who have an allergy to milk, egg, soy, or wheat outgrow their allergy; however, only 15% to 20% who have an allergy to peanuts, tree nuts, fish, or shellfish eventually tolerate these foods. The time to resolution of food allergy varies with the food, and might not occur until adolescence.4 No test reliably predicts which children develop tolerance to any given food. A decrease in the food-specific serum IgE level or a decrease in the size of the wheal on skin-prick testing might portend the onset of tolerance to the food.4
CORRESPONDENCE
Catherine M. Bettcher, MD, FAAFP, Briarwood Family Medicine, 1801 Briarwood Circle, Building #10, Ann Arbor, MI 48108; cbettche@umich.edu.
1. Muraro A, Werfel T, Hoffmann-Sommergruber K, et al. EAACI food allergy and anaphylaxis guidelines: diagnosis and management of food allergy. Allergy. 2014;69:1008-1025.
2. Gupta R, Holdford D, Bilaver L, et al. The economic impact of childhood food allergy in the United States. JAMA Pediatr. 2013;167:1026-1031.
3. Cianferoni A, Muraro A. Food-induced anaphylaxis. Immunol Allergy Clin North Am. 2012;32:165-195.
4. Boyce JA, Assa’ad A, Burks WA, et al. Guidelines for the diagnosis and management of food allergy in the United States: report of the NIAID-sponsored expert panel. J Allergy Clin Immunol. 2010;126(6 suppl):S1-S58.
5. Vierk KA, Koehler KM, Fein SB, et al. Prevalence of self-reported food allergy in American adults and use of food labels. J Allergy Clin Immunol. 2007;119:1504-1510.
6. Allen KJ, Koplin JJ. The epidemiology of IgE-mediated food allergy and anaphylaxis. Immunol Allergy Clin North Am. 2012;32:35-50.
7. Iweala OI, Choudhary SK, Commins SP. Food allergy. Curr Gastroenterol Rep. 2018;20:17.
8. Gupta RS, Warren CM, Smith BM, et al. The public health impact of parent-reported childhood food allergies in the United States. Pediatrics. 2018;142:e20181235.
9. Chafen JJS, Newberry SJ, Riedl MA, et al. Diagnosing and managing common food allergies: a systematic review. JAMA. 2010;303:1848-1856.
10. Nwaru BI, Hickstein L, Panesar SS, et al. Prevalence of common food allergies in Europe: a systematic review and meta-analysis. Allergy. 2014;69:992-1007.
11. Branum AM, Lukacs SL. Food allergy among U.S. children: trends in prevalence and hospitalizations. NCHS Data Brief No. 10. National Center for Health Statistics. October 2008. www.cdc.gov/nchs/products/databriefs/db10.htm. Accessed August 19, 2020.
12. Liu AH, Jaramillo R, Sicherer SH, et al. National prevalence and risk factors for food allergy and relationship to asthma: results from the National Health and Nutrition Examination Survey 2005-2006. J Allergy Clin Immunol. 2010;126:798-806.e13.
13. Gupta RS, Springston EE, Warrier MR, et al. The prevalence, severity, and distribution of childhood food allergy in the United States. Pediatrics. 2011;128:e9-e17.
14. Soller L, Ben-Shoshan M, Harrington DW, et al. Overall prevalence of self-reported food allergy in Canada. J Allergy Clin Immunol. 2012;130:986-988.
15. Venter C, Pereira B, Voigt K, et al. Prevalence and cumulative incidence of food hypersensitivity in the first 3 years of life. Allergy. 2008;63:354-359.
16. Savage J, Johns CB. Food allergy: epidemiology and natural history. Immunol Allergy Clin North Am. 2015;35:45-59.
17. Branum AM, Lukacs SL. Food allergy among children in the United States. Pediatrics. 2009;124:1549-1555.
18. Jackson KD, Howie LD, Akinbami LJ. Trends in allergic conditions among children: United States, 1997-2011. NCHS Data Brief No. 121. National Center for Health Statistics. May 2013. www.cdc.gov/nchs/products/databriefs/db121.htm. Accessed August 19, 2020.
19. Willits EK, Park MA, Hartz MF, et al. Food allergy: a comprehensive population-based cohort study. Mayo Clin Proc. 2018;93:1423-1430.
20. Lack G. Epidemiologic risks for food allergy. J Allergy Clin Immunol. 2008;121:1331-1336.
21. Okada H, Kuhn C, Feillet H, et al. The ‘hygiene hypothesis’ for autoimmune and allergic diseases: an update. Clin Exp Immunol. 2010;160:1-9.
22. Liu AH. Hygiene theory and allergy and asthma prevention. Paediatr Perinat Epidemiol. 2007;21 Suppl 3:2-7.
23. Prince BT, Mandel MJ, Nadeau K, et al. Gut microbiome and the development of food allergy and allergic disease. Pediatr Clin North Am. 2015;62:1479-1492.
24. Kusunoki T, Mukaida K, Morimoto T, et al. Birth order effect on childhood food allergy. Pediatr Allergy Immunol. 2012;23:250-254.
25. Abrams EM, Sicherer SH. Diagnosis and management of food allergy. CMAJ. 2016;188:1087-1093.
26. Perry TT, Matsui EC, Conover-Walker MK, et al. Risk of oral food challenges. J Allergy Clin Immunol. 2004;114:1164-1168.
27. Sampson HA, Muñoz-Furlong A, Campbell RL, et al. Second symposium on the definition and management of anaphylaxis: summary report—Second National Institute of Allergy and Infectious Disease/Food Allergy and Anaphylaxis Network symposium. J Allergy Clin Immunol. 2006;117:391-397.
28. Sampson HA. Food allergy. Part 2: diagnosis and management. J Allergy Clin Immunol. 1999;103:981-989.
29. Lieberman JA, Sicherer SH. Diagnosis of food allergy: epicutaneous skin tests, in vitro tests, and oral food challenge. Curr Allergy Asthma Rep. 2011;11:58-64.
30. Sicherer SH, Sampson HA. Food allergy. J Allergy Clin Immunol. 2010;125(2 suppl 2):S116-S125.
31. Soares-Weiser K, Takwoingi Y, Panesar SS, et al. The diagnosis of food allergy: a systematic review and meta-analysis. Allergy. 2014;69:76-86.
32. Bird JA, Crain M, Varshney P. Food allergen panel testing often results in misdiagnosis of food allergy. J Pediatr. 2015;166:97-100.
33. Lieberman JA, Cox AL, Vitale M, et al. Outcomes of office-based, open food challenges in the management of food allergy. J Allergy Clin Immunol. 2011;128:1120-1122.
34. Fleischer DM, Bock SA, Spears GC, et al. Oral food challenges in children with a diagnosis of food allergy. J Pediatr. 2011;158:578-583.e1.
35. Ewan PW, Clark AT. Long-term prospective observational study of patients with peanut and nut allergy after participation in a management plan. Lancet. 2001;357:111-115.
36. Nurmatov U, Dhami S, Arasi S, et al. Allergen immunotherapy for IgE-mediated food allergy: a systematic review and meta-analysis. Allergy. 2017;72:1133-1147.
37. Sampson HA, Aceves S, Bock SA, et al. Food allergy: a practice parameter update—2014. J Allergy Clin Immunol. 2014;134:1016-1025.e43.
38. Kramer MS, Kakuma R. Maternal dietary antigen avoidance during pregnancy or lactation, or both, for preventing or treating atopic disease in the child. Cochrane Database Syst Rev. 2012;2012(9):CD000133.
39. de Silva D, Geromi M, Halken S, et al. Primary prevention of food allergy in children and adults: systematic review. Allergy. 2014;69:581-589.
40. Osborn DA, Sinn J. Soy formula for prevention of allergy and food intolerance in infants. Cochrane Database Syst Rev. 2004;(3):CD003741.
41. Filipiak B, Zutavern A, Koletzko S, et al; GINI-Group. Solid food introduction in relation to eczema: results from a four-year prospective birth cohort study. J Pediatr. 2007;151:352-358.
42. Du Toit G, Roberts G, Sayre PH, et al; LEAP Study Team. Randomized trial of peanut consumption in infants at risk for peanut allergy. N Engl J Med. 2015;372:803-813.
43. Perkin MR, Logan K, Tseng A, et al; EAT Study Team. Randomized trial of introduction of allergenic foods in breast-fed infants. N Engl J Med. 2016;374:1733-1743.
44. Togias A, Cooper SF, Acebal ML, et al. Addendum guidelines for the prevention of peanut allergy in the United States: report of the National Institute of Allergy and Infectious Diseases-sponsored expert panel. J Allergy Clin Immunol. 2017;139:29-44.
Food allergy is a complex condition that has become a growing concern for parents and an increasing public health problem in the United States. Food allergy affects social interactions, school attendance, and quality of life, especially when associated with comorbid atopic conditions such as asthma, atopic dermatitis, and allergic rhinitis.1,2 It is the major cause of anaphylaxis in children, accounting for as many as 81% of cases.3 Societal costs of food allergy are great and are spread broadly across the health care system and the family. (See “What is the cost of food allergy?”2.)
SIDEBAR
What is the cost of food allergy?
Direct costs of food allergy to the health care system include medications, laboratory tests, office visits to primary care physicians and specialists, emergency department visits, and hospitalizations. Indirect costs include family medical and nonmedical expenses, lost work productivity, and job opportunity costs. Overall, the cost of food allergy in the United States is $24.8 billion annually—averaging $4184 for each affected child. Parents bear much of this expense.2
What a food allergy is—and isn’t
The National Institute of Allergy and Infectious Diseases (NIAID) defines food allergy as “an adverse health effect arising from a specific immune response that occurs reproducibly on exposure to a given food.”4 An adverse reaction to food or a food component that lacks an identified immunologic pathophysiology is not considered food allergy but is classified as food intolerance.4
Food allergy is caused by either immunoglobulin E (IgE)-mediated or non-IgE-mediated immunologic dysfunction. IgE antibodies can trigger an intense inflammatory response to certain allergens. Non-IgE-mediated food allergies are less common and not well understood.
This article focuses only on the diagnosis and management of IgE-mediated food allergy.
The culprits
More than 170 foods have been reported to cause an IgE-mediated reaction. Table 15-8 lists the 8 foods that most commonly cause allergic reactions in the United States and that account for > 50% of allergies to food.9 Studies vary in their methodology for estimating the prevalence of allergy to individual foods, but cow’s milk and peanuts appear to be the most common, each affecting as many as 2% to 2.5% of children.7,8 In general, allergies to cow’s milk and to eggs are more prevalent in very young and preschool children, whereas allergies to peanuts, tree nuts, fish, and shellfish are more prevalent in older children.10 Labels on all packaged foods regulated by the US Food and Drug Administration must declare if the product contains even a trace of these 8 allergens.
How common is food allergy?
The Centers for Disease Control and Prevention (CDC) estimates that 4% to 6% of children in the United States have a food allergy.11,12 Almost 40% of food-allergic children have a history of severe food-induced reactions.13 Other developed countries cite similar estimates of overall prevalence.14,15
However, many estimates of the prevalence of food allergy are derived from self-reports, without objective data.9 Accurate evaluation of the prevalence of food allergy is challenging because of many factors, including differences in study methodology and the definition of allergy, geographic variation, racial and ethnic variations, and dietary exposure. Parents and children often confuse nonallergic food reactions, such as food intolerance, with food allergy. Precise determination of the prevalence and natural history of food allergy at the population level requires confirmatory oral food challenges of a representative sample of infants and young children with presumed food allergy.16
Continue to: The CDC concludes that the prevalence...
The CDC concludes that the prevalence of food allergy in children younger than 18 years increased by 18% from 1997 through 2007.17,18 The cause of this increase is unclear but likely multifactorial; hypotheses include an increase in associated atopic conditions, delayed introduction of allergenic foods, and living in an overly sterile environment with reduced exposure to microbes.19 A recent population-based study of food allergy among children in Olmsted County, Minnesota, found that the incidence of food allergy increased between 2002 and 2007, stabilized subsequently, and appears to be declining among children 1 to 4 years of age, following a peak in 2006-2007.19
What are the risk factors?
Proposed risk factors for food allergy include demographics, genetics, a history of atopic disease, and environmental factors. Food allergy might be more common in boys than in girls, and in African Americans and Asians than in Whites.12,16 A child is 7 times more likely to be allergic to peanuts if a parent or sibling has peanut allergy.20 Infants and children with eczema or asthma are more likely to develop food allergy; the severity of eczema correlates with risk.12,20 Improvements in hygiene in Western societies have decreased the spread of infection, but this has been accompanied by a rise in atopic disease. In countries where health standards are poor and exposure to pathogens is greater, the prevalence of allergy is low.21
Conversely, increased microbial exposure might help protect against atopy via a pathway in which T-helper cells prevent pro-allergic immune development and keep harmless environmental exposures from becoming allergens.22 Attendance at daycare and exposure to farm animals early in life reduces the likelihood of atopic disease.16,21 The presence of a dog in the home lessens the probability of egg allergy in infants.23 Food allergy is less common in younger siblings than in first-born children, possibly due to younger siblings’ increased exposure to infection and alterations in the gut microbiome.23,24
Diagnosis: Established by presentation, positive testing
Onset of symptoms after exposure to a suspected food allergen almost always occurs within 2 hours and, typically, resolves within several hours. Symptoms should occur consistently after ingestion of the food allergen. Subsequent exposures can trigger more severe symptoms, depending on the amount, route, and duration of exposure to the allergen.25 Reactions typically follow ingestion or cutaneous exposures; inhalation rarely triggers a response.26 IgE-mediated release of histamine and other mediators from mast cells and basophils triggers reactions that typically involve one or more organ systems (Table 2).25
Cutaneous symptoms are the most common manifestations of food allergy, occurring in 70% to 80% of childhood reactions. Gastrointestinal and oral or respiratory symptoms occur in, respectively, 40% to 50% and 25% of allergic reactions to food. Cardiovascular symptoms develop in fewer than 10% of allergic reactions.26
Continue to: Anaphylaxis
Anaphylaxis is a serious allergic reaction that develops rapidly and can cause death; diagnosis is based on specific criteria (Table 3).27 Data for rates of anaphylaxis due to food allergy are limited. The incidence of fatal reaction due to food allergy is estimated to be 1 in every 800,000 children annually.3
Clinical suspicion. Food allergy should be suspected in infants and children who present with anaphylaxis or other symptoms (Table 225) that occur within minutes to hours of ingesting food.4 Parental and self-reports alone are insufficient to diagnose food allergy. NIAID guidelines recommend that patient reports of food allergy be confirmed, because multiple studies demonstrate that 50% to 90% of presumed food allergies are not true allergy.4 Health care providers must obtain a detailed medical history and pertinent family history, plus perform a physical exam and allergy sensitivity testing. Methods to help diagnose food allergies include skin-prick tests, allergen-specific serum IgE tests, and oral food challenges.4
General principles and utility of testing
Before ordering tests, it’s important to distinguish between food sensitization and food allergy and to inform the families of children with suspected food allergy about the limitations of skin-prick tests and serum IgE tests. A child with IgE antibodies specific to a food or with a positive skin-prick test, but without symptoms upon ingestion of the food, is merely sensitized; food allergy indicates the appearance of symptoms following exposure to a specific food, in addition to the detection of specific IgE antibodies or a positive skin-prick test to that same food.28
Skin-prick testing. Skin-prick tests can be performed at any age. The procedure involves pricking or scratching the surface of the skin, usually the volar aspect of the forearm or the back, with a commercial extract. Testing should be performed by a physician or other provider who is properly trained in the technique and in interpreting results. The extract contains specific allergenic proteins that activate mast cells, resulting in a characteristic wheal-and-flare response that is typically measured 15 to 20 minutes after application. Some medications, such as H1- and H2-receptor blockers and tricyclic antidepressants, can interfere with results and need to be held for 3 to 5 days before testing.
A positive skin-prick test result is defined as a wheal ≥ 3 mm larger in diameter than the negative control. The larger the size of the wheal, the higher the likelihood of a reaction to the tested food.29 Patients who exhibit dermatographism might experience a wheal-and-flare response from the action of the skin-prick test, rather than from food-specific IgE antibodies. A negative skin-prick test has > 90% negative predictive value, so the test can rule out suspected food allergy.30 However, the skin-prick test alone cannot be used to diagnose food allergy because it has a high false-positive rate.
Continue to: Allergen-specific serum IgE testing
Allergen-specific serum IgE testing. Measurement of food-specific serum IgE levels is routinely available and requires only a blood specimen. The test can be used in patients with skin disease, and results are not affected by concurrent medications. The presence of food-specific IgE indicates that the patient is sensitized to that allergen and might react upon exposure; children with a higher level of antibody are more likely to react.29
Food-specific serum IgE tests are sensitive but nonspecific for food allergy.31 Broad food-allergy test panels often yield false-positive results that can lead to unnecessary dietary elimination, resulting in years of inconvenience, nutrition problems, and needless health care expense.32
It is appropriate to order tests of specific serum IgE to foods ingested within the 2 to 3–hour window before onset of symptoms to avoid broad food allergy test panels. Like skin-prick testing, positive allergen-specific serum IgE tests alone cannot diagnose food allergy.
Oral food challenge. The double-blind, placebo-controlled oral food challenge is the gold standard for the diagnosis of food allergy. Because this test is time-consuming and technically difficult, single-blind or open food challenges are more common. Oral food challenges should be performed only by a physician or other provider who can identify and treat anaphylaxis.
The oral challenge starts with a very low dose of suspected food allergen, which is gradually increased every 15 to 30 minutes as vital signs are monitored carefully. Patients are observed for an allergic reaction for 1 hour after the final dose.
Continue to: A retrospective study...
A retrospective study showed that, whereas 19% of patients reacted during an open food challenge, only 2% required epinephrine.33 Another study showed that 89% of children whose serum IgE testing was positive for specific foods were able to reintroduce those foods into the diet after a reassuring oral food challenge.34
Other diagnostic tests. The basophil activation assay, measurement of total serum IgE, atopy patch tests, and intradermal tests have been used, but are not recommended, for making the diagnosis of food allergy.4
How can food allergy be managed?
Medical options are few. No approved treatment exists for food allergy. However, it’s important to appropriately manage acute reactions and reduce the risk of subsequent reactions.1 Parents or other caregivers can give an H1 antihistamine, such as diphenhydramine, to infants and children with acute non-life-threatening symptoms. More severe symptoms require rapid administration of epinephrine.1 Auto-injectable epinephrine should be prescribed for parents and caregivers to use as needed for emergency treatment of anaphylaxis.
Team approach. A multidisciplinary approach to managing food allergy—involving physicians, school nurses, dietitians, and teachers, and using educational materials—is ideal. This strategy expands knowledge about food allergies, enhances correct administration of epinephrine, and reduces allergic reactions.1
Avoidance of food allergens can be challenging. Parents and caregivers should be taught to interpret the list of ingredients on food packages. Self-recognition of allergic reactions reduces the likelihood of a subsequent severe allergic reaction.35
Continue to: Importance of individualized care
Importance of individualized care. Health care providers should develop personalized management plans for their patients.1 (A good place to start is with the “Food Allergy & Anaphylaxis Emergency Care Plan”a developed by Food Allergy Research & Education [FARE]). Keep in mind that children with multiple food allergies consume less calcium and protein, and tend to be shorter4; therefore, it’s wise to closely monitor growth in these children and consider referral to a dietitian who is familiar with food allergy.
Potential of immunotherapy. Current research focuses on immunotherapy to induce tolerance to food allergens and protect against life-threatening allergic reactions. The goal of immunotherapy is to lessen adverse reactions to allergenic food proteins; the strategy is to have patients repeatedly ingest small but gradually increasing doses of the food allergen over many months.36 Although immunotherapy has successfully allowed some patients to consume larger quantities of a food without having an allergic reaction, it is unknown whether immunotherapy provides permanent resolution of food allergy. In addition, immunotherapy often causes serious systemic and local reactions.1,36,37
Is prevention possible?
Maternal diet during pregnancy and lactation does not affect development of food allergy in infants.38,39 Breastfeeding might prevent development of atopic disease, but evidence is insufficient to determine whether breastfeeding reduces the likelihood of food allergy.39 In nonbreastfed infants at high risk of food allergy, extensively or partially hydrolyzed formula might help protect against food allergy, compared to standard cow’s milk formula.9,39 Feeding with soy formula rather than cow’s milk formula does not help prevent food allergy.39,40 Pregnant and breastfeeding women should not restrict their diet as a means of preventing food allergy.39
Diet in infancy. Over the years, physicians have debated the proper timing of the introduction of solid foods into the diet of infants. Traditional teaching advocated delaying introduction of potentially allergenic foods to reduce the risk of food allergy; however, this guideline was based on inconsistent evidence,41 and the strategy did not reduce the incidence of food allergy. The prevalence of food allergy is lower in developing countries where caregivers introduce foods to infants at an earlier age.20
A recent large clinical trial indicates that early introduction of peanut-containing foods can help prevent peanut allergy. The study randomized 4- to 11-month-old infants with severe eczema, egg allergy, or both, to eat or avoid peanut products until 5 years of age. Infants assigned to eat peanuts were 81% less likely to develop peanut allergy than infants in the avoidance group. Absolute risk reduction was 14% (number need to treat = 7).42 Another study showed a nonsignificant (20%) lower relative risk of food allergy in breastfed infants who were fed potentially allergenic foods starting at 3 months of age, compared to being exclusively breastfed.43
Continue to: Based on these data...
Based on these data,42,43 NIAID instituted recommendations in 2017 aimed at preventing peanut allergy44:
- In healthy infants without known food allergy and those with mild or moderate eczema, caregivers can introduce peanut-containing foods at home with other solid foods.Parents who are anxious about a possible allergic reaction can introduce peanut products in a physician’s office.
- Infants at high risk of peanut allergy (those with severe eczema or egg allergy, or both) should undergo peanut-specific IgE or skin-prick testing:
- Negative test: indicates low risk of a reaction to peanuts; the infant should start consuming peanut-containing foods at 4 to 6 months of age, at home or in a physician’s office, depending on the parents’ preference
- Positive test: Referral to an allergist is recommended.
Do children outgrow food allergy?
Approximately 85% of children who have an allergy to milk, egg, soy, or wheat outgrow their allergy; however, only 15% to 20% who have an allergy to peanuts, tree nuts, fish, or shellfish eventually tolerate these foods. The time to resolution of food allergy varies with the food, and might not occur until adolescence.4 No test reliably predicts which children develop tolerance to any given food. A decrease in the food-specific serum IgE level or a decrease in the size of the wheal on skin-prick testing might portend the onset of tolerance to the food.4
CORRESPONDENCE
Catherine M. Bettcher, MD, FAAFP, Briarwood Family Medicine, 1801 Briarwood Circle, Building #10, Ann Arbor, MI 48108; cbettche@umich.edu.
Food allergy is a complex condition that has become a growing concern for parents and an increasing public health problem in the United States. Food allergy affects social interactions, school attendance, and quality of life, especially when associated with comorbid atopic conditions such as asthma, atopic dermatitis, and allergic rhinitis.1,2 It is the major cause of anaphylaxis in children, accounting for as many as 81% of cases.3 Societal costs of food allergy are great and are spread broadly across the health care system and the family. (See “What is the cost of food allergy?”2.)
SIDEBAR
What is the cost of food allergy?
Direct costs of food allergy to the health care system include medications, laboratory tests, office visits to primary care physicians and specialists, emergency department visits, and hospitalizations. Indirect costs include family medical and nonmedical expenses, lost work productivity, and job opportunity costs. Overall, the cost of food allergy in the United States is $24.8 billion annually—averaging $4184 for each affected child. Parents bear much of this expense.2
What a food allergy is—and isn’t
The National Institute of Allergy and Infectious Diseases (NIAID) defines food allergy as “an adverse health effect arising from a specific immune response that occurs reproducibly on exposure to a given food.”4 An adverse reaction to food or a food component that lacks an identified immunologic pathophysiology is not considered food allergy but is classified as food intolerance.4
Food allergy is caused by either immunoglobulin E (IgE)-mediated or non-IgE-mediated immunologic dysfunction. IgE antibodies can trigger an intense inflammatory response to certain allergens. Non-IgE-mediated food allergies are less common and not well understood.
This article focuses only on the diagnosis and management of IgE-mediated food allergy.
The culprits
More than 170 foods have been reported to cause an IgE-mediated reaction. Table 15-8 lists the 8 foods that most commonly cause allergic reactions in the United States and that account for > 50% of allergies to food.9 Studies vary in their methodology for estimating the prevalence of allergy to individual foods, but cow’s milk and peanuts appear to be the most common, each affecting as many as 2% to 2.5% of children.7,8 In general, allergies to cow’s milk and to eggs are more prevalent in very young and preschool children, whereas allergies to peanuts, tree nuts, fish, and shellfish are more prevalent in older children.10 Labels on all packaged foods regulated by the US Food and Drug Administration must declare if the product contains even a trace of these 8 allergens.
How common is food allergy?
The Centers for Disease Control and Prevention (CDC) estimates that 4% to 6% of children in the United States have a food allergy.11,12 Almost 40% of food-allergic children have a history of severe food-induced reactions.13 Other developed countries cite similar estimates of overall prevalence.14,15
However, many estimates of the prevalence of food allergy are derived from self-reports, without objective data.9 Accurate evaluation of the prevalence of food allergy is challenging because of many factors, including differences in study methodology and the definition of allergy, geographic variation, racial and ethnic variations, and dietary exposure. Parents and children often confuse nonallergic food reactions, such as food intolerance, with food allergy. Precise determination of the prevalence and natural history of food allergy at the population level requires confirmatory oral food challenges of a representative sample of infants and young children with presumed food allergy.16
Continue to: The CDC concludes that the prevalence...
The CDC concludes that the prevalence of food allergy in children younger than 18 years increased by 18% from 1997 through 2007.17,18 The cause of this increase is unclear but likely multifactorial; hypotheses include an increase in associated atopic conditions, delayed introduction of allergenic foods, and living in an overly sterile environment with reduced exposure to microbes.19 A recent population-based study of food allergy among children in Olmsted County, Minnesota, found that the incidence of food allergy increased between 2002 and 2007, stabilized subsequently, and appears to be declining among children 1 to 4 years of age, following a peak in 2006-2007.19
What are the risk factors?
Proposed risk factors for food allergy include demographics, genetics, a history of atopic disease, and environmental factors. Food allergy might be more common in boys than in girls, and in African Americans and Asians than in Whites.12,16 A child is 7 times more likely to be allergic to peanuts if a parent or sibling has peanut allergy.20 Infants and children with eczema or asthma are more likely to develop food allergy; the severity of eczema correlates with risk.12,20 Improvements in hygiene in Western societies have decreased the spread of infection, but this has been accompanied by a rise in atopic disease. In countries where health standards are poor and exposure to pathogens is greater, the prevalence of allergy is low.21
Conversely, increased microbial exposure might help protect against atopy via a pathway in which T-helper cells prevent pro-allergic immune development and keep harmless environmental exposures from becoming allergens.22 Attendance at daycare and exposure to farm animals early in life reduces the likelihood of atopic disease.16,21 The presence of a dog in the home lessens the probability of egg allergy in infants.23 Food allergy is less common in younger siblings than in first-born children, possibly due to younger siblings’ increased exposure to infection and alterations in the gut microbiome.23,24
Diagnosis: Established by presentation, positive testing
Onset of symptoms after exposure to a suspected food allergen almost always occurs within 2 hours and, typically, resolves within several hours. Symptoms should occur consistently after ingestion of the food allergen. Subsequent exposures can trigger more severe symptoms, depending on the amount, route, and duration of exposure to the allergen.25 Reactions typically follow ingestion or cutaneous exposures; inhalation rarely triggers a response.26 IgE-mediated release of histamine and other mediators from mast cells and basophils triggers reactions that typically involve one or more organ systems (Table 2).25
Cutaneous symptoms are the most common manifestations of food allergy, occurring in 70% to 80% of childhood reactions. Gastrointestinal and oral or respiratory symptoms occur in, respectively, 40% to 50% and 25% of allergic reactions to food. Cardiovascular symptoms develop in fewer than 10% of allergic reactions.26
Continue to: Anaphylaxis
Anaphylaxis is a serious allergic reaction that develops rapidly and can cause death; diagnosis is based on specific criteria (Table 3).27 Data for rates of anaphylaxis due to food allergy are limited. The incidence of fatal reaction due to food allergy is estimated to be 1 in every 800,000 children annually.3
Clinical suspicion. Food allergy should be suspected in infants and children who present with anaphylaxis or other symptoms (Table 225) that occur within minutes to hours of ingesting food.4 Parental and self-reports alone are insufficient to diagnose food allergy. NIAID guidelines recommend that patient reports of food allergy be confirmed, because multiple studies demonstrate that 50% to 90% of presumed food allergies are not true allergy.4 Health care providers must obtain a detailed medical history and pertinent family history, plus perform a physical exam and allergy sensitivity testing. Methods to help diagnose food allergies include skin-prick tests, allergen-specific serum IgE tests, and oral food challenges.4
General principles and utility of testing
Before ordering tests, it’s important to distinguish between food sensitization and food allergy and to inform the families of children with suspected food allergy about the limitations of skin-prick tests and serum IgE tests. A child with IgE antibodies specific to a food or with a positive skin-prick test, but without symptoms upon ingestion of the food, is merely sensitized; food allergy indicates the appearance of symptoms following exposure to a specific food, in addition to the detection of specific IgE antibodies or a positive skin-prick test to that same food.28
Skin-prick testing. Skin-prick tests can be performed at any age. The procedure involves pricking or scratching the surface of the skin, usually the volar aspect of the forearm or the back, with a commercial extract. Testing should be performed by a physician or other provider who is properly trained in the technique and in interpreting results. The extract contains specific allergenic proteins that activate mast cells, resulting in a characteristic wheal-and-flare response that is typically measured 15 to 20 minutes after application. Some medications, such as H1- and H2-receptor blockers and tricyclic antidepressants, can interfere with results and need to be held for 3 to 5 days before testing.
A positive skin-prick test result is defined as a wheal ≥ 3 mm larger in diameter than the negative control. The larger the size of the wheal, the higher the likelihood of a reaction to the tested food.29 Patients who exhibit dermatographism might experience a wheal-and-flare response from the action of the skin-prick test, rather than from food-specific IgE antibodies. A negative skin-prick test has > 90% negative predictive value, so the test can rule out suspected food allergy.30 However, the skin-prick test alone cannot be used to diagnose food allergy because it has a high false-positive rate.
Continue to: Allergen-specific serum IgE testing
Allergen-specific serum IgE testing. Measurement of food-specific serum IgE levels is routinely available and requires only a blood specimen. The test can be used in patients with skin disease, and results are not affected by concurrent medications. The presence of food-specific IgE indicates that the patient is sensitized to that allergen and might react upon exposure; children with a higher level of antibody are more likely to react.29
Food-specific serum IgE tests are sensitive but nonspecific for food allergy.31 Broad food-allergy test panels often yield false-positive results that can lead to unnecessary dietary elimination, resulting in years of inconvenience, nutrition problems, and needless health care expense.32
It is appropriate to order tests of specific serum IgE to foods ingested within the 2 to 3–hour window before onset of symptoms to avoid broad food allergy test panels. Like skin-prick testing, positive allergen-specific serum IgE tests alone cannot diagnose food allergy.
Oral food challenge. The double-blind, placebo-controlled oral food challenge is the gold standard for the diagnosis of food allergy. Because this test is time-consuming and technically difficult, single-blind or open food challenges are more common. Oral food challenges should be performed only by a physician or other provider who can identify and treat anaphylaxis.
The oral challenge starts with a very low dose of suspected food allergen, which is gradually increased every 15 to 30 minutes as vital signs are monitored carefully. Patients are observed for an allergic reaction for 1 hour after the final dose.
Continue to: A retrospective study...
A retrospective study showed that, whereas 19% of patients reacted during an open food challenge, only 2% required epinephrine.33 Another study showed that 89% of children whose serum IgE testing was positive for specific foods were able to reintroduce those foods into the diet after a reassuring oral food challenge.34
Other diagnostic tests. The basophil activation assay, measurement of total serum IgE, atopy patch tests, and intradermal tests have been used, but are not recommended, for making the diagnosis of food allergy.4
How can food allergy be managed?
Medical options are few. No approved treatment exists for food allergy. However, it’s important to appropriately manage acute reactions and reduce the risk of subsequent reactions.1 Parents or other caregivers can give an H1 antihistamine, such as diphenhydramine, to infants and children with acute non-life-threatening symptoms. More severe symptoms require rapid administration of epinephrine.1 Auto-injectable epinephrine should be prescribed for parents and caregivers to use as needed for emergency treatment of anaphylaxis.
Team approach. A multidisciplinary approach to managing food allergy—involving physicians, school nurses, dietitians, and teachers, and using educational materials—is ideal. This strategy expands knowledge about food allergies, enhances correct administration of epinephrine, and reduces allergic reactions.1
Avoidance of food allergens can be challenging. Parents and caregivers should be taught to interpret the list of ingredients on food packages. Self-recognition of allergic reactions reduces the likelihood of a subsequent severe allergic reaction.35
Continue to: Importance of individualized care
Importance of individualized care. Health care providers should develop personalized management plans for their patients.1 (A good place to start is with the “Food Allergy & Anaphylaxis Emergency Care Plan”a developed by Food Allergy Research & Education [FARE]). Keep in mind that children with multiple food allergies consume less calcium and protein, and tend to be shorter4; therefore, it’s wise to closely monitor growth in these children and consider referral to a dietitian who is familiar with food allergy.
Potential of immunotherapy. Current research focuses on immunotherapy to induce tolerance to food allergens and protect against life-threatening allergic reactions. The goal of immunotherapy is to lessen adverse reactions to allergenic food proteins; the strategy is to have patients repeatedly ingest small but gradually increasing doses of the food allergen over many months.36 Although immunotherapy has successfully allowed some patients to consume larger quantities of a food without having an allergic reaction, it is unknown whether immunotherapy provides permanent resolution of food allergy. In addition, immunotherapy often causes serious systemic and local reactions.1,36,37
Is prevention possible?
Maternal diet during pregnancy and lactation does not affect development of food allergy in infants.38,39 Breastfeeding might prevent development of atopic disease, but evidence is insufficient to determine whether breastfeeding reduces the likelihood of food allergy.39 In nonbreastfed infants at high risk of food allergy, extensively or partially hydrolyzed formula might help protect against food allergy, compared to standard cow’s milk formula.9,39 Feeding with soy formula rather than cow’s milk formula does not help prevent food allergy.39,40 Pregnant and breastfeeding women should not restrict their diet as a means of preventing food allergy.39
Diet in infancy. Over the years, physicians have debated the proper timing of the introduction of solid foods into the diet of infants. Traditional teaching advocated delaying introduction of potentially allergenic foods to reduce the risk of food allergy; however, this guideline was based on inconsistent evidence,41 and the strategy did not reduce the incidence of food allergy. The prevalence of food allergy is lower in developing countries where caregivers introduce foods to infants at an earlier age.20
A recent large clinical trial indicates that early introduction of peanut-containing foods can help prevent peanut allergy. The study randomized 4- to 11-month-old infants with severe eczema, egg allergy, or both, to eat or avoid peanut products until 5 years of age. Infants assigned to eat peanuts were 81% less likely to develop peanut allergy than infants in the avoidance group. Absolute risk reduction was 14% (number needed to treat = 7).42 Another study showed a nonsignificant (20%) lower relative risk of food allergy in breastfed infants who were fed potentially allergenic foods starting at 3 months of age, compared to being exclusively breastfed.43
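As a quick arithmetic check (derived here only from the effect sizes quoted above, not from the trial’s raw data), the number needed to treat follows directly from the absolute risk reduction, and the 2 reported effect sizes together imply the approximate risk of peanut allergy in the avoidance group:

$$\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.14} \approx 7, \qquad \text{risk with avoidance} \approx \frac{\mathrm{ARR}}{\mathrm{RRR}} = \frac{0.14}{0.81} \approx 17\%.$$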
Based on these data,42,43 NIAID instituted recommendations in 2017 aimed at preventing peanut allergy44:
- In healthy infants without known food allergy and those with mild or moderate eczema, caregivers can introduce peanut-containing foods at home with other solid foods. Parents who are anxious about a possible allergic reaction can introduce peanut products in a physician’s office.
- Infants at high risk of peanut allergy (those with severe eczema or egg allergy, or both) should undergo peanut-specific IgE or skin-prick testing:
- Negative test: indicates low risk of a reaction to peanuts; the infant should start consuming peanut-containing foods at 4 to 6 months of age, at home or in a physician’s office, depending on the parents’ preference
- Positive test: referral to an allergist is recommended.
Do children outgrow food allergy?
Approximately 85% of children who have an allergy to milk, egg, soy, or wheat outgrow their allergy; however, only 15% to 20% of those allergic to peanuts, tree nuts, fish, or shellfish eventually tolerate these foods. The time to resolution of food allergy varies with the food, and resolution might not occur until adolescence.4 No test reliably predicts which children will develop tolerance to any given food, although a decrease in the food-specific serum IgE level or in the size of the wheal on skin-prick testing might portend the onset of tolerance.4
CORRESPONDENCE
Catherine M. Bettcher, MD, FAAFP, Briarwood Family Medicine, 1801 Briarwood Circle, Building #10, Ann Arbor, MI 48108; cbettche@umich.edu.
1. Muraro A, Werfel T, Hoffmann-Sommergruber K, et al. EAACI food allergy and anaphylaxis guidelines: diagnosis and management of food allergy. Allergy. 2014;69:1008-1025.
2. Gupta R, Holdford D, Bilaver L, et al. The economic impact of childhood food allergy in the United States. JAMA Pediatr. 2013;167:1026-1031.
3. Cianferoni A, Muraro A. Food-induced anaphylaxis. Immunol Allergy Clin North Am. 2012;32:165-195.
4. Boyce JA, Assa’ad A, Burks WA, et al. Guidelines for the diagnosis and management of food allergy in the United States: report of the NIAID-sponsored expert panel. J Allergy Clin Immunol. 2010;126(6 suppl):S1-S58.
5. Vierk KA, Koehler KM, Fein SB, et al. Prevalence of self-reported food allergy in American adults and use of food labels. J Allergy Clin Immunol. 2007;119:1504-1510.
6. Allen KJ, Koplin JJ. The epidemiology of IgE-mediated food allergy and anaphylaxis. Immunol Allergy Clin North Am. 2012;32:35-50.
7. Iweala OI, Choudhary SK, Commins SP. Food allergy. Curr Gastroenterol Rep. 2018;20:17.
8. Gupta RS, Warren CM, Smith BM, et al. The public health impact of parent-reported childhood food allergies in the United States. Pediatrics. 2018;142:e20181235.
9. Chafen JJS, Newberry SJ, Riedl MA, et al. Diagnosing and managing common food allergies: a systematic review. JAMA. 2010;303:1848-1856.
10. Nwaru BI, Hickstein L, Panesar SS, et al. Prevalence of common food allergies in Europe: a systematic review and meta-analysis. Allergy. 2014;69:992-1007.
11. Branum AM, Lukacs SL. Food allergy among U.S. children: trends in prevalence and hospitalizations. NCHS Data Brief No. 10. National Center for Health Statistics. October 2008. www.cdc.gov/nchs/products/databriefs/db10.htm. Accessed August 19, 2020.
12. Liu AH, Jaramillo R, Sicherer SH, et al. National prevalence and risk factors for food allergy and relationship to asthma: results from the National Health and Nutrition Examination Survey 2005-2006. J Allergy Clin Immunol. 2010;126:798-806.e13.
13. Gupta RS, Springston EE, Warrier MR, et al. The prevalence, severity, and distribution of childhood food allergy in the United States. Pediatrics. 2011;128:e9-e17.
14. Soller L, Ben-Shoshan M, Harrington DW, et al. Overall prevalence of self-reported food allergy in Canada. J Allergy Clin Immunol. 2012;130:986-988.
15. Venter C, Pereira B, Voigt K, et al. Prevalence and cumulative incidence of food hypersensitivity in the first 3 years of life. Allergy. 2008;63:354-359.
16. Savage J, Johns CB. Food allergy: epidemiology and natural history. Immunol Allergy Clin North Am. 2015;35:45-59.
17. Branum AM, Lukacs SL. Food allergy among children in the United States. Pediatrics. 2009;124:1549-1555.
18. Jackson KD, Howie LD, Akinbami LJ. Trends in allergic conditions among children: United States, 1997-2011. NCHS Data Brief No. 121. National Center for Health Statistics. May 2013. www.cdc.gov/nchs/products/databriefs/db121.htm. Accessed August 19, 2020.
19. Willits EK, Park MA, Hartz MF, et al. Food allergy: a comprehensive population-based cohort study. Mayo Clin Proc. 2018;93:1423-1430.
20. Lack G. Epidemiologic risks for food allergy. J Allergy Clin Immunol. 2008;121:1331-1336.
21. Okada H, Kuhn C, Feillet H, et al. The ‘hygiene hypothesis’ for autoimmune and allergic diseases: an update. Clin Exp Immunol. 2010;160:1-9.
22. Liu AH. Hygiene theory and allergy and asthma prevention. Paediatr Perinat Epidemiol. 2007;21 Suppl 3:2-7.
23. Prince BT, Mandel MJ, Nadeau K, et al. Gut microbiome and the development of food allergy and allergic disease. Pediatr Clin North Am. 2015;62:1479-1492.
24. Kusunoki T, Mukaida K, Morimoto T, et al. Birth order effect on childhood food allergy. Pediatr Allergy Immunol. 2012;23:250-254.
25. Abrams EM, Sicherer SH. Diagnosis and management of food allergy. CMAJ. 2016;188:1087-1093.
26. Perry TT, Matsui EC, Conover-Walker MK, et al. Risk of oral food challenges. J Allergy Clin Immunol. 2004;114:1164-1168.
27. Sampson HA, Muñoz-Furlong A, Campbell RL, et al. Second symposium on the definition and management of anaphylaxis: summary report—Second National Institute of Allergy and Infectious Disease/Food Allergy and Anaphylaxis Network symposium. J Allergy Clin Immunol. 2006;117:391-397.
28. Sampson HA. Food allergy. Part 2: diagnosis and management. J Allergy Clin Immunol. 1999;103:981-989.
29. Lieberman JA, Sicherer SH. Diagnosis of food allergy: epicutaneous skin tests, in vitro tests, and oral food challenge. Curr Allergy Asthma Rep. 2011;11:58-64.
30. Sicherer SH, Sampson HA. Food allergy. J Allergy Clin Immunol. 2010;125(2 suppl 2):S116-S125.
31. Soares-Weiser K, Takwoingi Y, Panesar SS, et al. The diagnosis of food allergy: a systematic review and meta-analysis. Allergy. 2014;69:76-86.
32. Bird JA, Crain M, Varshney P. Food allergen panel testing often results in misdiagnosis of food allergy. J Pediatr. 2015;166:97-100.
33. Lieberman JA, Cox AL, Vitale M, et al. Outcomes of office-based, open food challenges in the management of food allergy. J Allergy Clin Immunol. 2011;128:1120-1122.
34. Fleischer DM, Bock SA, Spears GC, et al. Oral food challenges in children with a diagnosis of food allergy. J Pediatr. 2011;158:578-583.e1.
35. Ewan PW, Clark AT. Long-term prospective observational study of patients with peanut and nut allergy after participation in a management plan. Lancet. 2001;357:111-115.
36. Nurmatov U, Dhami S, Arasi S, et al. Allergen immunotherapy for IgE-mediated food allergy: a systematic review and meta-analysis. Allergy. 2017;72:1133-1147.
37. Sampson HA, Aceves S, Bock SA, et al. Food allergy: a practice parameter update—2014. J Allergy Clin Immunol. 2014;134:1016-1025.e43.
38. Kramer MS, Kakuma R. Maternal dietary antigen avoidance during pregnancy or lactation, or both, for preventing or treating atopic disease in the child. Cochrane Database Syst Rev. 2012;2012(9):CD000133.
39. de Silva D, Geromi M, Halken S, et al. Primary prevention of food allergy in children and adults: systematic review. Allergy. 2014;69:581-589.
40. Osborn DA, Sinn J. Soy formula for prevention of allergy and food intolerance in infants. Cochrane Database Syst Rev. 2004;(3):CD003741.
41. Filipiak B, Zutavern A, Koletzko S, et al; GINI-Group. Solid food introduction in relation to eczema: results from a four-year prospective birth cohort study. J Pediatr. 2007;151:352-358.
42. Du Toit G, Roberts G, Sayre PH, et al; LEAP Study Team. Randomized trial of peanut consumption in infants at risk for peanut allergy. N Engl J Med. 2015;372:803-813.
43. Perkin MR, Logan K, Tseng A, et al; EAT Study Team. Randomized trial of introduction of allergenic foods in breast-fed infants. N Engl J Med. 2016;374:1733-1743.
44. Togias A, Cooper SF, Acebal ML, et al. Addendum guidelines for the prevention of peanut allergy in the United States: report of the National Institute of Allergy and Infectious Diseases-sponsored expert panel. J Allergy Clin Immunol. 2017;139:29-44.
PRACTICE RECOMMENDATIONS
› Diagnose food allergy based on a convincing clinical history paired with positive diagnostic testing. A
› Use a multidisciplinary approach to improve caregiver and patient understanding of food allergy and to reduce allergic reactions. B
› Recommend early introduction of peanut products to infants to reduce the likelihood of peanut allergy. A
Strength of recommendation (SOR)
A Good-quality patient-oriented evidence
B Inconsistent or limited-quality patient-oriented evidence
C Consensus, usual practice, opinion, disease-oriented evidence, case series
An Atypical Long-Term Thiamine Treatment Regimen for Wernicke Encephalopathy
Wernicke-Korsakoff syndrome is a cluster of symptoms attributed to a disorder of vitamin B1 (thiamine) deficiency, manifesting as a combined presentation of alcohol-induced Wernicke encephalopathy (WE) and Korsakoff syndrome (KS).1 While there is consensus on the characteristic presentation and symptoms of WE, there is a lack of agreement on the exact definition of KS. The classic triad describing WE consists of ataxia, ophthalmoplegia, and confusion; however, reports now suggest that a majority of patients exhibit only 1 or 2 of the elements of the triad. KS is often seen as a condition of chronic thiamine deficiency manifesting as memory impairment alongside a cognitive and behavioral decline, with no clear consensus on the sequence of appearance of symptoms. The typical relationship is thought to be a progression of WE to KS if untreated.
From a mental health perspective, WE presents with delirium and confusion, whereas KS manifests with irreversible dementia and cognitive deterioration. Though it is commonly taught that KS-induced memory loss is permanent owing to neuronal damage (classically, damage to the mammillary bodies, though other structures have been implicated as well), more recent research suggests otherwise.2 A review published in 2018, for example, gathered several case reports and case series that suggest significant improvement in memory and cognition attributable to behavioral and pharmacologic interventions, indicating this as an area deserving of further study.3 About 20% of patients diagnosed with WE at autopsy had exhibited none of the classic triad symptoms before death.4 Hence, these conditions are surmised to be significantly underdiagnosed and misdiagnosed.
Though consensus regarding the appropriate treatment regimen for WE is lacking, a common protocol consists of high-dose parenteral thiamine for 4 to 7 days.5 This is usually followed by daily oral thiamine repletion until the patient either achieves complete abstinence from alcohol (the ideal) or decreases consumption. The goal is to replete thiamine stores and then maintain them above the minimum the body requires. In this case report, we describe a long-term, unconventional intramuscular (IM) thiamine repletion regimen used to maintain a patient’s mental status, highlighting gaps in our understanding of the mechanisms at play in WE and its treatment.
Case Presentation
A 65-year-old male patient with a more than 3-decade history of daily hard liquor intake, multiple psychiatric hospitalizations for WE, and a prior suicide attempt presented to the emergency department (ED) with increasingly frequent falls, poor oral intake, confabulation, and diminished verbal communication. A chart review revealed memory impairment; diagnoses of schizoaffective disorder and WE, with confusion that had responded to thiamine administration; and a history of hypertension, hyperlipidemia, osteoarthritis, and urinary retention secondary to benign prostatic hyperplasia (BPH).
On examination, the patient was disoriented with a clouded sensorium. While the history of heavy daily alcohol use was clear in the chart and confirmed by other sources, it appeared unlikely that the patient had been using alcohol in the preceding month because of restricted access in his most recent living environment (a shared apartment with daily nursing assistance). He reported no lightheadedness, dizziness, palpitations, numbness, tingling, or head trauma. He also denied active mood symptoms, auditory or visual hallucinations, and suicidal ideation (SI).
The patient was admitted to the Internal Medicine Service and received a workup for causes of delirium, including consideration of normal pressure hydrocephalus (NPH) and other neurologic conditions. Laboratory tests, including a comprehensive metabolic panel, thyroid stimulating hormone, urinalysis, urine toxicology screen, and vitamin B12 and folate levels, were within normal ranges. Although brain imaging revealed enlarged ventricles, NPH was considered unlikely because of the absence of ophthalmologic abnormalities, such as gaze nystagmus, and of urinary incontinence; the patient did, however, have urinary retention attributed to BPH, which had required an admission a few months earlier. Moreover, magnetic resonance images showed that the ventricles were enlarged slightly out of proportion to the sulci, a pattern seen with predominantly central volume loss rather than the pattern typically seen in NPH.
In light of concern for WE and the patient’s history, treatment with IV thiamine and IV fluids was initiated, and the Liaison Psychiatry Service was consulted for his cognitive disability and management of his mood. Administration of IV thiamine rapidly restored his sensorium, but he became abruptly disorganized when the IV regimen was transitioned to oral thiamine 200 mg 3 times daily. Once medical stabilization was achieved, the patient was transferred to the inpatient psychiatry unit to address the nonresolving cognitive impairment and behavioral disorganization, which now included newly emerging, impulsive, self-harming behaviors such as throwing himself on the ground and banging his head on the floor. These behaviors, along with paucity of speech and decreased oral intake, ultimately warranted constant observation, which led to a decrease in self-harming activity. All of this behavior was noted even though the patient was adherent to oral thiamine. Throughout this time, the patient underwent several transfers back and forth between the Psychiatry and Internal Medicine services because of ongoing concern for possible delirium or WE. However, the Neurology and Internal Medicine services did not feel that WE would explain the patient’s mental and behavioral status, in part because his ongoing adherence to daily oral thiamine was not associated with improvement in mental status.
Recalling the patient’s improvement with parenteral thiamine (IV and IM), the psychiatry unit tried a regimen of thiamine 200 mg IM and 100 mg orally 2 times daily. After about 2 weeks on this regimen, the patient achieved remarkable improvement in his cognitive and behavioral status, with resolution of self-harming behaviors. He was noted to be calmer, more linear, and more oriented, though he remained incompletely oriented throughout his hospitalization. As improvement in sensorium was established and the patient’s hospital stay lengthened (Figure), his mood symptoms began manifesting as guilt, low energy, decreased appetite, withdrawal, and passive SI. A trial of lithium followed but was discontinued because of elevated creatinine levels. As the patient continued to report depression, a multidrug regimen of divalproex, fluoxetine, and quetiapine was administered, which led to remarkable improvement.
At this point, it was concluded that the patient’s thiamine stores might have been replenished, given that alcohol intake had completely ceased, and that he should be weaned off thiamine. The next step was reduction of the twice-daily 200 mg IM thiamine dose to a once-daily regimen, with oral thiamine placed on hold. Over the next 48 hours, the patient became less verbal, more withdrawn, incontinent of urine, and delirious. Twice-daily IM thiamine 200 mg was restarted, but this time the patient demonstrated very slow improvement. After 2 weeks, IM thiamine 200 mg was increased to 3 times daily, and the patient showed marked improvement in recall, mood, and affect.
Several attempts were made to reduce the IM thiamine burden on the patient and/or transition to an exclusively oral regimen. However, he rapidly decompensated within hours of each attempt to taper the IM dose and required immediate reinstatement. On the IM thiamine regimen, he eventually appeared to reach a stable cognitive and affective baseline marked by incomplete orientation but a pleasant affect, no mood complaints, behavioral stability, and an ability to comply with care needs and hold simple conversations. Some speech content remained disorganized, particularly when he was engaged beyond simple exchanges.
The patient was discharged to a skilled nursing facility after a month of 3-times-daily IM thiamine. Within 24 hours, he returned to the ED with the originally reported symptoms of ataxia, agitation, and confusion. On inquiry, it was revealed that the ordered vials of IM thiamine for injection had not arrived with him at the nursing facility and he had missed 2 doses. Blood laboratory results, scans, and all other parameters were otherwise normal, and the patient had been adherent to his prescribed antipsychotics and antidepressants. As anticipated, resuming the IM thiamine regimen restored his baseline within hours. While the confusion and delirium resolved completely with treatment, the memory impairments persisted. The patient has now received a 3-times-daily IM dose of 200 mg thiamine for more than 2 years with a stable cognitive clinical picture.
Discussion
According to data from the 2016 National Survey on Drug Use and Health, 16 million individuals in the US aged ≥ 12 years reported heavy alcohol use, defined as binge drinking on ≥ 5 days in the past month.6,7 Thiamine deficiency is an alcohol-related disorder that is frequently encountered in hospital settings. The deficiency also can occur in the context of malabsorption, malnutrition, a prolonged course of vomiting, and bariatric surgery.8,9
Thiamine deficiency manifesting as WE rarely presents with all 3 elements of the classic triad of gait disturbances, abnormal eye movements, and mental status changes; only 16.5% of patients display the full triad.4 Moreover, there may be additional symptoms not captured by the triad, such as memory impairment, bilateral sixth nerve palsy, ptosis, hypotension, and hypothermia.10,11 This inconsistent presentation makes the diagnosis challenging and therefore demands a high index of suspicion. If undiagnosed and/or untreated, WE can lead to chronic thiamine deficiency, causing permanent brain damage in the guise of KS. This further underscores the importance of timely diagnosis and treatment.
Our case highlights the use of an unconventional thiamine regimen that appeared to be temporally associated with improvement in mental status. The patient’s clouded sensorium and confusion could not be attributed to metabolic, encephalopathic, or infectious pathologies, given the absence of supportive laboratory evidence. He responded to IV and IM doses of thiamine, but repeated attempts to taper the IM doses with the objective of transitioning to oral thiamine supplementation were followed by immediate decompensations in mental status. This was atypical of WE: the patient seemed adequately replete with thiamine, and missing a few doses should not have been enough to deplete his stores. The case thus reflects a unique, chronically established, thiamine-dependent form of WE in which even a single missed dose of thiamine adversely affected the patient’s cognitive baseline. Also notable is the patient’s memory impairment, evident on clinical examination and, per chart review, dating back at least 5 years. This early amnestic component indicates a likely parallel KS process. At the same time, the patient’s long history of alcohol use disorder, prior episodes of WE, and response achieved only with parenteral thiamine repletion further supported the diagnosis of WE and our impression of the scenario.
Even though this patient had prior episodes of WE, diagnostic uncertainty regarding his altered mental status persisted for some time before the nonoral thiamine repletion regimen was implemented. In this admission in particular, the patient’s mental status frequently waxed and waned, and there was additional uncertainty about whether a psychiatric etiology contributed to some elements of his presentation, such as his impulsive self-harm behaviors. This led to recurrent transfers among the Psychiatry Service, Internal Medicine Service, and the ED.
The patient’s presentation did not reflect the classic triad of WE, and although an incomplete triad is in fact the most common clinical picture, various services were reluctant to attribute his symptoms to WE. Once the threshold of suspicion for thiamine deficiency was lowered and the deficit treated more aggressively, the patient seemed to improve tremendously. Memory problems and confabulation, both of which this patient exhibited, are suggestive of KS and are not expected to recover with treatment; yet for this patient there did seem to be some improvement, though not complete resolution. This is consistent with newer evidence suggesting that some recovery from the deficits seen in KS is possible.3
Once WE is diagnosed, the treatment objectives are replenishment of thiamine stores and optimization of the patient’s metabolic state to prevent recurrence. For acute WE symptoms, many regimens call for 250 to 500 mg of IV thiamine 2 to 3 times daily for 3 to 5 days. High-dose IV thiamine (≥ 500 mg daily) has been proposed to be efficacious and free of considerable adverse effects.12 A study conducted at the University of North Carolina described thiamine prescribing practices in a large academic hospital, with the objective of assessing outcomes of ordering high-dose IV thiamine (HDIV, ≥ 200 mg IV twice daily) for patients with encephalopathy.13 The researchers concluded that HDIV, though rarely prescribed, was associated with decreased inpatient mortality in bivariable models; however, in multivariable analyses this decrease was not clinically significant. Our patient benefitted from both IV and IM delivery.
Ideally, after the initial IV thiamine dose, oral administration of thiamine 250 to 1,000 mg is continued until a reduction in, if not abstinence from, alcohol use is achieved.5 Many patients are discharged on an oral maintenance dose of thiamine 100 mg. Oral thiamine is poorly absorbed and less effective for both prophylaxis and treatment of newly diagnosed WE; therefore, it is typically used only after IM or IV replenishment. It remains unclear why this patient required IM thiamine multiple times per day to maintain his mental status, and why he would present with self-injurious behaviors after missing doses. The patient’s response could be attributed to late-onset defects in oral thiamine absorption at the level of the carrier proteins of the brush border and basolateral membranes of the jejunum; however, an invasive procedure such as a jejunal biopsy to establish the definitive etiology was neither necessary nor practical once treatment response was observed.14 Other possible explanations include rapid thiamine metabolism, poor gastrointestinal absorption, and a late-onset deficit in thiamine diffusion and active transport mechanisms (thiamine uptake depends on active transport when availability is low and on passive diffusion when thiamine is readily available). The nature of these mechanisms deserves further study. Few data have been reported on the administration and utility of IM thiamine for chronic WE; hence, our case report is one of the first illustrating the role of this method for sustained repletion.
Conclusions
This case presented a clinical dilemma because the conventional treatment regimen for WE did not yield the desired outcome until the mode and duration of thiamine administration were adjusted. It illustrates the utility of a sustained, intensive thiamine regimen irrespective of sobriety status, as opposed to the traditional regimen of parenteral (primarily IV) thiamine for 3 to 7 days followed by oral repletion until the patient achieves sustained abstinence. In this patient’s case, access to nursing care after discharge facilitated his continued adherence to IM thiamine therapy.
The longitudinal time course of this case suggests a relationship between this route of administration and improvement in symptom burden, and it indicates that this patient may need long-term IM thiamine to maintain his baseline mental status. A long-acting IM thiamine preparation would be of great benefit in such patients. The risk of overdose is low because B-group vitamins are water soluble.
This case report highlights the importance of maintaining a high index of clinical suspicion for WE, given its increasing incidence. We also encourage researchers to consider unconventional treatment options when the conventional regimen fails. In documenting this patient’s course, we invite medical providers to investigate and explore other therapeutic options for WE, with the aim of decreasing both the morbidity and the mortality secondary to the condition.
1. Lough ME. Wernicke’s encephalopathy: expanding the diagnostic toolbox. Neuropsychol Rev. 2012;22(2):181-194. doi:10.1007/s11065-012-9200-7
2. Arts NJ, Walvoort SJ, Kessels RP. Korsakoff’s syndrome: a critical review. Neuropsychiatr Dis Treat. 2017;13:2875-2890. Published 2017 Nov 27. doi:10.2147/NDT.S130078
3. Johnson JM, Fox V. Beyond thiamine: treatment for cognitive impairment in Korsakoff’s syndrome. Psychosomatics. 2018;59(4):311-317. doi:10.1016/j.psym.2018.03.011
4. Harper CG, Giles M, Finlay-Jones R. Clinical signs in the Wernicke-Korsakoff complex: a retrospective analysis of 131 cases diagnosed at necropsy. J Neurol Neurosurg Psychiatry. 1986;49(4):341-345. doi:10.1136/jnnp.49.4.341
5. Xiong GL, Kenedi CA. Wernicke-Korsakoff syndrome. https://emedicine.medscape.com/article/288379-overview. Updated May 16, 2018. Accessed July 24, 2020.
6. Ahrnsbrak R, Bose J, Hedden SL, Lipari RN, Park-Lee E. Results from the 2016 National Survey on Drug Use and Health. https://www.samhsa.gov/data/sites/default/files/NSDUH-FFR1-2016/NSDUH-FFR1-2016.htm. Accessed July 22, 2020.
7. National Institute on Alcohol Abuse and Alcoholism. Drinking levels defined. https://www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/moderate-binge-drinking. Accessed July 24, 2020.
8. Heye N, Terstegge K, Sirtl C, McMonagle U, Schreiber K, Meyer-Gessner M. Wernicke’s encephalopathy--causes to consider. Intensive Care Med. 1994;20(4):282-286. doi:10.1007/BF01708966
9. Aasheim ET. Wernicke encephalopathy after bariatric surgery: a systematic review. Ann Surg. 2008;248(5):714-720. doi:10.1097/SLA.0b013e3181884308
10. Victor M, Adams RD, Collins GH. The Wernicke-Korsakoff Syndrome and Related Neurologic Disorders Due to Alcoholism and Malnutrition. Philadelphia, PA: FA Davis; 1989.
11. Thomson AD, Cook CC, Touquet R, Henry JA; Royal College of Physicians, London. The Royal College of Physicians report on alcohol: guidelines for managing Wernicke’s encephalopathy in the accident and Emergency Department [published correction appears in Alcohol Alcohol. 2003 May-Jun;38(3):291]. Alcohol Alcohol. 2002;37(6):513-521. doi:10.1093/alcalc/37.6.513
12. Nishimoto A, Usery J, Winton JC, Twilla J. High-dose parenteral thiamine in treatment of Wernicke’s encephalopathy: case series and review of the literature. In Vivo. 2017;31(1):121-124. doi:10.21873/invivo.11034
13. Nakamura ZM, Tatreau JR, Rosenstein DL, Park EM. Clinical characteristics and outcomes associated with high-dose intravenous thiamine administration in patients with encephalopathy. Psychosomatics. 2018;59(4):379-387. doi:10.1016/j.psym.2018.01.004
14. Subramanya SB, Subramanian VS, Said HM. Chronic alcohol consumption and intestinal thiamin absorption: effects on physiological and molecular parameters of the uptake process. Am J Physiol Gastrointest Liver Physiol. 2010;299(1):G23-G31. doi:10.1152/ajpgi.00132.2010
Wernicke-Korsakoff syndrome is a cluster of symptoms attributed to a disorder of vitamin B1 (thiamine) deficiency, manifesting as a combined presentation of alcohol-induced Wernicke encephalopathy (WE) and Korsakoff syndrome (KS).1 While there is consensus on the characteristic presentation and symptoms of WE, there is a lack of agreement on the exact definition of KS. The classic triad describing WE consists of ataxia, ophthalmoplegia, and confusion; however, reports now suggest that a majority of patients exhibit only 1 or 2 of the elements of the triad. KS is often seen as a condition of chronic thiamine deficiency manifesting as memory impairment alongside a cognitive and behavioral decline, with no clear consensus on the sequence of appearance of symptoms. The typical relationship is thought to be a progression of WE to KS if untreated.
From a mental health perspective, WE presents with delirium and confusion whereas KS manifests with irreversible dementia and a cognitive deterioration. Though it is commonly taught that KS-induced memory loss is permanent due to neuronal damage (classically identified as damage to the mammillary bodies - though other structures have been implicated as well), more recent research suggest otherwise.2 A review published in 2018, for example, gathered several case reports and case series that suggest significant improvement in memory and cognition attributed to behavioral and pharmacologic interventions, indicating this as an area deserving of further study.3 About 20% of patients diagnosed with WE by autopsy exhibited none of the classical triad symptoms prior to death.4 Hence, these conditions are surmised to be significantly underdiagnosed and misdiagnosed.
Though consensus regarding the appropriate treatment regimen is lacking for WE, a common protocol consists of high-dose parenteral thiamine for 4 to 7 days.5 This is usually followed by daily oral thiamine repletion until the patient either achieves complete abstinence from alcohol (ideal) or decreases consumption. The goal is to allow thiamine stores to replete and maintain at minimum required body levels moving forward. In this case report, we highlight the utilization of a long-term, unconventional intramuscular (IM) thiamine repletion regimen to ensure maintenance of a patient’s mental status, highlighting discrepancies in our understanding of the mechanisms at play in WE and its treatment.
Case Presentation
A 65-year-old male patient with a more than 3-decade history of daily hard liquor intake, multiple psychiatric hospitalizations for WE, and a prior suicide attempt, presented to the emergency department (ED) with increased frequency of falls, poor oral intake, confabulation, and diminished verbal communication. A chart review revealed memory impairment alongside the diagnoses of schizoaffective disorder and WE, and confusion that was responsive to thiamine administration as well as a history of hypertension, hyperlipidemia, osteoarthritis, and urinary retention secondary to benign prostatic hyperplasia (BPH).
On examination the patient was found to be disoriented with a clouded sensorium. While the history of heavy daily alcohol use was clear in the chart and confirmed by other sources, it appeared unlikely that the patient had been using alcohol in the preceding month due to restricted access in his most recent living environment (a shared apartment with daily nursing assistance). He reported no lightheadedness, dizziness, palpitations, numbness, tingling, or any head trauma. He also negated the presence of active mood symptoms, auditory or visual hallucinations or suicidal ideation (SI)
The patient was admitted to the Internal Medicine Service and received a workup for the causes of delirium, including consideration of normal pressure hydrocephalus (NPH) and other neurologic conditions. Laboratory tests including a comprehensive metabolic panel, thyroid stimulating hormone, urinalysis, urine toxicology screen, and vitamin B12 and folate levels were in normal ranges. Although brain imaging revealed enlarged ventricles, NPH was considered unlikely because of the absence of ophthalmologic abnormalities, like gaze nystagmus, and urinary incontinence; conversely, there was some presence of urinary retention attributed to BPH and required an admission a few months prior. Moreover, magnetic resonance images showed that the ventricles were enlarged slightly out of proportion to the sulci, which can be seen with predominantly central volume loss compared with the pattern typically seen in NPH.
In light of concern for WE and the patient's history, treatment with IV thiamine and IV fluids was initiated and the Liaison Psychiatry Service was consulted for cognitive disability and treatment of his mood. Administration of IV thiamine rapidly restored his sensorium, but he became abruptly disorganized as the IV regimen graduated to an oral thiamine dose of 200 mg 3 times daily. Simultaneously, as medical stabilization was achieved, the patient was transferred to the inpatient psychiatry unit to address the nonresolving cognitive impairment and behavioral disorganization. This specifically involved newly emerging, impulsive, self-harming behaviors like throwing himself on the ground and banging his head on the floor. Such behaviors along with paucity of speech and decreased oral intake, ultimately warranted constant observation, which led to a decrease in self-harming activity. All this behavior was noted even though the patient was adherent to oral administration of thiamine. Throughout this time, the patient underwent several transfers back and forth between the Psychiatry and Internal Medicine services due to ongoing concern for the possibility of delirium or WE. However, the Neurology and Internal Medicine services did not feel that WE would explain the patient’s mental and behavioral status, in part due to his ongoing adherence with daily oral thiamine dosing that was not associated with improvement in mental status.
Recollecting the patient’s improvement with the parenteral thiamine regimen (IV and IM), the psychiatry unit tried a thiamine regimen of 200 mg IM and 100 mg oral 2 times daily. After about 2 weeks on this regimen, the patient subsequently achieved remarkable improvement in his cognitive and behavioral status, with resolution of selfharming behaviors. The patient was noted to be calmer, more linear, and more oriented, though he remained incompletely oriented throughout his hospitalization. As improvement in sensorium was established and the patient’s hospital stay prolonged (Figure), his mood symptoms began manifesting as guilt, low energy, decreased appetite, withdrawal, and passive SI. This was followed by a trial of lithium that was discontinued due to elevated creatine levels. As the patient continued to report depression, a multidrug regimen of divalproex, fluoxetine, and quetiapine was administered, which lead to remarkable improvement.
At this time, it was concluded that the stores of thiamine in the patient’s body may have been replenished, the alcohol intake completely ceased and that he needed to be weaned off of thiamine. The next step taken was reduction of the twice daily 200 mg IM thiamine dose to a once daily regimen, and oral thiamine was put on hold. Over the next 48 hours, the patient became less verbal, more withdrawn, incontinent of urine, and delirious. The twice daily IM 200 mg thiamine was restarted, but this time the patient demonstrated very slow improvement. After 2 weeks, the IM thiamine 200 mg was increased to 3 times daily, and the patient showed marked improvement in recall, mood, and effect.
Several attempts were made to reduce the IM thiamine burden on the patient and/ or transition to an exclusively oral regimen. However, he rapidly decompensated within hours of each attempt to taper the IM dose and required immediate reinstation. On the IM thiamine regimen, he eventually appeared to reach a stable cognitive and affective baseline marked by incomplete orientation but pleasant affect, he reported no mood complaints, behavioral stability, and an ability to comply with care needs and have simple conversations. Some speech content remained disorganized particularly if engaged beyond simple exchanges.
The patient was discharged to a skilled nursing facility after a month of 3 times daily IM administration of thiamine. Within the next 24 hours, the patient returned to the ED with the originally reported symptoms of ataxia, agitation, and confusion. On inquiry, it was revealed that the ordered vials of IM thiamine for injection had not arrived with him at the nursing facility and he had missed 2 doses. The blood laboratory results, scans, and all other parameters were otherwise found to be normal and the patient was adherent to his prescribed antipsychotics and antidepressants. As anticipated, restoration of the IM thiamine regimen revived his baseline within hours. While confusion and delirium resolved completely with treatment, the memory impairments persisted. This patient has been administered a 3 times daily IM dose of 200 mg thiamine for more than 2 years with a stable cognitive clinical picture.
Discussion
According to data from the 2016 National Survey on Drug Use and Health, 16 million individuals in the US aged ≥ 12 years reported heavy alcohol use, which is defined as binge drinking on ≥ 5 days in the past month.6,7 Thiamine deficiency is an alcoholrelated disorder that is frequently encountered in hospital settings. This deficiency can also occur in the context of malabsorption, malnutrition, a prolonged course of vomiting, and bariatric surgery.8,9
The deficiency in thiamine, which is sometimes known as WE, manifests rarely with all 3 of the classic triad of gait disturbances, abnormal eye movements, and mental status changes, with only 16.5% of patients displaying all of the triad.4 Moreover, there may be additional symptoms not listed in this triad, such as memory impairment, bilateral sixth nerve palsy, ptosis, hypotension, and hypothermia.10.11 This inconsistent presentation makes the diagnosis challenging and therefore requires a higher threshold for suspicion. If undiagnosed and/or untreated, WE can lead to chronic thiamine deficiency causing permanent brain damage in the guise of KS. This further increases the importance of timely diagnosis and treatment.
Our case highlights the utilization of an unconventional thiamine regimen that appeared to be temporally associated with mental status improvement. The patient’s clouded sensorium and confusion could not be attributed to metabolic, encephalopathic, or infectious pathologies due to the absence of supportive laboratory evidence. He responded to IV and IM doses of thiamine, but repeated attempts to taper the IM doses with the objective of transitioning to oral thiamine supplementation were followed by immediate decompensations in mental status. This was atypical of WE as the patient seemed adequately replete with thiamine, and missing a few doses should not be enough to deplete his stores. Thus, reflecting a unique case of thiamine-dependent chronically set WE when even a single missed dose of thiamine adversely affected the patient’s cognitive baseline. Interesting to note is this patient’s memory issue, as evident by clinical examination and dating back at least 5 years as per chart review. This premature amnestic component of his presentation indicates a likely parallel running KS component of his presentation. Conversely, the patient’s long history of alcohol use disorder, prior episodes of WE, and ideal response achieved only on parenteral thiamine repletion further supported the diagnosis of WE and our impression of the scenario.
Even though this patient had prior episodes of WE, there remained diagnostic uncertainty regarding his altered mental status for some time before the nonoral thiamine repletion treatment was implemented. Particularly in this admission, the patient’s mental status frequently waxed and waned and there was the additional confusion of whether a potential psychiatric etiology contributed to some of the elements of his presentation, such as his impulsive self-harm behaviors. This behavior led to recurrent transfers among the Psychiatry Service, Internal Medicine Service, and the ED.
The patient’s presentation did not reflect the classical triad of WE, and while this is consistent with the majority of clinical manifestations, various services were reluctant to attribute his symptoms to WE. Once the threshold of suspicion of thiamine deficiency was lowered and the deficit treated more aggressively, the patient seemed to improve tremendously. Presence of memory problems and confabulation, both of which this patient exhibited, are suggestive of KS and are not expected to recover with treatment, yet for this patient there did seem to be some improvement—though not complete resolution. This is consistent with newer evidence suggesting that some recovery from the deficits seen in KS is possible.3
Once diagnosed, the treatment objective is the replenishment of thiamine stores and optimization of the metabolic scenario of the body to prevent recurrence. For acute WE symptoms, many regimens call for 250 to 500 mg of IV thiamine supplementation 2 to 3 times daily for 3 to 5 days. High dose IV thiamine (≥ 500 mg daily) has been proposed to be efficacious and free of considerable adverse effects.12 A study conducted at the University of North Carolina described thiamine prescribing practices in a large academic hospital, analyzing data with the objective of assessing outcomes of ordering high-dose IV thiamine (HDIV, ≥ 200 mg IV twice daily) to patients with encephalopathy. 13 The researchers concluded that HDIV, even though rarely prescribed, was associated with decreased inpatient mortality in bivariable models. However, in multivariable analyses this decrease was found to be clinically insignificant. Our patient benefitted from both IV and IM delivery.
Ideally, after the initial IV thiamine dose, oral administration of thiamine 250 to 1,000 mg is continued until a reduction, if not abstinence, from alcohol use is achieved.5 Many patients are discharged on an oral maintenance dose of thiamine 100 mg. Oral thiamine is poorly absorbed and less effective in both prophylaxis and treatment of newly diagnosed WE; therefore, it is typically used only after IM or IV replenishment. It remains unclear why this patient required IM thiamine multiple times per day to maintain his mental status, and why he would present with selfinjurious behaviors after missing doses. The patient’s response can be attributed to late-onset defects in oral thiamine absorption at the carrier protein level of the brush border and basolateral membranes of his jejunum; however, an invasive procedure like a jejunal biopsy to establish the definitive etiology was neither necessary nor practical once treatment response was observed. 14 Other possible explanations include rapid thiamine metabolism, poor gastrointestinal absorption and a late-onset deficit in the thiamine diffusion mechanisms, and active transport systems (thiamine utilization depends on active transport in low availability states and passive transport when readily available). The nature of these mechanisms deserves further study. Less data have been reported on the administration and utility of IM thiamine for chronic WE; hence, our case report is one of the first illustrating the role of this method for sustained repletion.
Conclusions
This case presented a clinical dilemma because the conventional treatment regimen for WE didn’t yield the desired outcome until the mode and duration of thiamine administration were adjusted. It illustrates the utility of a sustained intensive thiamine regimen irrespective of sobriety status, as opposed to the traditional regimen of parenteral (primarily IV) thiamine for 3 to 7 days, followed by oral repletion until the patient achieves sustained abstinence. In this patient’s case, access to nursing care postdischarge facilitated his continued adherence to IM thiamine therapy.
The longitudinal time course of this case suggests a relationship between this route of administration and improvement in symptom burden and indicates that this patient may have a long-term need for IM thiamine to maintain his baseline mental status. Of great benefit in such patients would be the availability of a long-acting IM thiamine therapy. Risk of overdose is unlikely due to the water solubility of B group vitamins.
This case report highlights the importance of setting a high clinical suspicion for WE due to its ever-increasing incidence in these times. We also wish to direct researchers to consider other out-of-the-box treatment options in case of failure of the conventional regime. In documenting this patient report, we invite more medical providers to investigate and explore other therapeutic options for WE treatment with the aim of decreasing both morbidity and mortality secondary to the condition.
Wernicke-Korsakoff syndrome is a cluster of symptoms attributed to a disorder of vitamin B1 (thiamine) deficiency, manifesting as a combined presentation of alcohol-induced Wernicke encephalopathy (WE) and Korsakoff syndrome (KS).1 While there is consensus on the characteristic presentation and symptoms of WE, there is a lack of agreement on the exact definition of KS. The classic triad describing WE consists of ataxia, ophthalmoplegia, and confusion; however, reports now suggest that a majority of patients exhibit only 1 or 2 of the elements of the triad. KS is often seen as a condition of chronic thiamine deficiency manifesting as memory impairment alongside a cognitive and behavioral decline, with no clear consensus on the sequence of appearance of symptoms. The typical relationship is thought to be a progression of WE to KS if untreated.
From a mental health perspective, WE presents with delirium and confusion whereas KS manifests with irreversible dementia and a cognitive deterioration. Though it is commonly taught that KS-induced memory loss is permanent due to neuronal damage (classically identified as damage to the mammillary bodies - though other structures have been implicated as well), more recent research suggest otherwise.2 A review published in 2018, for example, gathered several case reports and case series that suggest significant improvement in memory and cognition attributed to behavioral and pharmacologic interventions, indicating this as an area deserving of further study.3 About 20% of patients diagnosed with WE by autopsy exhibited none of the classical triad symptoms prior to death.4 Hence, these conditions are surmised to be significantly underdiagnosed and misdiagnosed.
Though consensus regarding the appropriate treatment regimen is lacking for WE, a common protocol consists of high-dose parenteral thiamine for 4 to 7 days.5 This is usually followed by daily oral thiamine repletion until the patient either achieves complete abstinence from alcohol (ideal) or decreases consumption. The goal is to allow thiamine stores to replete and maintain at minimum required body levels moving forward. In this case report, we highlight the utilization of a long-term, unconventional intramuscular (IM) thiamine repletion regimen to ensure maintenance of a patient’s mental status, highlighting discrepancies in our understanding of the mechanisms at play in WE and its treatment.
Case Presentation
A 65-year-old male patient with a more than 3-decade history of daily hard liquor intake, multiple psychiatric hospitalizations for WE, and a prior suicide attempt presented to the emergency department (ED) with increased frequency of falls, poor oral intake, confabulation, and diminished verbal communication. A chart review revealed diagnoses of schizoaffective disorder and WE, memory impairment, and prior confusion responsive to thiamine administration, as well as a history of hypertension, hyperlipidemia, osteoarthritis, and urinary retention secondary to benign prostatic hyperplasia (BPH).
On examination, the patient was found to be disoriented with a clouded sensorium. While the history of heavy daily alcohol use was clear in the chart and confirmed by other sources, it appeared unlikely that the patient had been using alcohol in the preceding month because of restricted access in his most recent living environment (a shared apartment with daily nursing assistance). He reported no lightheadedness, dizziness, palpitations, numbness, tingling, or head trauma. He also denied active mood symptoms, auditory or visual hallucinations, and suicidal ideation (SI).
The patient was admitted to the Internal Medicine Service and received a workup for the causes of delirium, including consideration of normal pressure hydrocephalus (NPH) and other neurologic conditions. Laboratory tests, including a comprehensive metabolic panel, thyroid stimulating hormone, urinalysis, urine toxicology screen, and vitamin B12 and folate levels, were within normal ranges. Although brain imaging revealed enlarged ventricles, NPH was considered unlikely because of the absence of ophthalmologic abnormalities, such as gaze nystagmus, and of urinary incontinence; the patient instead had urinary retention, attributed to BPH, which had required an admission a few months prior. Moreover, magnetic resonance images showed the ventricles to be enlarged slightly out of proportion to the sulci, a pattern more consistent with predominantly central volume loss than with NPH.
In light of concern for WE and the patient's history, treatment with IV thiamine and IV fluids was initiated, and the Liaison Psychiatry Service was consulted for evaluation of his cognitive impairment and management of his mood symptoms. Administration of IV thiamine rapidly restored his sensorium, but he became abruptly disorganized when the IV regimen was transitioned to oral thiamine 200 mg 3 times daily. Once medical stabilization was achieved, the patient was transferred to the inpatient psychiatry unit to address the nonresolving cognitive impairment and behavioral disorganization, which now included newly emerging, impulsive, self-harming behaviors such as throwing himself on the ground and banging his head on the floor. These behaviors, along with paucity of speech and decreased oral intake, ultimately warranted constant observation, which led to a decrease in self-harming activity. All of this occurred even though the patient was adherent to oral thiamine. Throughout this time, the patient underwent several transfers back and forth between the Psychiatry and Internal Medicine services because of ongoing concern for possible delirium or WE. However, the Neurology and Internal Medicine services did not feel that WE would explain the patient's mental and behavioral status, in part because his ongoing adherence to daily oral thiamine dosing was not associated with improvement in mental status.
Recalling the patient's improvement with the parenteral thiamine regimen (IV and IM), the psychiatry unit tried a regimen of thiamine 200 mg IM and 100 mg oral 2 times daily. After about 2 weeks on this regimen, the patient achieved remarkable improvement in his cognitive and behavioral status, with resolution of self-harming behaviors. The patient was noted to be calmer, more linear, and more oriented, though he remained incompletely oriented throughout his hospitalization. As improvement in sensorium was established and the patient's hospital stay grew prolonged (Figure), his mood symptoms began manifesting as guilt, low energy, decreased appetite, withdrawal, and passive SI. A trial of lithium followed but was discontinued because of elevated creatinine levels. As the patient continued to report depression, a multidrug regimen of divalproex, fluoxetine, and quetiapine was administered, which led to marked improvement.
At this time, it was concluded that the patient's thiamine stores had likely been replenished, that alcohol intake had completely ceased, and that he could be weaned off thiamine. The twice-daily 200 mg IM thiamine dose was therefore reduced to once daily, and oral thiamine was held. Over the next 48 hours, the patient became less verbal, more withdrawn, incontinent of urine, and delirious. The twice-daily IM 200 mg thiamine was restarted, but this time the patient improved very slowly. After 2 weeks, the IM thiamine 200 mg was increased to 3 times daily, and the patient showed marked improvement in recall, mood, and affect.
Several attempts were made to reduce the IM thiamine burden on the patient and/or transition to an exclusively oral regimen. However, he rapidly decompensated within hours of each attempt to taper the IM dose and required immediate reinstatement. On the IM thiamine regimen, he eventually appeared to reach a stable cognitive and affective baseline marked by incomplete orientation but a pleasant affect, no mood complaints, behavioral stability, and an ability to comply with care needs and hold simple conversations. Some speech content remained disorganized, particularly if he was engaged beyond simple exchanges.
The patient was discharged to a skilled nursing facility after a month of 3-times-daily IM administration of thiamine. Within 24 hours, he returned to the ED with the originally reported symptoms of ataxia, agitation, and confusion. On inquiry, it was revealed that the ordered vials of IM thiamine for injection had not arrived with him at the nursing facility, and he had missed 2 doses. Blood laboratory results, scans, and all other parameters were otherwise normal, and the patient was adherent to his prescribed antipsychotics and antidepressants. As anticipated, restoration of the IM thiamine regimen restored his baseline within hours. While confusion and delirium resolved completely with treatment, the memory impairments persisted. This patient has now received a 3-times-daily IM dose of 200 mg thiamine for more than 2 years with a stable cognitive clinical picture.
Discussion
According to data from the 2016 National Survey on Drug Use and Health, 16 million individuals in the US aged ≥ 12 years reported heavy alcohol use, defined as binge drinking on ≥ 5 days in the past month.6,7 Thiamine deficiency is an alcohol-related disorder that is frequently encountered in hospital settings. This deficiency can also occur in the context of malabsorption, malnutrition, a prolonged course of vomiting, and bariatric surgery.8,9
Thiamine deficiency manifesting as WE rarely presents with all 3 elements of the classic triad of gait disturbances, abnormal eye movements, and mental status changes; only 16.5% of patients display the full triad.4 Moreover, there may be additional symptoms not captured by the triad, such as memory impairment, bilateral sixth nerve palsy, ptosis, hypotension, and hypothermia.10,11 This inconsistent presentation makes the diagnosis challenging and therefore demands a high index of suspicion. If undiagnosed and/or untreated, WE can lead to chronic thiamine deficiency causing permanent brain damage in the form of KS, which further increases the importance of timely diagnosis and treatment.
Our case highlights the use of an unconventional thiamine regimen that appeared to be temporally associated with improvement in mental status. The patient's clouded sensorium and confusion could not be attributed to metabolic, encephalopathic, or infectious pathologies, given the absence of supportive laboratory evidence. He responded to IV and IM doses of thiamine, but repeated attempts to taper the IM doses with the objective of transitioning to oral thiamine supplementation were followed by immediate decompensations in mental status. This was atypical of WE: the patient seemed adequately replete with thiamine, and missing a few doses should not have been enough to deplete his stores. The case thus appears to represent a chronically established, thiamine-dependent form of WE in which even a single missed dose of thiamine adversely affected the patient's cognitive baseline. Also notable is the patient's memory impairment, evident on clinical examination and dating back at least 5 years per chart review; this amnestic component suggests a parallel KS process. At the same time, the patient's long history of alcohol use disorder, prior episodes of WE, and response achieved only on parenteral thiamine repletion further supported the diagnosis of WE and our impression of the scenario.
Even though this patient had prior episodes of WE, there remained diagnostic uncertainty regarding his altered mental status for some time before the nonoral thiamine repletion treatment was implemented. During this admission in particular, the patient's mental status frequently waxed and waned, and there was the additional question of whether a psychiatric etiology contributed to some elements of his presentation, such as his impulsive self-harm behaviors. This uncertainty led to recurrent transfers among the Psychiatry Service, Internal Medicine Service, and the ED.
The patient's presentation did not reflect the classic triad of WE, and although incomplete presentations are in fact the majority of clinical manifestations, the various services were reluctant to attribute his symptoms to WE. Once the threshold of suspicion for thiamine deficiency was lowered and the deficit treated more aggressively, the patient improved tremendously. Memory problems and confabulation, both of which this patient exhibited, are suggestive of KS and are not expected to recover with treatment; yet in this patient there did seem to be some improvement, though not complete resolution. This is consistent with newer evidence suggesting that some recovery from the deficits seen in KS is possible.3
Once WE is diagnosed, the treatment objective is replenishment of thiamine stores and optimization of the body's metabolic state to prevent recurrence. For acute WE symptoms, many regimens call for 250 to 500 mg of IV thiamine supplementation 2 to 3 times daily for 3 to 5 days. High-dose IV thiamine (≥ 500 mg daily) has been proposed to be efficacious and free of considerable adverse effects.12 A study conducted at the University of North Carolina described thiamine prescribing practices in a large academic hospital, with the objective of assessing outcomes of ordering high-dose IV thiamine (HDIV, ≥ 200 mg IV twice daily) for patients with encephalopathy.13 The researchers concluded that HDIV, though rarely prescribed, was associated with decreased inpatient mortality in bivariable models; however, the association did not remain significant in multivariable analyses. Our patient benefited from both IV and IM delivery.
Ideally, after the initial IV thiamine dose, oral thiamine 250 to 1,000 mg is continued until a reduction in, if not abstinence from, alcohol use is achieved.5 Many patients are discharged on an oral maintenance dose of thiamine 100 mg. Oral thiamine is poorly absorbed and less effective in both prophylaxis and treatment of newly diagnosed WE; therefore, it is typically used only after IM or IV replenishment. It remains unclear why this patient required IM thiamine multiple times per day to maintain his mental status, and why he would present with self-injurious behaviors after missing doses. The patient's response may be attributable to late-onset defects in oral thiamine absorption at the carrier protein level of the brush border and basolateral membranes of the jejunum; however, an invasive procedure such as a jejunal biopsy to establish the definitive etiology was neither necessary nor practical once treatment response was observed.14 Other possible explanations include rapid thiamine metabolism, poor gastrointestinal absorption, and a late-onset deficit in thiamine diffusion mechanisms and active transport systems (thiamine uptake depends on active transport when availability is low and passive transport when thiamine is readily available). The nature of these mechanisms deserves further study. Few data have been reported on the administration and utility of IM thiamine for chronic WE; hence, our case report is among the first to illustrate the role of this method for sustained repletion.
Conclusions
This case presented a clinical dilemma because the conventional treatment regimen for WE did not yield the desired outcome until the mode and duration of thiamine administration were adjusted. It illustrates the utility of a sustained intensive thiamine regimen irrespective of sobriety status, as opposed to the traditional regimen of parenteral (primarily IV) thiamine for 3 to 7 days followed by oral repletion until the patient achieves sustained abstinence. In this patient's case, access to nursing care postdischarge facilitated his continued adherence to IM thiamine therapy.
The longitudinal time course of this case suggests a relationship between this route of administration and improvement in symptom burden and indicates that this patient may have a long-term need for IM thiamine to maintain his baseline mental status. A long-acting IM thiamine preparation would be of great benefit in such patients. The risk of overdose is low because B-group vitamins are water soluble.
This case report highlights the importance of maintaining a high index of clinical suspicion for WE, given its increasing incidence. We also encourage researchers to consider unconventional treatment options when the conventional regimen fails. In documenting this case, we invite clinicians to investigate and explore other therapeutic options for WE with the aim of decreasing both the morbidity and the mortality secondary to the condition.
1. Lough ME. Wernicke’s encephalopathy: expanding the diagnostic toolbox. Neuropsychol Rev. 2012;22(2):181-194. doi:10.1007/s11065-012-9200-7
2. Arts NJ, Walvoort SJ, Kessels RP. Korsakoff’s syndrome: a critical review. Neuropsychiatr Dis Treat. 2017;13:2875-2890. Published 2017 Nov 27. doi:10.2147/NDT.S130078
3. Johnson JM, Fox V. Beyond thiamine: treatment for cognitive impairment in Korsakoff’s syndrome. Psychosomatics. 2018;59(4):311-317. doi:10.1016/j.psym.2018.03.011
4. Harper CG, Giles M, Finlay-Jones R. Clinical signs in the Wernicke-Korsakoff complex: a retrospective analysis of 131 cases diagnosed at necropsy. J Neurol Neurosurg Psychiatry. 1986;49(4):341-345. doi:10.1136/jnnp.49.4.341
5. Xiong GL, Kenedi CA. Wernicke-Korsakoff syndrome. https://emedicine.medscape.com/article/288379-overview. Updated May 16, 2018. Accessed July 24, 2020.
6. Ahrnsbrak R, Bose J, Hedden SL, Lipari RN, Park-Lee E. Results from the 2016 National Survey on Drug Use and Health. https://www.samhsa.gov/data/sites/default/files/NSDUH-FFR1-2016/NSDUH-FFR1-2016.htm. Accessed July 22, 2020.
7. National Institute on Alcohol Abuse and Alcoholism. Drinking Levels Defined. https://www.niaaa.nih.gov/alcohol-health/overview-alcohol-consumption/moderate-binge-drinking. Accessed July 24, 2020.
8. Heye N, Terstegge K, Sirtl C, McMonagle U, Schreiber K, Meyer-Gessner M. Wernicke’s encephalopathy--causes to consider. Intensive Care Med. 1994;20(4):282-286. doi:10.1007/BF01708966
9. Aasheim ET. Wernicke encephalopathy after bariatric surgery: a systematic review. Ann Surg. 2008;248(5):714-720. doi:10.1097/SLA.0b013e3181884308
10. Victor M, Adams RD, Collins GH. The Wernicke-Korsakoff Syndrome and Related Neurologic Disorders Due to Alcoholism and Malnutrition. Philadelphia, PA: FA Davis; 1989.
11. Thomson AD, Cook CC, Touquet R, Henry JA; Royal College of Physicians, London. The Royal College of Physicians report on alcohol: guidelines for managing Wernicke’s encephalopathy in the accident and Emergency Department [published correction appears in Alcohol Alcohol. 2003 May-Jun;38(3):291]. Alcohol Alcohol. 2002;37(6):513-521. doi:10.1093/alcalc/37.6.513
12. Nishimoto A, Usery J, Winton JC, Twilla J. High-dose parenteral thiamine in treatment of Wernicke’s encephalopathy: case series and review of the literature. In Vivo. 2017;31(1):121-124. doi:10.21873/invivo.11034
13. Nakamura ZM, Tatreau JR, Rosenstein DL, Park EM. Clinical characteristics and outcomes associated with high-dose intravenous thiamine administration in patients with encephalopathy. Psychosomatics. 2018;59(4):379-387. doi:10.1016/j.psym.2018.01.004
14. Subramanya SB, Subramanian VS, Said HM. Chronic alcohol consumption and intestinal thiamin absorption: effects on physiological and molecular parameters of the uptake process. Am J Physiol Gastrointest Liver Physiol. 2010;299(1):G23-G31. doi:10.1152/ajpgi.00132.2010
Using Artificial Intelligence for COVID-19 Chest X-ray Diagnosis
The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the respiratory disease coronavirus disease-19 (COVID-19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province, China, on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus had infected > 21 million people, with > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4
Prior to this century, members of the coronavirus family had minimal impact on human health.5 However, in the past 20 years, outbreaks have highlighted the emerging importance of coronaviruses in morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similarly significant health and societal implications. The challenges presented to health care providers (HCPs) by such novel viral pathogens are numerous and include methods for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis, a challenge made evident with COVID-19 by the time required to develop rapid and effective diagnostic modalities.
We have previously reported the utility of using artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI achieved by using mathematical models trained on sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and achieve complex computational tasks often far more quickly and precisely than humans can.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14
In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This is done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19.azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.
Methods
For the training dataset, 103 CXR images of COVID-19 were downloaded from the GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of normal lungs were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 set to 500 images by slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) of the original images, using the Augmentor Python package.17
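For readers who wish to reproduce this step, the following is a minimal sketch of the augmentation pipeline using the Augmentor package, with the parameters reported above; the source directory and the exact sample count are illustrative assumptions.

```python
# Minimal sketch of the data augmentation described above, using the
# Augmentor Python package (pip install Augmentor). The input directory
# and the sample count (103 originals expanded to ~500 images) are
# assumptions for illustration.
import Augmentor

# Point the pipeline at the folder containing the original COVID-19 CXRs.
pipeline = Augmentor.Pipeline("data/covid_cxr")

# Slight rotation: probability = 1, maximum 5 degrees in either direction.
pipeline.rotate(probability=1.0, max_left_rotation=5, max_right_rotation=5)

# Random zoom: probability = 0.5, retaining 90% of the original image area.
pipeline.zoom_random(probability=0.5, percentage_area=0.9)

# Generate 397 augmented images (written to an "output" subdirectory),
# bringing the COVID-19 class to roughly 500 images.
pipeline.sample(397)
```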
Validation Dataset
For the validation dataset, 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) PACS (picture archiving and communication system). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 CXR pneumonia images from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed with a positive test result from the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18
Microsoft CustomVision
Microsoft CustomVision is an automated image classification and object detection system that is a part of Microsoft Azure Cognitive Services (azure.microsoft.com). It has a pay-as-you-go model, with fees depending on computing needs and usage, and offers a free trial for 2 initial projects. The service is online with an easy-to-follow graphical user interface. No coding skills are necessary.
We created a new classification project in CustomVision and chose a compact general domain for small model size and easy export to the TensorFlow.js model format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we uploaded our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates. The final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once the images were uploaded, CustomVision trained itself on the dataset when training was initiated (Figure 1).
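Although we used the no-code web interface, the same workflow (project creation with a compact domain, tagging, upload, and training) can also be scripted with the Azure Custom Vision Python SDK. The sketch below is illustrative only; the endpoint, key, and directory names are placeholder assumptions, and exact calls may vary by SDK version.

```python
# Illustrative sketch of the CustomVision workflow via the Python SDK
# (pip install azure-cognitiveservices-vision-customvision). Endpoint,
# key, project name, and image paths are placeholder assumptions.
import os
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

# Choose a compact classification domain so the trained model can be
# exported (eg, to TensorFlow.js) for use outside the Azure service.
domain = next(d for d in trainer.get_domains()
              if d.type == "Classification" and "compact" in d.name.lower())
project = trainer.create_project("covid-cxr", domain_id=domain.id)

# One tag per class, mirroring the labels used in the web interface.
for label, folder in [("covid pneumonia", "data/covid"),
                      ("non-covid pneumonia", "data/pneumonia"),
                      ("normal lung", "data/normal")]:
    tag = trainer.create_tag(project.id, label)
    for fname in os.listdir(folder):
        with open(os.path.join(folder, fname), "rb") as f:
            trainer.create_images_from_data(project.id, f.read(), [tag.id])

iteration = trainer.train_project(project.id)  # starts training
```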
Website Creation
CustomVision was used to train the model. CustomVision can execute the model continuously, or the model can be compacted and decoupled from the service; in this case, the model was compacted and decoupled for use in an online application. An Angular web application was created with TensorFlow.js. Within a user’s web browser, the model is executed when an image of a CXR is submitted, and confidence values for each classification are returned. In this design, after the initial webpage and model are downloaded, the webpage no longer needs to access any server components and performs all operations in the browser. Although the solution works well on mobile phone browsers and in low-bandwidth situations, the quality of predictions may depend on the browser and device used. At no time is an image submitted to the cloud.
Results
Overall, our trained model showed 92.9% precision and recall. Precision and recall for each label were 98.9% and 94.8%, respectively, for COVID-19 pneumonia; 91.8% and 89%, respectively, for non-COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lung (Figure 2). Next, we validated the trained model by making individual predictions on the 30 images from the VA dataset. Our model performed well, with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).
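As a check on these figures, the validation metrics follow directly from a 2 x 2 confusion matrix in which COVID-19 pneumonia is the positive class. The counts below (10 true positives, 0 false negatives, 1 false positive, 19 true negatives) are our inference from the reported percentages rather than numbers stated in the text, but they reproduce the reported values exactly.

```python
# Worked example: validation metrics from an assumed confusion matrix
# (COVID-19 pneumonia = positive class; 30 VA images total).
tp, fn = 10, 0   # all 10 COVID-19 CXRs correctly identified
fp, tn = 1, 19   # 1 of the 20 non-COVID CXRs misclassified as COVID-19

sensitivity = tp / (tp + fn)                  # 10/10 = 1.00 -> 100%
specificity = tn / (tn + fp)                  # 19/20 = 0.95 -> 95%
accuracy = (tp + tn) / (tp + fn + fp + tn)    # 29/30 ~ 0.97 -> 97%
ppv = tp / (tp + fp)                          # 10/11 ~ 0.91 -> 91%
npv = tn / (tn + fn)                          # 19/19 = 1.00 -> 100%

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}, "
      f"accuracy={accuracy:.0%}, PPV={ppv:.0%}, NPV={npv:.0%}")
```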
Discussion
We successfully demonstrated the potential of using AI algorithms to assess CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate CXRs of COVID-19 pneumonia from pneumonia of other etiologies and from normal lungs. We then tested our model against known patients from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we proceeded to create a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms. A health care provider can take a CXR photo with a mobile phone or upload the image file, and the ML algorithm provides the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung (Figure 3).
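The public website runs the exported model entirely in the browser; as an alternative integration path, a trained CustomVision classifier can also be queried programmatically through the Custom Vision prediction SDK. The sketch below is a hypothetical illustration; the endpoint, key, project ID, and published iteration name are placeholders, not values from our deployment.

```python
# Hypothetical sketch: querying a published CustomVision classifier for
# class probabilities. Endpoint, key, project ID, and iteration name are
# placeholder assumptions.
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"})
predictor = CustomVisionPredictionClient("<endpoint>", credentials)

with open("patient_cxr.png", "rb") as image:
    results = predictor.classify_image("<project-id>", "<iteration-name>", image.read())

# Each prediction carries a tag name and a probability, analogous to the
# confidence values the website displays.
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.1%}")
```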
Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is reverse transcriptase PCR (RT-PCR); however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests, making test kits available to health care providers on an adequate scale presents an additional challenge, as was evident with COVID-19.
Use of X-ray vs Computed Tomography
The lack of availability of diagnostic RT-PCR for COVID-19 initially placed increased reliance on presumptive diagnoses via imaging in some situations.23 Most of the literature evaluating imaging of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America expert consensus statement on chest CT for COVID-19 states that CT findings can even precede positivity on RT-PCR in some cases.22 However, it currently does not recommend the use of CT scanning as a screening tool. Furthermore, the actual sensitivity and specificity of CT interpretation by radiologists for COVID-19 are unknown.22
Characteristic CT findings include ground-glass opacities (GGOs) and consolidation most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports that described abnormalities as most often bilateral and peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive at detecting changes with COVID-19 than does CXR, with reports that a minority of patients exhibited CT changes before changes were visible on CXR.23,26
We focused our study on the potential of AI in the examination of CXRs in patients with COVID-19, as there are several limitations to the routine use of CT for conditions such as COVID-19. Aside from the considerably longer time required to obtain CTs, there are issues with contamination of CT suites, sometimes requiring a dedicated COVID-19 CT scanner.23,28 The time required for decontamination, or limited utilization of CT suites, can delay or disrupt services for patients with and without COVID-19. Because of these factors, CXR may be a better resource for minimizing the risk of infection to other patients. Also, accurate assessment of CXR abnormalities may identify COVID-19 in patients whose CXR was performed for other purposes.23 CXR is more readily available than CT, especially in more remote or underdeveloped areas.28 Finally, as with CT, CXR abnormalities are reported to have appeared before RT-PCR tests became positive in a minority of patients.23
CXR findings described in patients with COVID-19 are similar to those on CT and include GGOs, consolidation, and hazy increased opacities.23,25,26,28,29 As with CT, the majority of patients demonstrated greater involvement in the lower zones and peripherally, and most showed bilateral involvement.23,25,26,28,29 However, while these findings are common in patients with COVID-19, they are not specific and can be seen in other conditions, such as other viral pneumonias, bacterial pneumonia, injury from drug toxicity, inhalation injury, connective tissue disease, and idiopathic conditions.
Application of AI for COVID-19
Applications of AI in interpreting radiographs of various types are numerous, and an extensive literature exists on the topic.30 Using deep learning algorithms, AI has multiple possible roles in augmenting traditional radiograph interpretation, including screening, triage, and increasing the speed of diagnosis. It also can provide a rapid “second opinion” to support the radiologist’s final interpretation. In areas with critical shortages of radiologists, AI potentially can be used to render the definitive diagnosis. In COVID-19, imaging studies have been shown to correlate with disease severity and mortality, and AI could assist in monitoring the course of the disease as it progresses and potentially identify patients at greatest risk.27 Furthermore, early results from PCR were considered suboptimal, and patients with COVID-19 can test negative initially even by reliable testing methodologies. As AI technology progresses, AI-assisted interpretation could help detect disease and guide triage and treatment in patients with high suspicion of COVID-19 but negative initial PCR results, or in situations where test availability is limited or results are delayed. A rapid diagnostic test as simple as a CXR that could reliably support containment and prevention early in the course of a contagion such as COVID-19 would have numerous potential benefits.
Few studies have assessed using AI in the radiologic diagnosis of COVID-19, most of which use CT scanning. Bai and colleagues demonstrated increased accuracy, sensitivity, and specificity in distinguishing chest CTs of COVID-19 patients from other types of pneumonia.21,31 A separate study demonstrated the utility of using AI to differentiate COVID-19 from community-acquired pneumonia with CT.32 However, the effective utility of AI for CXR interpretation also has been demonstrated.14,33 Implementation of convolutional neural network layers has allowed for reliable differentiation of viral and bacterial pneumonia with CXR imaging.34 Evidence suggests that there is great potential in the application of AI in the interpretation of radiographs of all types.
Finally, we have developed a publicly available website based on our studies.19 This website is for research use only, as it is based on data from our preliminary investigation. Images must have protected health information removed before being uploaded to the website. The information on the website, including text, graphics, images, or other material, is for research and may not be appropriate for all circumstances. The website does not provide medical, professional, or licensed advice and is not a substitute for consultation with an HCP. Medical advice should be sought from a qualified HCP for any questions, and the website should not be used for medical diagnosis or treatment.
Limitations
In our preliminary study, we have demonstrated the potential impact AI can have on multiple aspects of patient care for emerging pathogens such as COVID-19 using a test as readily available as a CXR. However, several limitations of this investigation should be mentioned. The study is retrospective, with a limited sample size and with X-rays from patients at various stages of COVID-19 pneumonia. Also, the cases of non-COVID-19 pneumonia were not stratified by type or etiology. Our intent was to demonstrate the potential of AI in differentiating COVID-19 pneumonia from non-COVID-19 pneumonia of any etiology, though future studies should compare COVID-19 cases with more specific types of pneumonia, such as those of bacterial or viral origin. Furthermore, the present study does not address the potential effects of additional radiographic findings from coexistent conditions, such as pulmonary edema as seen in congestive heart failure, pleural effusions (which can be seen with COVID-19 pneumonia, though rarely), and interstitial lung disease. Future studies are required to address these issues. Ultimately, prospective studies of AI-assisted radiographic interpretation in conditions such as COVID-19 are required to demonstrate the impact on diagnosis, treatment, outcome, and patient safety as these technologies are implemented.
Conclusions
We have used a readily available, commercial platform to demonstrate the potential of AI to assist in the successful diagnosis of COVID-19 pneumonia on CXR images. While this technology has numerous applications in radiology, we have focused on the potential impact on future world health crises such as COVID-19. The findings have implications for screening and triage, initial diagnosis, monitoring disease progression, and identifying patients at increased risk of morbidity and mortality. Based on the data, a website was created to demonstrate how such technologies could be shared and distributed to others to combat entities such as COVID-19 moving forward. Our study offers a small window into the potential for how AI will likely dramatically change the practice of medicine in the future.
1. World Health Organization. Coronavirus disease (COVID-19) pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus2019. Updated August 23, 2020. Accessed August 24, 2020.
2. World Health Organization. WHO Director-General’s opening remarks at the media briefing on COVID-19 - 11 March 2020. https://www.who.int/dg/speeches/detail/who-director-general-sopening-remarks-at-the-media-briefing-on-covid-19---11-march2020. Published March 11, 2020. Accessed August 24, 2020.
3. World Health Organization. Coronavirus disease (COVID-19): situation report--209. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200816-covid-19-sitrep-209.pdf. Updated August 16, 2020. Accessed August 24, 2020.
4. Nicola M, Alsafi Z, Sohrabi C, et al. The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int J Surg. 2020;78:185-193. doi:10.1016/j.ijsu.2020.04.018
5. da Costa VG, Moreli ML, Saivish MV. The emergence of SARS, MERS and novel SARS-2 coronaviruses in the 21st century. Arch Virol. 2020;165(7):1517-1526. doi:10.1007/s00705-020-04628-0
6. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.
7. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Updated January 15, 2019. Accessed August 24, 2020.
8. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. http://arxiv.org/abs/1808.08230. Updated January 15, 2019. Accessed August 24, 2020.
9. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87. doi:10.1609/AIMAG.V27I4.1911
10. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229. doi:10.1147/rd.33.0210
11. Sarle WS. Neural networks and statistical models. https://people.orie.cornell.edu/davidr/or474/nn_sas.pdf. Published April 1994. Accessed August 24, 2020.
12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85-117. doi:10.1016/j.neunet.2014.09.003
13. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539
14. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44- 56. doi:10.1038/s41591-018-0300-7
15. Cohen JP, Morrison P, Dao L. COVID-19 image data collection. http://arxiv.org/abs/2003.11597. Published March 25, 2020. Accessed May 13, 2020.
16. Radiological Society of North America. RSNA pneumonia detection challenge. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge. Accessed August 24, 2020.
17. Bloice MD, Roth PM, Holzinger A. Biomedical image augmentation using Augmentor. Bioinformatics. 2019;35(21):4522-4524. doi:10.1093/bioinformatics/btz259
18. Cepheid. Xpert Xpress SARS-CoV-2. https://www.cepheid.com/coronavirus. Accessed August 24, 2020.
19. Interknowlogy. COVID-19 detection in chest X-rays. https://interknowlogy-covid-19.azurewebsites.net. Accessed August 27, 2020.
20. Bernheim A, Mei X, Huang M, et al. Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection. Radiology. 2020;295(3):200463. doi:10.1148/radiol.2020200463
21. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32-E40. doi:10.1148/radiol.2020200642
22. Simpson S, Kay FU, Abbara S, et al. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA - Secondary Publication. J Thorac Imaging. 2020;35(4):219-227. doi:10.1097/RTI.0000000000000524
23. Wong HYF, Lam HYS, Fong AH, et al. Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology. 2020;296(2):E72-E78. doi:10.1148/radiol.2020201160
24. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115-E117. doi:10.1148/radiol.2020200432
25. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507-513. doi:10.1016/S0140-6736(20)30211-7
26. Lomoro P, Verde F, Zerboni F, et al. COVID-19 pneumonia manifestations at the admission on chest ultrasound, radiographs, and CT: single-center study and comprehensive radiologic literature review. Eur J Radiol Open. 2020;7:100231. doi:10.1016/j.ejro.2020.100231
27. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19) imaging reporting and data system (COVID-RADS) and common lexicon: a proposal based on the imaging data of 37 studies. Eur Radiol. 2020;30(9):4930-4942. doi:10.1007/s00330-020-06863-0
28. Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID-19): a pictorial review. Clin Imaging. 2020;64:35-42. doi:10.1016/j.clinimag.2020.04.001
29. Bhat R, Hamid A, Kunin JR, et al. Chest imaging in patients hospitalized With COVID-19 infection - a case series. Curr Probl Diagn Radiol. 2020;49(4):294-301. doi:10.1067/j.cpradiol.2020.04.001
30. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):E271-E297. doi:10.1016/S2589-7500(19)30123-2
31. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491
32. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905
33. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. http://arxiv.org/abs/2002.11379. Updated March 11, 2020. Accessed August 24, 2020.
34. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010
The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARSCoV- 2), which causes the respiratory disease coronavirus disease-19 (COVID- 19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province of China on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus has infected > 21 million people, with > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4
Prior to the this century, members of the coronavirus family had minimal impact on human health.5 However, in the past 20 years, outbreaks have highlighted an emerging importance of coronaviruses in morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similar significant health and societal implications. The challenges presented to health care providers (HCPs) by such novel viral pathogens are numerous, including methods for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis issues, which were evident with COVID-19 with the time required to develop rapid and effective diagnostic modalities.
We have previously reported the utility of using artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI and is achieved by using mathematic models to compute sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and achieve complex computational tasks often far quicker and with increased precision than can humans.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14
In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This is done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19 .azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.
Methods
For the training dataset, 103 CXR images of COVID-19 were downloaded from GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of the normal lung were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 dataset to 500 images by slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) of the original images using the Augmentor Python package.17
Validation Dataset
For the validation dataset 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) PACS (picture archiving and communication system). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 CXR pneumonia images from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed with a positive test result from the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18
Microsoft Custom
Vision Microsoft CustomVision is an automated image classification and object detection system that is a part of Microsoft Azure Cognitive Services (azure.microsoft.com). It has a pay-as-you-go model with fees depending on the computing needs and usage. It offers a free trial to users for 2 initial projects. The service is online with an easy-to-follow graphical user interface. No coding skills are necessary.
We created a new classification project in CustomVision and chose a compact general domain for small size and easy export to TensorFlow. js model format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we proceeded to upload our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates. The final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once uploaded, CustomVision self-trains using the dataset upon initiating the program (Figure 1).
Website Creation
CustomVision was used to train the model. It can be used to execute the model continuously, or the model can be compacted and decoupled from CustomVision. In this case, the model was compacted and decoupled for use in an online application. An Angular online application was created with TensorFlow.js. Within a user’s web browser, the model is executed when an image of a CXR is submitted. Confidence values for each classification are returned. In this design, after the initial webpage and model is downloaded, the webpage no longer needs to access any server components and performs all operations in the browser. Although the solution works well on mobile phone browsers and in low bandwidth situations, the quality of predictions may depend on the browser and device used. At no time does an image get submitted to the cloud.
Result
Overall, our trained model showed 92.9% precision and recall. Precision and recall results for each label were 98.9% and 94.8%, respectively for COVID-19 pneumonia; 91.8% and 89%, respectively, for non- COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lung (Figure 2). Next, we proceeded to validate the training model on the VA data by making individual predictions on 30 images from the VA dataset. Our model performed well with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).
Discussion
We successfully demonstrated the potential of using AI algorithms in assessing CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate cases of COVID-19 from pneumonia from other etiologies as well as normal lung CXRs. We then tested our model against known patients from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we proceeded to create a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms. A health care provider can take a CXR photo with a mobile phone or upload the image file. The ML algorithm would provide the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung diagnosis (Figure 3).
Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is by reverse transcriptase PCR (RT-PCR) technologies; however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests for detection, making test kits readily available to health care providers on an adequate scale presents an additional challenge as evident with COVID-19.
Use of X-ray vs Computed Tomography
The lack of availability of diagnostic RTPCR with COVID-19 initially placed increased reliability on presumptive diagnoses via imaging in some situations.23 Most of the literature evaluating radiographs of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America Expert consensus statement on chest CT for COVID-19 states that CT findings can even precede positivity on RT-PCR in some cases.22 However, currently it does not recommend the use of CT scanning as a screening tool. Furthermore, the actual sensitivity and specificity of CT interpretation by radiologists for COVID-19 are unknown.22
Characteristic CT findings include ground-glass opacities (GGOs) and consolidation most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports that described abnormalities as most often bilateral and peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive at detecting changes with COVID-19 than does CXR, with reports that a minority of patients exhibited CT changes before changes were visible on CXR.23,26
The novel coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes the respiratory disease coronavirus disease-19 (COVID-19), was first identified as a cluster of cases of pneumonia in Wuhan, Hubei Province, China, on December 31, 2019.1 Within a month, the disease had spread significantly, leading the World Health Organization (WHO) to designate COVID-19 a public health emergency of international concern. On March 11, 2020, the WHO declared COVID-19 a global pandemic.2 As of August 18, 2020, the virus had infected > 21 million people and caused > 750,000 deaths worldwide.3 The spread of COVID-19 has had a dramatic impact on social, economic, and health care issues throughout the world, which has been discussed elsewhere.4
Prior to this century, members of the coronavirus family had minimal impact on human health.5 In the past 20 years, however, outbreaks have highlighted the emerging importance of coronaviruses in morbidity and mortality on a global scale. Although less prevalent than COVID-19, severe acute respiratory syndrome (SARS) in 2002 to 2003 and Middle East respiratory syndrome (MERS) in 2012 likely had higher mortality rates than the current pandemic.5 Based on this recent history, it is reasonable to assume that we will continue to see novel diseases with similar significant health and societal implications. The challenges such novel viral pathogens present to health care providers (HCPs) are numerous and include methods for rapid diagnosis, prevention, and treatment. In the current study, we focus on diagnosis, a challenge that was evident with COVID-19 in the time required to develop rapid and effective diagnostic modalities.
We have previously reported the utility of artificial intelligence (AI) in the histopathologic diagnosis of cancer.6-8 AI was first described in 1956 and involves the field of computer science in which machines are trained to learn from experience.9 Machine learning (ML) is a subset of AI and is achieved by using mathematic models to compute sample datasets.10 Current ML employs deep learning with neural network algorithms, which can recognize patterns and perform complex computational tasks often far more quickly, and with greater precision, than humans can.11-13 In addition to applications in pathology, ML algorithms have both prognostic and diagnostic applications in multiple medical specialties, such as radiology, dermatology, ophthalmology, and cardiology.6 It is predicted that AI will impact almost every aspect of health care in the future.14
In this article, we examine the potential for AI to diagnose patients with COVID-19 pneumonia using chest radiographs (CXR) alone. This is done using Microsoft CustomVision (www.customvision.ai), a readily available, automated ML platform. Employing AI to both screen and diagnose emerging health emergencies such as COVID-19 has the potential to dramatically change how we approach medical care in the future. In addition, we describe the creation of a publicly available website (interknowlogy-covid-19.azurewebsites.net) that could augment COVID-19 pneumonia CXR diagnosis.
Methods
Training Dataset
For the training dataset, 103 CXR images of COVID-19 were downloaded from the GitHub covid-chest-xray dataset.15 Five hundred images of non-COVID-19 pneumonia and 500 images of normal lungs were downloaded from the Kaggle RSNA Pneumonia Detection Challenge dataset.16 To balance the dataset, we expanded the COVID-19 set to 500 images by applying slight rotation (probability = 1, max rotation = 5) and zooming (probability = 0.5, percentage area = 0.9) to the original images using the Augmentor Python package.17
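Because the augmentation parameters above map directly onto the Augmentor API, a short sketch may help readers reproduce the step. The following is a minimal illustration, not our actual script; the input path and the sample count of 397 (103 originals + 397 augmented images = 500) are assumptions inferred from the numbers reported above.

```python
# Minimal sketch (not the authors' script): expanding the 103 COVID-19 CXRs
# toward 500 with Augmentor, using the rotation and zoom parameters reported
# above. The input folder and sample count are illustrative assumptions.
import Augmentor

pipeline = Augmentor.Pipeline("data/covid_cxr")  # hypothetical folder of original COVID-19 images

# Slight rotation: probability = 1, at most 5 degrees to either side
pipeline.rotate(probability=1.0, max_left_rotation=5, max_right_rotation=5)

# Random zoom: probability = 0.5, cropping to 90% of the image area before resizing
pipeline.zoom_random(probability=0.5, percentage_area=0.9)

# Write augmented copies to the pipeline's output folder:
# 103 originals + 397 generated samples = 500 images
pipeline.sample(397)
```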
Validation Dataset
For the validation dataset, 30 random CXR images were obtained from the US Department of Veterans Affairs (VA) PACS (picture archiving and communication system). This dataset included 10 CXR images from hospitalized patients with COVID-19, 10 CXR pneumonia images from patients without COVID-19, and 10 normal CXRs. COVID-19 diagnoses were confirmed by a positive test result on the Xpert Xpress SARS-CoV-2 polymerase chain reaction (PCR) platform.18
Microsoft CustomVision
Microsoft CustomVision is an automated image classification and object detection system that is part of Microsoft Azure Cognitive Services (azure.microsoft.com). It has a pay-as-you-go model with fees depending on the computing needs and usage. It offers a free trial to users for 2 initial projects. The service is online with an easy-to-follow graphical user interface. No coding skills are necessary.
We created a new classification project in CustomVision and chose a compact general domain, which produces a small model that is easy to export to the TensorFlow.js format. TensorFlow.js is a JavaScript library that enables dynamic download and execution of ML models. After the project was created, we uploaded our image dataset. Each class was uploaded separately and tagged with the appropriate label (covid pneumonia, non-covid pneumonia, or normal lung). The system rejected 16 COVID-19 images as duplicates, so the final CustomVision training dataset consisted of 484 images of COVID-19 pneumonia, 500 images of non-COVID-19 pneumonia, and 500 images of normal lungs. Once the images were uploaded, CustomVision trained itself on the dataset when the program was initiated (Figure 1).
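Although we used the no-code web interface, the same workflow can be scripted. The sketch below uses the Azure Custom Vision Python SDK (azure-cognitiveservices-vision-customvision) and is offered only as an illustration under stated assumptions: the endpoint, training key, folder paths, and domain-selection logic are placeholders and do not come from our study.

```python
# Hedged sketch of the CustomVision workflow via the Python SDK
# (azure-cognitiveservices-vision-customvision). Endpoint, key, and paths
# are placeholders; the study itself used the graphical interface.
import os
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
trainer = CustomVisionTrainingClient(
    ENDPOINT, ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}))

# Choose a compact classification domain so the model can be exported later
compact = next(d for d in trainer.get_domains()
               if d.type == "Classification" and "compact" in d.name.lower())
project = trainer.create_project("covid-cxr", domain_id=compact.id)

# Upload each class separately with its label, as described above
for label, folder in [("covid pneumonia", "data/covid"),
                      ("non-covid pneumonia", "data/pneumonia"),
                      ("normal lung", "data/normal")]:
    tag = trainer.create_tag(project.id, label)
    entries = [
        ImageFileCreateEntry(
            name=fname,
            contents=open(os.path.join(folder, fname), "rb").read(),
            tag_ids=[tag.id])
        for fname in os.listdir(folder)]
    for i in range(0, len(entries), 64):  # the service accepts up to 64 images per batch
        trainer.create_images_from_files(
            project.id, ImageFileCreateBatch(images=entries[i:i + 64]))

iteration = trainer.train_project(project.id)  # starts training, as in Figure 1
```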
Website Creation
CustomVision was used to train the model. The model can then be executed continuously within CustomVision, or it can be compacted and decoupled; in this case, it was compacted and decoupled for use in an online application. An Angular web application was created with TensorFlow.js. The model executes within the user's web browser when an image of a CXR is submitted, and confidence values for each classification are returned. In this design, after the initial webpage and model are downloaded, the webpage no longer needs to access any server components and performs all operations in the browser; at no time is an image submitted to the cloud. Although the solution works well on mobile phone browsers and in low-bandwidth situations, the quality of predictions may depend on the browser and device used.
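For completeness, a compact model trained this way can be exported for in-browser execution. The sketch below continues the hypothetical training sketch above (reusing its trainer, project, and iteration variables) and is an assumption-laden illustration, not a description of our actual build pipeline; it also assumes training has already completed.

```python
# Hedged continuation of the training sketch: export the trained iteration in
# TensorFlow.js format and download the bundle (model.json + weight shards)
# for the web application. Polling is simplified, and the snippet assumes the
# training iteration has finished.
import time
import urllib.request

trainer.export_iteration(project.id, iteration.id, "TensorFlow", flavor="TensorFlowJs")

export = None
while export is None or export.status == "Exporting":  # wait for the export to finish
    time.sleep(5)
    export = next(e for e in trainer.get_exports(project.id, iteration.id)
                  if e.flavor == "TensorFlowJs")

urllib.request.urlretrieve(export.download_uri, "customvision-tfjs.zip")
```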
Results
Overall, our trained model showed a precision and recall of 92.9%. Precision and recall for each label were 98.9% and 94.8%, respectively, for COVID-19 pneumonia; 91.8% and 89%, respectively, for non-COVID-19 pneumonia; and 88.8% and 95%, respectively, for normal lungs (Figure 2). We then validated the trained model by making individual predictions on the 30 images in the VA dataset. Our model performed well, with 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value (Table).
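To make the relationship between these summary statistics explicit, the short calculation below recomputes them from a 2 x 2 confusion matrix with COVID-19 as the positive class. The per-image counts (10 true positives, 0 false negatives, 19 true negatives, 1 false positive among the 30 validation images) are inferred from the reported percentages rather than taken directly from the Table.

```python
# Recomputing the validation metrics from a 2x2 confusion matrix, treating
# COVID-19 pneumonia as the positive class. Counts are inferred from the
# reported percentages (10 COVID-19 and 20 non-COVID-19 validation images).
TP, FN, TN, FP = 10, 0, 19, 1

sensitivity = TP / (TP + FN)                 # recall: 10/10 = 100%
specificity = TN / (TN + FP)                 # 19/20 = 95%
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 29/30 ~ 97%
ppv = TP / (TP + FP)                         # precision: 10/11 ~ 91%
npv = TN / (TN + FN)                         # 19/19 = 100%

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}, "
      f"accuracy={accuracy:.0%}, PPV={ppv:.0%}, NPV={npv:.0%}")
```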
Discussion
We successfully demonstrated the potential of using AI algorithms to assess CXRs for COVID-19. We first trained the CustomVision automated image classification and object detection system to differentiate cases of COVID-19 from pneumonia of other etiologies as well as from normal lung CXRs. We then tested our model against known cases from the James A. Haley Veterans’ Hospital in Tampa, Florida. The program achieved 100% sensitivity (recall), 95% specificity, 97% accuracy, 91% positive predictive value (precision), and 100% negative predictive value in differentiating the 3 scenarios. Using the trained ML model, we created a website that could augment COVID-19 CXR diagnosis.19 The website works on mobile as well as desktop platforms: an HCP can take a CXR photo with a mobile phone or upload the image file, and the ML algorithm provides the probability of COVID-19 pneumonia, non-COVID-19 pneumonia, or normal lung (Figure 3).
Emerging diseases such as COVID-19 present numerous challenges to HCPs, governments, and businesses, as well as to individual members of society. As evidenced with COVID-19, the time from first recognition of an emerging pathogen to the development of methods for reliable diagnosis and treatment can be months, even with a concerted international effort. The gold standard for diagnosis of COVID-19 is reverse transcriptase PCR (RT-PCR); however, early RT-PCR testing produced less than optimal results.20-22 Even after the development of reliable tests, making test kits available to HCPs on an adequate scale presents an additional challenge, as was evident with COVID-19.
Use of X-ray vs Computed Tomography
The lack of availability of diagnostic RT-PCR for COVID-19 initially placed increased reliance on presumptive diagnoses via imaging in some situations.23 Most of the literature evaluating imaging of patients with COVID-19 focuses on chest computed tomography (CT) findings, with initial results suggesting CT was more accurate than early RT-PCR methodologies.21,22,24 The Radiological Society of North America expert consensus statement on chest CT for COVID-19 notes that CT findings can even precede positivity on RT-PCR in some cases.22 However, the statement does not currently recommend CT as a screening tool, and the actual sensitivity and specificity of radiologists’ CT interpretation for COVID-19 are unknown.22
Characteristic CT findings include ground-glass opacities (GGOs) and consolidation most commonly in the lung periphery, though a diffuse distribution was found in a minority of patients.21,23,25-27 Lomoro and colleagues recently summarized the CT findings from several reports that described abnormalities as most often bilateral and peripheral, subpleural, and affecting the lower lobes.26 Not surprisingly, CT appears more sensitive at detecting changes with COVID-19 than does CXR, with reports that a minority of patients exhibited CT changes before changes were visible on CXR.23,26
We focused our study on the potential of AI in the examination of CXRs in patients with COVID-19 because the routine use of CT has several limitations in conditions such as COVID-19. Aside from the considerably longer time required to obtain CT scans, contamination of CT suites is a concern, sometimes requiring a dedicated COVID-19 CT scanner.23,28 The time needed for decontamination, or limited access to CT suites, can delay or disrupt services for patients with and without COVID-19. For these reasons, CXR may be a better resource to minimize the risk of infection to other patients. Accurate assessment of CXR abnormalities may also identify COVID-19 in patients whose CXR was performed for other purposes.23 CXR is more readily available than CT, especially in more remote or underdeveloped areas.28 Finally, as with CT, CXR abnormalities reportedly appeared before RT-PCR tests became positive in a minority of patients.23
CXR findings described in patients with COVID-19 are similar to those on CT and include GGOs, consolidation, and hazy increased opacities.23,25,26,28,29 As with CT, most patients’ CXRs demonstrated greater involvement in the lower zones and peripherally, and most showed bilateral involvement.23,25,26,28,29 However, while these findings are common in patients with COVID-19, they are not specific and can be seen in other conditions, such as other viral pneumonias, bacterial pneumonia, injury from drug toxicity, inhalation injury, connective tissue disease, and idiopathic conditions.
Application of AI for COVID-19
Applications of AI in interpreting radiographs of various types are numerous, and an extensive literature exists on the topic.30 Using deep learning algorithms, AI has multiple possible roles to augment traditional radiograph interpretation, including screening, triaging, and increasing the speed of diagnosis. It also can provide a rapid “second opinion” to support the radiologist’s final interpretation, and in areas with critical shortages of radiologists, AI potentially can be used to render the definitive diagnosis. In COVID-19, imaging findings have been shown to correlate with disease severity and mortality, so AI could assist in monitoring the course of the disease as it progresses and potentially identify patients at greatest risk.27 Furthermore, early PCR results were considered suboptimal, and patients with COVID-19 can test negative initially even with reliable testing methodologies. As the technology progresses, AI interpretation could help detect disease and guide triage and treatment when clinical suspicion for COVID-19 is high but initial PCR results are negative, or when test availability is limited or results are delayed. A rapid diagnostic test as simple as a CXR that could reliably support early containment and prevention of the spread of contagions such as COVID-19 would have numerous potential benefits.
Few studies have assessed the use of AI in the radiologic diagnosis of COVID-19, and most of those use CT scanning. Bai and colleagues demonstrated increased accuracy, sensitivity, and specificity in distinguishing chest CTs of patients with COVID-19 from those with other types of pneumonia.21,31 A separate study demonstrated the utility of AI in differentiating COVID-19 from community-acquired pneumonia on CT.32 However, the utility of AI for CXR interpretation also has been demonstrated.14,33 Implementation of convolutional neural network layers has allowed reliable differentiation of viral and bacterial pneumonia on CXR imaging.34 The evidence suggests great potential for AI in the interpretation of radiographs of all types.
Finally, we have developed a publicly available website based on our studies.19 The website is for research use only, as it is based on data from our preliminary investigation, and images must have protected health information removed before being uploaded. The information on the website, including text, graphics, and images, is for research and may not be appropriate for all circumstances. The website does not provide medical, professional, or licensed advice and is not a substitute for consultation with an HCP; medical advice should be sought from a qualified HCP for any questions, and the website should not be used for medical diagnosis or treatment.
Limitations
In this preliminary study, we demonstrated the potential impact AI can have on multiple aspects of patient care for emerging pathogens such as COVID-19, using a test as readily available as a CXR. However, several limitations of this investigation should be mentioned. The study is retrospective, with a limited sample size, and the X-rays come from patients at various stages of COVID-19 pneumonia. Also, cases of non-COVID-19 pneumonia were not stratified by type or etiology. We intended to demonstrate the potential of AI to differentiate COVID-19 pneumonia from non-COVID-19 pneumonia of any etiology, though future studies should compare COVID-19 cases with more specific types of pneumonia, such as those of bacterial or viral origin. Furthermore, the present study does not address potential effects of additional radiographic findings from coexisting conditions, such as pulmonary edema in congestive heart failure, pleural effusions (which can be seen with COVID-19 pneumonia, though rarely), or interstitial lung disease; future studies are required to address these issues. Ultimately, prospective studies of AI-assisted radiographic interpretation in conditions such as COVID-19 are required to demonstrate the impact on diagnosis, treatment, outcomes, and patient safety as these technologies are implemented.
Conclusions
We have used a readily available commercial platform to demonstrate the potential of AI to assist in the diagnosis of COVID-19 pneumonia on CXR images. While this technology has numerous applications in radiology, we have focused on its potential impact on future world health crises such as COVID-19. The findings have implications for screening and triage, initial diagnosis, monitoring of disease progression, and identification of patients at increased risk of morbidity and mortality. Based on the data, a website was created to demonstrate how such technologies could be shared and distributed to combat entities such as COVID-19 moving forward. Our study offers a small window into how AI will likely dramatically change the practice of medicine in the future.
References
1. World Health Organization. Coronavirus disease (COVID-19) pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus-2019. Updated August 23, 2020. Accessed August 24, 2020.
2. World Health Organization. WHO Director-General’s opening remarks at the media briefing on COVID-19 - 11 March 2020. https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020. Published March 11, 2020. Accessed August 24, 2020.
3. World Health Organization. Coronavirus disease (COVID-19): situation report--209. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200816-covid-19-sitrep-209.pdf. Updated August 16, 2020. Accessed August 24, 2020.
4. Nicola M, Alsafi Z, Sohrabi C, et al. The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int J Surg. 2020;78:185-193. doi:10.1016/j.ijsu.2020.04.018
5. da Costa VG, Moreli ML, Saivish MV. The emergence of SARS, MERS and novel SARS-2 coronaviruses in the 21st century. Arch Virol. 2020;165(7):1517-1526. doi:10.1007/s00705-020-04628-0
6. Borkowski AA, Wilson CP, Borkowski SA, et al. Comparing artificial intelligence platforms for histopathologic cancer diagnosis. Fed Pract. 2019;36(10):456-463.
7. Borkowski AA, Wilson CP, Borkowski SA, Thomas LB, Deland LA, Mastorides SM. Apple machine learning algorithms successfully detect colon cancer but fail to predict KRAS mutation status. http://arxiv.org/abs/1812.04660. Updated January 15, 2019. Accessed August 24, 2020.
8. Borkowski AA, Wilson CP, Borkowski SA, Deland LA, Mastorides SM. Using Apple machine learning algorithms to detect and subclassify non-small cell lung cancer. http://arxiv.org/abs/1808.08230. Updated January 15, 2019. Accessed August 24, 2020.
9. Moor J. The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 2006;27(4):87. doi:10.1609/AIMAG.V27I4.1911
10. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 1959;3(3):210-229. doi:10.1147/rd.33.0210
11. Sarle WS. Neural networks and statistical models. https://people.orie.cornell.edu/davidr/or474/nn_sas.pdf. Published April 1994. Accessed August 24, 2020.
12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85-117. doi:10.1016/j.neunet.2014.09.003
13. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444. doi:10.1038/nature14539
14. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. doi:10.1038/s41591-018-0300-7
15. Cohen JP, Morrison P, Dao L. COVID-19 image data collection. http://arxiv.org/abs/2003.11597. Published March 25, 2020. Accessed May 13, 2020.
16. Radiological Society of North America. RSNA pneumonia detection challenge. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge. Accessed August 24, 2020.
17. Bloice MD, Roth PM, Holzinger A. Biomedical image augmentation using Augmentor. Bioinformatics. 2019;35(21):4522-4524. doi:10.1093/bioinformatics/btz259
18. Cepheid. Xpert Xpress SARS-CoV-2. https://www.cepheid.com/coronavirus. Accessed August 24, 2020.
19. Interknowlogy. COVID-19 detection in chest X-rays. https://interknowlogy-covid-19.azurewebsites.net. Accessed August 27, 2020.
20. Bernheim A, Mei X, Huang M, et al. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. Radiology. 2020;295(3):200463. doi:10.1148/radiol.2020200463
21. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32-E40. doi:10.1148/radiol.2020200642
22. Simpson S, Kay FU, Abbara S, et al. Radiological Society of North America expert consensus statement on reporting chest CT findings related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA - secondary publication. J Thorac Imaging. 2020;35(4):219-227. doi:10.1097/RTI.0000000000000524
23. Wong HYF, Lam HYS, Fong AH, et al. Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology. 2020;296(2):E72-E78. doi:10.1148/radiol.2020201160
24. Fang Y, Zhang H, Xie J, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115-E117. doi:10.1148/radiol.2020200432
25. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507-513. doi:10.1016/S0140-6736(20)30211-7
26. Lomoro P, Verde F, Zerboni F, et al. COVID-19 pneumonia manifestations at the admission on chest ultrasound, radiographs, and CT: single-center study and comprehensive radiologic literature review. Eur J Radiol Open. 2020;7:100231. doi:10.1016/j.ejro.2020.100231
27. Salehi S, Abedi A, Balakrishnan S, Gholamrezanezhad A. Coronavirus disease 2019 (COVID-19) imaging reporting and data system (COVID-RADS) and common lexicon: a proposal based on the imaging data of 37 studies. Eur Radiol. 2020;30(9):4930-4942. doi:10.1007/s00330-020-06863-0
28. Jacobi A, Chung M, Bernheim A, Eber C. Portable chest X-ray in coronavirus disease-19 (COVID-19): a pictorial review. Clin Imaging. 2020;64:35-42. doi:10.1016/j.clinimag.2020.04.001
29. Bhat R, Hamid A, Kunin JR, et al. Chest imaging in patients hospitalized with COVID-19 infection - a case series. Curr Probl Diagn Radiol. 2020;49(4):294-301. doi:10.1067/j.cpradiol.2020.04.001
30. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):E271-E297. doi:10.1016/S2589-7500(19)30123-2
31. Bai HX, Wang R, Xiong Z, et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology. 2020;296(3):E156-E165. doi:10.1148/radiol.2020201491
32. Li L, Qin L, Xu Z, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy. Radiology. 2020;296(2):E65-E71. doi:10.1148/radiol.2020200905
33. Rajpurkar P, Joshi A, Pareek A, et al. CheXpedition: investigating generalization challenges for translation of chest x-ray algorithms to the clinical setting. http://arxiv.org/abs/2002.11379. Updated March 11, 2020. Accessed August 24, 2020.
34. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. doi:10.1016/j.cell.2018.02.010
First drug for MET+ NSCLC shows high response rates
Capmatinib, the first drug approved for metastatic non–small cell lung cancer (NSCLC) harboring MET exon 14–skipping mutations, produces high response rates, conclude investigators of the pivotal trial that led to the drug’s approval.
Responses were seen in all patients regardless of how many previous drugs they had been treated with, although responses were particularly pronounced among patients who were treatment naive.
Capmatinib and a companion assay received FDA approval in May 2020 for the treatment of adults with metastatic NSCLC harboring MET exon 14–skipping mutations.
These MET exon 14–skipping mutations occur in 3%-4% of NSCLC patients, and MET amplifications occur in 1%-6%. Both alterations have been associated with poor response to chemotherapy and immunotherapy.
“Prior to this approval, there weren’t any approved therapies for this group of patients,” noted Edward Garon, MD, associate professor of hematology and oncology at the University of California, Los Angeles, who led the pivotal trial.
“There are several drugs that have been used off label for MET exon 14 skipping mutations, but none with an indication for it,” he said in an interview.
Garon emphasized that capmatinib was particularly robust for patients who had not received prior therapy, although he added that it was also very effective for those who had been previously treated.
“The drug has been approved and it is available, and we have already written prescriptions for it at our clinic,” said Dr. Garon, “although, at our clinic, the majority of patients using it were part of the [pivotal] clinical trial.”
That trial is the phase 2 GEOMETRY mono-1 study. Results from the study were presented at a meeting earlier this year and have now been published in the New England Journal of Medicine.
It was conducted in a cohort of 364 patients with advanced NSCLC. Patients were stratified into five cohorts and two expansion cohorts, which were assigned according to MET status and previous lines of therapy. Across cohorts 1 through 5, a total of 97 patients had a MET exon 14–skipping mutation, and 210 had MET amplification. All patients were treated with capmatinib 400 mg twice daily.
Among patients with a MET exon 14 skipping mutation, an overall response was observed in 41% of previously treated patients and in 68% of those who had not previously been treated.
“That is a very high response rate, and clearly this drug is targeting this mutation,” said Fred Hirsch, MD, PhD, executive director, Center for Thoracic Oncology, Mount Sinai Health System, New York, who was approached for comment. “It’s very active, and you don’t get those responses with chemotherapy.”
The median duration of response was 9.7 months among previously treated patients and 12.6 months among those who were treatment naive. Median progression-free survival (PFS) was 5.4 months and 12.4 months, respectively.
In the cohort of patients with MET amplification, the overall response was 12% among those whose tumor tissue had a gene copy number of 6-9. The overall response rate was 9% among those with a gene copy number of 4 or 5, and it was 7% among those with a gene copy number of less than 4.
Median PFS was 2.7 months both for patients whose tumor tissue had a gene copy number of 6-9 and for those with a gene copy number of 4 or 5. PFS rose to 3.6 months for patients with a gene copy number of less than 4.
The most frequently reported adverse events were peripheral edema (in 51%) and nausea (in 45%). These events were mostly of grade 1 or 2. Treatment-related serious adverse events occurred in 13% of patients. The incidence was lower in the groups with shorter duration of exposure. Treatment was discontinued in 11% of patients (consistent across cohorts) because of adverse events.
Dr. Hirsch commented that the results for patients with NSCLC and brain metastases were particularly noteworthy. “Brain metastases are, unfortunately, a common problem in patients with lung cancer,” he said. “Now, we have a drug that is effective for MET mutation and CNS involvement and can penetrate the blood-brain barrier, and this is a very encouraging situation.”
He pointed out that 7 of 13 patients with brain metastases responded to treatment with capmatinib. “Four patients have a complete response, and that is very encouraging,” said Dr. Hirsch. “This is clearly a deal-breaker in my opinion.”
The future is bright
Dr. Hirsch noted that the evidence supporting capmatinib is strong, even though a larger prospective study with a control group is lacking. “If we have a patient with this mutation, and knowing that there is a drug with a response rate of 68%, that is a good reason to try the drug up front. The data are sufficient that it should be offered to the patient, even without a control group.”
Capmatinib is the latest of many targeted drugs that have been launched in recent years, and several immunotherapies are also now available for treatment of this disease. These new therapies are making this a “very encouraging time in lung cancer,” Dr. Hirsch commented.
“We are seeing long-term survival, and, eventually, we may start seeing potential cures for some patients,” he said. “But at the very least, we are seeing very good long-term results with many of these targeted therapies, and we are continuing to learn more about resistance mechanisms. I can’t wait to see the future in the field.”
The study was funded by Novartis Pharmaceuticals. Dr. Garon reports consulting or advisory roles with Dracen and research funding (institutional) from Merck, Genentech, AstraZeneca, Novartis, Lilly, Bristol-Myers Squibb, Mirati Therapeutics, Dynavax, Iovance Biotherapeutics, and Neon Therapeutics. His coauthors have disclosed numerous relationships with industry, as listed in the original article. Dr. Hirsch has disclosed no relevant financial relationships.
This article first appeared on Medscape.com.
Hair dye and cancer study ‘offers some reassurance’
Findings limited to White women in United States
The largest study of its kind has found no positive association between personal use of permanent hair dye and the risk for most cancers and cancer mortality.
The findings come from the Nurses’ Health Study, an ongoing prospective cohort study of more than 117,000 women who have been followed for 36 years and who did not have cancer at baseline.
The findings were published online on September 2 in the BMJ.
The results “offer some reassurance against concerns that personal use of permanent hair dyes might be associated with increased cancer risk or mortality,” write the investigators, with first author Yin Zhang, PhD, of Harvard Medical School, Boston.
The findings, which are limited to White women in the United States, indicate correlation, not causation, the authors emphasize.
Nevertheless, the researchers found an increased risk for some cancers among hair dye users, especially with greater cumulative dose (200 or more uses during the study period). The risk was increased for basal cell carcinoma, breast cancer (specifically, estrogen receptor negative [ER–], progesterone receptor negative [PR–], and hormone receptor negative [ER–, PR–]), and ovarian cancer.
A British expert not involved in the study dismissed these findings. “The reported associations are very weak, and, given the number of associations reported in this manuscript, they are very likely to be chance findings,” commented Paul Pharoah, PhD, professor of cancer epidemiology at the University of Cambridge (England).
“For the cancers where an increase in risk is reported, the results are not compelling. Even if they were real findings, the associations may not be cause-and-effect, and, even if they were causal associations, the magnitude of the effects are so small that any risk would be trivial.
“In short, none of the findings reported in this manuscript suggest that women who use hair dye are putting themselves at increased risk of cancer,” he stated.
A U.S. researcher who has previously coauthored a study suggesting an association between hair dye and breast cancer agreed that the increases in risk reported in this current study are “small.” But they are “of interest,” especially for breast and ovarian cancer, said Alexandra White, PhD, of the National Institute of Environmental Health Sciences, National Institutes of Health, Research Triangle Park, N.C.
Hair dyes include compounds that “are not just potential carcinogens but also act as endocrine disruptors,” she said in an interview.
“In both breast and ovarian cancer, we know that hormones play an important part in the etiology ... so it’s biologically plausible that you would see [these associations in the current study],” added Dr. White, who was approached for comment.
However, she added that, even with the “modest” 20%-28% increase in the relative risk for certain breast cancers linked to a heavy cumulative dose of dyes in the current study, “there doesn’t seem to be any strong association with any cancer type.”
But she also pointed out that the most outstanding risk association was among ER–/PR– breast cancers, which are the “most aggressive and difficult to treat,” and thus the new findings are “important.”
Dr. White is the lead author of a 2019 study that received a lot of media attention because it rang an alarm bell about hair dyes and breast cancer risk.
That study concluded that ever using permanent hair dye or hair straighteners was associated with a higher risk for breast cancer than never using them and that this higher risk was especially associated with Black women. However, the study participants were from the prospective Sister Study. The participants in that study had no history of breast cancer, but they each had at least one sister who did. This family history of breast cancer may represent selection bias.
With changes in the 1980s, even safer now?
The study of hair dyes and cancer has “major public health implications” because the use of hair dye is widespread, Dr. Zhang and colleagues write in their article. They estimate that 50% to 80% of women and 10% of men aged 40 years and older in the United States and Europe use hair dye.
Permanent hair dyes “pose the greatest potential concern,” they stated, adding that these account for approximately 80% of hair dyes used in the United States and Europe and an even higher percentage in Asia.
The International Agency for Research on Cancer classifies occupational exposure to hair dyes as probably carcinogenic, but the carcinogenicity resulting from personal use of hair dyes is not classifiable – thus, there is no warning about at-home usage.
Notably, there was “a huge and very important” change in hair dye ingredients in the 1980s after the Food and Drug Administration warned about some chemicals in permanent hair dyes and the cosmetic industry altered their formulas, lead author Dr. Zhang said.
However, the researchers could not analyze use before and after the changes because not enough women reported first use of permanent hair dye after 1980 (only 1,890 of 117,200 participants).
“We could expect that the current ingredients should make it safer,” Dr. Zhang said.
Study details
The researchers report that ever-users of permanent hair dyes had no significant increases in risk for solid cancers (n = 20,805; hazard ratio [HR], 0.98; 95% confidence interval [CI], 0.96-1.01) or hematopoietic cancers overall (n = 1,807; HR, 1.00; 95% CI, 0.91-1.10) compared with nonusers.
Additionally, ever-users did not have an increased risk for most specific cancers or cancer-related death (n = 4,860; HR, 0.96; 95% CI, 0.91-1.02).
As noted above, there were some exceptions.
Basal cell carcinoma risk was slightly increased for ever-users (n = 22,560; HR, 1.05; 95% CI, 1.02-1.08). Cumulative dose (a calculation of duration and frequency) was positively associated with risk for ER– breast cancer, PR– breast cancer, ER–/PR– breast cancer, and ovarian cancer, with risk rising in accordance with the total amount of dye.
Notably, at a cumulative dose of ≥200 uses, there was a 20% increase in the relative risk for ER– breast cancer (n = 1,521; HR, 1.20; 95% CI, 1.02-1.41; P value for trend, .03). At the same cumulative dose, there was a 28% increase in the relative risk for ER–/PR– breast cancer (n = 1,287; HR, 1.28; 95% CI, 1.08-1.52; P value for trend, .006).
In addition, an increased risk for Hodgkin lymphoma was observed, but only for women with naturally dark hair (the calculation was based on 70 women, 24 of whom had dark hair).
In a press statement, senior author Eva Schernhammer, PhD, of Harvard and the Medical University of Vienna, said the results “justify further prospective validation.”
She also explained that there are many variables to consider in this research, including different populations and countries, different susceptibility genotypes, different exposure settings (personal use vs. occupational exposure), and different colors of the permanent hair dyes used (dark dyes vs. light dyes).
Geographic location is a particularly important variable, suggested the study authors.
They pointed out that, in both the 1980s and the 2000s, Europe, but not the United States, banned some individual hair dye ingredients considered carcinogenic. One country has even tighter oversight: “The most restrictive regulation of hair dyes exists in Japan, where cosmetic products are considered equivalent to drugs.”
The study was funded by the Centers for Disease Control and Prevention and the National Institute for Occupational Safety and Health. The study authors and Dr. White have disclosed no relevant financial relationships.
This article first appeared on Medscape.com.
Chronicles of Cancer: A history of mammography, part 2
The push and pull of social forces
Science and technology emerge from and are shaped by social forces outside the laboratory and clinic. This is an essential fact of most new medical technology. In the Chronicles of Cancer series, part 1 of the story of mammography focused on the technological determinants of its development and use. Part 2 will focus on some of the social forces that shaped the development of mammography.
“Few medical issues have been as controversial – or as political, at least in the United States – as the role of mammographic screening for breast cancer,” according to Donald A. Berry, PhD, a biostatistician at the University of Texas MD Anderson Cancer Center, Houston.1
In fact, technology aside, the history of mammography has been and remains rife with controversy on the one side and vigorous promotion on the other, all enmeshed within the War on Cancer, corporate and professional interests, and the women’s rights movement’s growing issues with what was seen as a patriarchal medical establishment.
Today, conflicts of interest are a paramount issue in any discussion of new medical developments, from the early preclinical stages to ultimate deployment. Then, as now, professional and advocacy societies had a profound influence on government and social decision-making. But in that earlier, more trusting era, buoyed by the amazing changes that technology was bringing to everyday life and by an unshakable commitment to and belief in “progress,” science and the medical community held far more effective sway over the beliefs and behavior of the general population.
Women’s health observed
Although the main focus of the women’s movement with regard to breast cancer was a struggle against the common practice of routine radical mastectomies and a push toward breast-conserving surgeries, preventive care and screening in women’s health were also major concerns.
Regarding mammography, early enthusiasm in the medical community and among the general public was profound. In 1969, Robert Egan described how mammography had a “certain magic appeal.” The patient, he continued, “feels something special is being done for her.” Women whose cancers had been discovered on a mammogram praised radiologists as heroes who had saved their lives.2
In that era, however, beyond the confines of the doctor’s office, mammography and breast cancer remained topics not discussed among the public at large, despite efforts by the American Cancer Society to change this.
ACS weighs in
Various groups had been promoting the benefits of breast self-examination since the 1930s, and in 1947, the American Cancer Society launched an awareness campaign, “Look for a Lump or Thickening in the Breast,” instructing women to perform a monthly breast self-exam. Between self-examination and clinical breast examinations in physicians’ offices, the ACS believed that smaller and more treatable breast cancers could be discovered.
In 1972, the ACS, working with the National Cancer Institute (NCI), inaugurated the Breast Cancer Detection Demonstration Project (BCDDP), which planned to screen over a quarter of a million American women for breast cancer. The initiative was a direct outgrowth of the National Cancer Act of 1971,3 the key legislation of the War on Cancer, promoted by President Richard Nixon in his State of the Union address in 1971, which greatly expanded the funding and authority of the NCI (itself established in 1937).
Arthur I. Holleb, MD, ACS senior vice president for medical affairs and research, announced that, “[T]he time has come for the American Cancer Society to mount a massive program on mammography just as we did with the Pap test,”2 according to Barron Lerner, MD, whose book “The Breast Cancer Wars” provides a history of the long-term controversies involved.4
The Pap test, widely promulgated in the 1950s and 1960s, had produced a decline in mortality from cervical cancer.
Despite the lack of data on effectiveness at earlier ages, the ACS chose to include women as young as 35 in the BCDDP in order “to inculcate them with ‘good health habits’ ” and “to make our screenee want to return periodically and to want to act as a missionary to bring other women into the screening process.”2
Celebrity status matters
All of the elements of a social revolution in the use of mammography were in place in the late 1960s, but the final triggers to raise social consciousness were the cases of several high-profile female celebrities. In 1973, beloved former child star Shirley Temple Black revealed her breast cancer diagnosis and mastectomy in an era when public discussion of cancer – especially breast cancer – was rare.4
But it wasn’t until 1974 that public awareness and media coverage exploded, sparked by the impact of First Lady Betty Ford’s outspokenness on her own experience of breast cancer. “In obituaries prior to the 1950s and 1960s, women who died from breast cancer were often listed as dying from ‘a prolonged disease’ or ‘a woman’s disease,’ ” according to Tasha Dubriwny, PhD, now an associate professor of communication and women’s and gender studies at Texas A&M University, College Station, when interviewed by the American Association for Cancer Research.5 Betty Ford openly addressed her breast cancer diagnosis and treatment and became a prominent advocate for early screening, transforming the landscape of breast cancer awareness. And although her diagnosis was based on clinical examination rather than mammography, the boost her openness gave to screening overall was indisputable.
“Within weeks [after Betty Ford’s announcement] thousands of women who had been reluctant to examine their breasts inundated cancer screening centers,” according to a 1987 article in the New York Times.6 Among these women was Happy Rockefeller, the wife of Vice President Nelson A. Rockefeller. Happy Rockefeller also found that she had breast cancer upon screening, and with Betty Ford would become another icon thereafter for breast cancer screening.
“Ford’s lesson for other women was straightforward: Get a mammogram, which she had not done. The American Cancer Society and National Cancer Institute had recently mounted a demonstration project to promote the detection of breast cancer as early as possible, when it was presumed to be more curable. The degree to which women embraced Ford’s message became clear through the famous ‘Betty Ford blip.’ So many women got breast examinations and mammograms for the first time after Ford’s announcement that the actual incidence of breast cancer in the United States went up by 15 percent.”4
In a 1975 address to the American Cancer Society, Betty Ford said: “One day I appeared to be fine and the next day I was in the hospital for a mastectomy. It made me realize how many women in the country could be in the same situation. That realization made me decide to discuss my breast cancer operation openly, because I thought of all the lives in jeopardy. My experience and frank discussion of breast cancer did prompt many women to learn about self-examination, regular checkups, and such detection techniques as mammography. These are so important. I just cannot stress enough how necessary it is for women to take an active interest in their own health and body.”7
ACS guidelines evolve
It wasn’t until 1976 that the ACS issued its first major guidelines for mammography screening. The ACS suggested mammograms may be called for in women aged 35-39 if there was a personal history of breast cancer, and between ages 40 and 49 if their mother or sisters had a history of breast cancer. Women aged 50 years and older could have yearly screening. Thereafter, the use of mammography was encouraged more and more with each new set of recommendations.8
Between 1980 and 1982, these guidelines expanded to advising a baseline mammogram for women aged 35-39 years; that women consult with their physician between ages 40 and 49; and that women over 50 have a yearly mammogram.
Between 1983 and 1991, the recommendations were for a baseline mammogram for women aged 35-39 years; a mammogram every 1-2 years for women aged 40-49; and yearly mammograms for women aged 50 and up. The baseline mammogram recommendation was dropped in 1992.
Between 1997 and 2015, the stakes were upped: yearly mammograms were now recommended for women aged 40-49 years as well as for all women aged 50 years and older.
In October 2015, the ACS changed their recommendation to say that women aged 40-44 years should have the choice of initiating mammogram screening, and that the risks and benefits of doing so should be discussed with their physicians. Women aged 45 years and older were still recommended for yearly mammogram screening. That recommendation stands today.
Controversies arise over risk/benefit
The technology was not, however, universally embraced. “By the late 1970s, mammography had diffused much more widely but had become a source of tremendous controversy. On the one hand, advocates of the technology enthusiastically touted its ability to detect smaller, more curable cancers. On the other hand, critics asked whether breast x-rays, particularly for women aged 50 and younger, actually caused more harm than benefit.”2
In addition, meta-analyses of the nine major screening trials conducted between 1965 and 1991 indicated that the reduced breast cancer mortality with screening was dependent on age. In particular, the results for women aged 40-49 years and 50-59 years showed only borderline statistical significance, and they varied depending on how cases were accrued in individual trials.
“Assuming that differences actually exist, the absolute breast cancer mortality reduction per 10,000 women screened for 10 years ranged from 3 for age 39-49 years; 5-8 for age 50-59 years; and 12-21 for age 60-69 years,” according to a review by the U.S. Preventive Services Task Force.9
The estimates for the group aged 70-74 years were limited by low numbers of events in trials that had smaller numbers of women in this age group.
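As a quick arithmetic illustration (not from the review itself), the USPSTF figures quoted above can be inverted into an approximate number needed to screen (NNS) for 10 years to avert one breast cancer death:

```python
# Approximate number needed to screen (NNS) implied by the USPSTF estimates:
# absolute breast cancer deaths avoided per 10,000 women screened for 10 years.
reductions_per_10k = {
    "39-49 years": (3, 3),    # single point estimate in the review
    "50-59 years": (5, 8),
    "60-69 years": (12, 21),
}
for ages, (low, high) in reductions_per_10k.items():
    # fewer deaths avoided means more women must be screened per death averted
    nns_high, nns_low = 10_000 / low, 10_000 / high
    print(f"{ages}: NNS roughly {nns_low:,.0f} to {nns_high:,.0f}")
```

By this arithmetic, on the order of 3,300 women aged 39-49 years, 1,250 to 2,000 women aged 50-59 years, and 480 to 830 women aged 60-69 years would need a decade of screening to avert one breast cancer death, which is why age figures so prominently in the risk/benefit debate.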
Age has continued to be a major factor in determining the cost/benefit of routine mammography screening, with the American College of Physicians stating in its 2019 guidelines, “The potential harms outweigh the benefits in most women aged 40 to 49 years,” and adding, “In average-risk women aged 75 years or older or in women with a life expectancy of 10 years or less, clinicians should discontinue screening for breast cancer.”10
A Cochrane Report from 2013 was equally critical: “If we assume that screening reduces breast cancer mortality by 15% after 13 years of follow-up and that overdiagnosis and overtreatment is at 30%, it means that for every 2,000 women invited for screening throughout 10 years, one will avoid dying of breast cancer and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress including anxiety and uncertainty for years because of false positive findings.”11
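The harm-to-benefit arithmetic implicit in those figures can be made explicit; this small tally uses only the numbers quoted from the Cochrane Report above (per 2,000 women invited to 10 years of screening).

```python
# Tallies implied by the Cochrane figures quoted above,
# per 2,000 women invited to 10 years of screening.
invited = 2_000
deaths_avoided = 1              # breast cancer deaths averted
overdiagnosed = 10              # healthy women treated unnecessarily
false_positive_distress = 200   # "more than 200" with psychological distress

print(f"Overdiagnosed per death avoided: {overdiagnosed // deaths_avoided}:1")
print(f"Distressed by false positives per death avoided: "
      f"more than {false_positive_distress // deaths_avoided}:1")
print(f"Deaths avoided per 1,000 women invited: {1_000 * deaths_avoided / invited}")
```

That implied 10:1 ratio of overdiagnosis to deaths averted is of the same order as the 15:1 harm-to-benefit ratio the Australian group reports below, which helps explain why both reached similarly skeptical conclusions.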
Conflicting voices exist
These reports, advising a more nuanced evaluation of the benefits of mammography, were nevertheless received with skepticism by doctors committed to the vision of breast cancer screening and convinced by anecdotal evidence from their own practices.
These reports were also in direct contradiction to recommendations made in 1997 by the National Cancer Institute, which recommended screening mammograms every 3 years for women aged 40-49 years at average risk of breast cancer.
Such scientific vacillation has contributed to a love/hate relationship with mammography in the mainstream media, fueling new controversies with regard to breast cancer screening, sometimes as much driven by public suspicion and political advocacy as by scientific evolution.
Vocal opponents of universal mammography screening arose over the years, and even the cases of Betty Ford and Happy Rockefeller have been called into question as iconic demonstrations of the effectiveness of screening. And although not directly linked to the issue of screening, the rebellion against the routine use of radical mastectomies, a technique pioneered by Halsted in 1894 and in continuing use into the modern era, sparked outrage among women’s rights activists who saw it as evidence of a patriarchal medical establishment making arbitrary decisions concerning women’s bodies. For example, feminist and breast cancer activist Rose Kushner argued against the unnecessary disfigurement of women’s bodies and urged the use and development of less drastic techniques, including partial mastectomies and lumpectomies, as viable choices; these choices were increasingly supported by the medical community as safe and effective alternatives for many patients.12
A 2015 paper in the Journal of the Royal Society of Medicine was bluntly titled “Mammography screening is harmful and should be abandoned.”13 According to the author, who was also the first author of the 2013 Cochrane Report, “I believe that if screening had been a drug, it would have been withdrawn from the market long ago.” The popular press has not been shy about weighing in on the controversy, driven in part by the lack of consensus and the continually changing guidelines, with major publications such as U.S. News and World Report and the Washington Post addressing the issue over the years. Even public advocacy groups such as the Susan G. Komen organization14 now follow the more recent professional guidelines in taking a more nuanced approach to discussing risks and benefits for individual women.
In 2014, the Swiss Medical Board, a nationally appointed body, recommended that new mammography screening programs should not be instituted in that country and that limits be placed on current programs because of the imbalance between risks and benefits of mammography screening.15 And a study done in Australia in 2020 agreed, stating, “Using data of 30% overdiagnosis of women aged 50 to 69 years in the NSW [New South Wales] BreastScreen program in 2012, we calculated an Australian ratio of harm of overdiagnosis to benefit (breast cancer deaths avoided) of 15:1 and recommended stopping the invitation to screening.”16
Conclusion
If nothing else, the history of mammography shows that the interconnection of social factors with the rise of a medical technology can have profound impacts on patient care. Technology developed by men for women became a touchstone of resentment in a world ever more aware of sex and gender biases in everything from the conduct of clinical trials to the care (or lack thereof) of women with heart disease. Tied for so many years to a radically disfiguring and drastic form of surgery that affected what many felt to be a hallmark and representation of womanhood,1,17 mammography also carried the weight of fears, both real and imagined, of radiation exposure.
Well into its development, the technology still found itself under intense public scrutiny and enmeshed in a continual media circus, with ping-ponging discussions of risk/benefit in the scientific literature fueling complaints from many about the dominance of a patriarchal medical community over women’s bodies.
With guidelines for mammography still evolving, questions still remaining, and new technologies such as digital imaging falling short in their hoped-for promise, the story remains unfinished, and the future still uncertain. One thing remains clear, however: In the right circumstances, with the right patient population, and properly executed, mammography has saved lives when tied to effective, early treatment, whatever its flaws and failings. This truth goes hand in hand with another reality: It may have also contributed to considerable unanticipated harm through overdiagnosis and overtreatment.
Overall, the history of mammography is a cautionary tale for the entire medical community and for the development of new medical technologies. The push-pull of the demand for progress to save lives and the slowness and often inconclusiveness of scientific studies that validate new technologies create gray areas, where social determinants and professional interests vie in an information vacuum for control of the narrative of risks vs. benefits.
The story of mammography is not yet concluded, and may never be, especially given the unlikelihood of conducting the massive randomized clinical trials that would be needed to settle the issue. It is more likely to remain controversial, at least until the technology of mammography becomes obsolete, replaced by something new and different, which will likely start the push-pull cycle all over again.
And regardless of the risks and benefits of mammography screening, the issue of treatment once breast cancer is identified is perhaps of even greater import.
References
1. Berry DA. The Breast. 2013;22(Suppl 2):S73-S76.
2. Lerner BH. “To See Today With the Eyes of Tomorrow: A History of Screening Mammography.” Background paper for the Institute of Medicine report Mammography and Beyond: Developing Technologies for the Early Detection of Breast Cancer. 2001.
3. NCI website. The National Cancer Act of 1971. www.cancer.gov/about-nci/overview/history/national-cancer-act-1971.
4. Lerner BH. The Huffington Post. Sep. 26, 2014.
5. Wu C. Cancer Today. 2012;2(3). Sep. 27, 2012.
6. The New York Times. Oct. 17, 1987.
7. Ford B. Remarks to the American Cancer Society. 1975.
8. American Cancer Society website. History of ACS Recommendations for the Early Detection of Cancer in People Without Symptoms.
9. Nelson HD et al. Screening for Breast Cancer: A Systematic Review to Update the 2009 U.S. Preventive Services Task Force Recommendation. Evidence Syntheses No. 124. 2016:29-49.
10. Qaseem A et al. Ann Intern Med. 2019;170(8):547-60.
11. Gøtzsche PC, Jørgensen KJ. Cochrane Database Syst Rev. 2013;(6):CD001877.
12. Lerner BH. West J Med. 2001;174(5):362-5.
13. Gøtzsche PC. J R Soc Med. 2015;108(9):341-5.
14. Susan G. Komen website. Weighing the Benefits and Risks of Mammography.
15. Biller-Andorno N et al. N Engl J Med. 2014;370:1965-7.
16. Burton R et al. JAMA Netw Open. 2020;3(6):e208249.
17. Webb C et al. Plast Surg. 2019;27(1):49-53.
Mark Lesney is the editor of Hematology News and the managing editor of MDedge.com/IDPractioner. He has a PhD in plant virology and a PhD in the history of science, with a focus on the history of biotechnology and medicine. He has worked as a writer/editor for the American Chemical Society, and has served as an adjunct assistant professor in the department of biochemistry and molecular & cellular biology at Georgetown University, Washington.
The push and pull of social forces
The push and pull of social forces
Science and technology emerge from and are shaped by social forces outside the laboratory and clinic. This is an essential fact of most new medical technology. In the Chronicles of Cancer series, part 1 of the story of mammography focused on the technological determinants of its development and use. Part 2 will focus on some of the social forces that shaped the development of mammography.
“Few medical issues have been as controversial – or as political, at least in the United States – as the role of mammographic screening for breast cancer,” according to Donald A. Berry, PhD, a biostatistician at the University of Texas MD Anderson Cancer Center, Houston.1
In fact, technology aside, the history of mammography has been and remains rife with controversy on the one side and vigorous promotion on the other, all enmeshed within the War on Cancer, corporate and professional interests, and the women’s rights movement’s growing issues with what was seen as a patriarchal medical establishment.
Today the issue of conflicts of interest are paramount in any discussion of new medical developments, from the early preclinical stages to ultimate deployment. Then, as now, professional and advocacy societies had a profound influence on government and social decision-making, but in that earlier, more trusting era, buoyed by the amazing changes that technology was bringing to everyday life and an unshakable commitment to and belief in “progress,” science and the medical community held a far more effective sway over the beliefs and behavior of the general population.
Women’s health observed
Although the main focus of the women’s movement with regard to breast cancer was a struggle against the common practice of routine radical mastectomies and a push toward breast-conserving surgeries, the issue of preventive care and screening with regard to women’s health was also a major concern.
Regarding mammography, early enthusiasm in the medical community and among the general public was profound. In 1969, Robert Egan described how mammography had a “certain magic appeal.” The patient, he continued, “feels something special is being done for her.” Women whose cancers had been discovered on a mammogram praised radiologists as heroes who had saved their lives.2
In that era, however, beyond the confines of the doctor’s office, mammography and breast cancer remained topics not discussed among the public at large, despite efforts by the American Cancer Society to change this.
ACS weighs in
Various groups had been promoting the benefits of breast self-examination since the 1930s, and in 1947, the American Cancer Society launched an awareness campaign, “Look for a Lump or Thickening in the Breast,” instructing women to perform a monthly breast self-exam. Between self-examination and clinical breast examinations in physicians’ offices, the ACS believed that smaller and more treatable breast cancers could be discovered.
In 1972, the ACS, working with the National Cancer Institute (NCI), inaugurated the Breast Cancer Detection Demonstration Project (BCDDP), which planned to screen over a quarter of a million American women for breast cancer. The initiative was a direct outgrowth of the National Cancer Act of 1971,3 the key legislation of the War on Cancer, promoted by President Richard Nixon in his State of the Union address in 1971 and responsible for the creation of the National Cancer Institute.
Arthur I. Holleb, MD, ACS senior vice president for medical affairs and research, announced that, “[T]he time has come for the American Cancer Society to mount a massive program on mammography just as we did with the Pap test,”2 according to Barron Lerner, MD, whose book “The Breast Cancer Wars” provides a history of the long-term controversies involved.4
The Pap test, widely promulgated in the 1950s and 1960s, had produced a decline in mortality from cervical cancer.
Regardless of the lack of data on effectiveness at earlier ages, the ACS chose to include women as young as 35 in the BCDDP in order “to inculcate them with ‘good health habits’ ” and “to make our screenee want to return periodically and to want to act as a missionary to bring other women into the screening process.”2
Celebrity status matters
All of the elements of a social revolution in the use of mammography were in place in the late 1960s, but the final triggers to raise social consciousness were the cases of several high-profile female celebrities. In 1973, beloved former child star Shirley Temple Black revealed her breast cancer diagnosis and mastectomy in an era when public discussion of cancer – especially breast cancer – was rare.4
But it wasn’t until 1974 that public awareness and media coverage exploded, sparked by the impact of First Lady Betty Ford’s outspokenness on her own experience of breast cancer. “In obituaries prior to the 1950s and 1960s, women who died from breast cancer were often listed as dying from ‘a prolonged disease’ or ‘a woman’s disease,’ ” according to Tasha Dubriwny, PhD, now an associate professor of communication and women’s and gender studies at Texas A&M University, College Station, when interviewed by the American Association for Cancer Research.5Betty Ford openly addressed her breast cancer diagnosis and treatment and became a prominent advocate for early screening, transforming the landscape of breast cancer awareness. And although Betty Ford’s diagnosis was based on clinical examination rather than mammography, its boost to overall screening was indisputable.
“Within weeks [after Betty Ford’s announcement] thousands of women who had been reluctant to examine their breasts inundated cancer screening centers,” according to a 1987 article in the New York Times.6 Among these women was Happy Rockefeller, the wife of Vice President Nelson A. Rockefeller. Happy Rockefeller also found that she had breast cancer upon screening, and with Betty Ford would become another icon thereafter for breast cancer screening.
“Ford’s lesson for other women was straightforward: Get a mammogram, which she had not done. The American Cancer Society and National Cancer Institute had recently mounted a demonstration project to promote the detection of breast cancer as early as possible, when it was presumed to be more curable. The degree to which women embraced Ford’s message became clear through the famous ‘Betty Ford blip.’ So many women got breast examinations and mammograms for the first time after Ford’s announcement that the actual incidence of breast cancer in the United States went up by 15 percent.”4
In a 1975 address to the American Cancer Society, Betty Ford said: “One day I appeared to be fine and the next day I was in the hospital for a mastectomy. It made me realize how many women in the country could be in the same situation. That realization made me decide to discuss my breast cancer operation openly, because I thought of all the lives in jeopardy. My experience and frank discussion of breast cancer did prompt many women to learn about self-examination, regular checkups, and such detection techniques as mammography. These are so important. I just cannot stress enough how necessary it is for women to take an active interest in their own health and body.”7
ACS guidelines evolve
It wasn’t until 1976 that the ACS issued its first major guidelines for mammography screening. The ACS suggested mammograms may be called for in women aged 35-39 if there was a personal history of breast cancer, and between ages 40 and 49 if their mother or sisters had a history of breast cancer. Women aged 50 years and older could have yearly screening. Thereafter, the use of mammography was encouraged more and more with each new set of recommendations.8
Between 1980 and 1982, these guidelines expanded to advising a baseline mammogram for women aged 35-39 years; that women consult with their physician between ages 40 and 49; and that women over 50 have a yearly mammogram.
Between 1983 and 1991, the recommendations were for a baseline mammogram for women aged 35-39 years; a mammogram every 1-2 years for women aged 40-49; and yearly mammograms for women aged 50 and up. The baseline mammogram recommendation was dropped in 1992.
Between 1997 and 2015, the stakes were upped, and women aged 40-49 years were now recommended to have yearly mammograms, as were still all women aged 50 years and older.
In October 2015, the ACS changed their recommendation to say that women aged 40-44 years should have the choice of initiating mammogram screening, and that the risks and benefits of doing so should be discussed with their physicians. Women aged 45 years and older were still recommended for yearly mammogram screening. That recommendation stands today.
Controversies arise over risk/benefit
The technology was not, however, universally embraced. “By the late 1970s, mammography had diffused much more widely but had become a source of tremendous controversy. On the one hand, advocates of the technology enthusiastically touted its ability to detect smaller, more curable cancers. On the other hand, critics asked whether breast x-rays, particularly for women aged 50 and younger, actually caused more harm than benefit.”2
In addition, meta-analyses of the nine major screening trials conducted between 1965 and 1991 indicated that the reduced breast cancer mortality with screening was dependent on age. In particular, the results for women aged 40-49 years and 50-59 years showed only borderline statistical significance, and they varied depending on how cases were accrued in individual trials.
“Assuming that differences actually exist, the absolute breast cancer mortality reduction per 10,000 women screened for 10 years ranged from 3 for age 39-49 years; 5-8 for age 50-59 years; and 12-21 for age 60=69 years,” according to a review by the U.S. Preventive Services Task Force.9
The estimates for the group aged 70-74 years were limited by low numbers of events in trials that had smaller numbers of women in this age group.
Age has continued to be a major factor in determining the cost/benefit of routine mammography screening, with the American College of Physicians stating in its 2019 guidelines, “The potential harms outweigh the benefits in most women aged 40 to 49 years,” and adding, “In average-risk women aged 75 years or older or in women with a life expectancy of 10 years or less, clinicians should discontinue screening for breast cancer.”10
A Cochrane Report from 2013 was equally critical: “If we assume that screening reduces breast cancer mortality by 15% after 13 years of follow-up and that overdiagnosis and overtreatment is at 30%, it means that for every 2,000 women invited for screening throughout 10 years, one will avoid dying of breast cancer and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress including anxiety and uncertainty for years because of false positive findings.”11
Conflicting voices exist
These reports advising a more nuanced evaluation of the benefits of mammography, however, were received with skepticism from doctors committed to the vision of breast cancer screening and convinced by anecdotal evidence in their own practices.
These reports were also in direct contradiction to recommendations made in 1997 by the National Cancer Institute, which recommended screening mammograms every 3 years for women aged 40-49 years at average risk of breast cancer.
Such scientific vacillation has contributed to a love/hate relationship with mammography in the mainstream media, fueling new controversies with regard to breast cancer screening, sometimes as much driven by public suspicion and political advocacy as by scientific evolution.
Vocal opponents of universal mammography screening arose throughout the years, and even the cases of Betty Ford and Happy Rockefeller have been called into question as iconic demonstrations of the effectiveness of screening. And although not directly linked to the issue of screening, the rebellion against the routine use of radical mastectomies, a technique pioneered by Halsted in 1894 and in continuing use into the modern era, sparked outrage in women’s rights activists who saw it as evidence of a patriarchal medical establishment making arbitrary decisions concerning women’s bodies. For example, feminist and breast cancer activist Rose Kushner argued against the unnecessary disfigurement of women’s bodies and urged the use and development of less drastic techniques, including partial mastectomies and lumpectomies as viable choices. And these choices were increasingly supported by the medical community as safe and effective alternatives for many patients.12
A 2015 paper in the Journal of the Royal Society of Medicine was bluntly titled “Mammography screening is harmful and should be abandoned.”13 According to the author, who was the editor of the 2013 Cochrane Report, “I believe that if screening had been a drug, it would have been withdrawn from the market long ago.” And the popular press has not been shy at weighing in on the controversy, driven, in part, by the lack of consensus and continually changing guidelines, with major publications such as U.S. News and World Report, the Washington Post, and others addressing the issue over the years. And even public advocacy groups such as the Susan G. Komen organization14 are supporting the more modern professional guidelines in taking a more nuanced approach to the discussion of risks and benefits for individual women.
In 2014, the Swiss Medical Board, a nationally appointed body, recommended that new mammography screening programs should not be instituted in that country and that limits be placed on current programs because of the imbalance between risks and benefits of mammography screening.15 And a study done in Australia in 2020 agreed, stating, “Using data of 30% overdiagnosis of women aged 50 to 69 years in the NSW [New South Wales] BreastScreen program in 2012, we calculated an Australian ratio of harm of overdiagnosis to benefit (breast cancer deaths avoided) of 15:1 and recommended stopping the invitation to screening.”16
Conclusion
If nothing else, the history of mammography shows that the interconnection of social factors with the rise of a medical technology can have profound impacts on patient care. Technology developed by men for women became a touchstone of resentment in a world ever more aware of sex and gender biases in everything from the conduct of clinical trials to the care (or lack thereof) of women with heart disease. Tied for so many years to a radically disfiguring and drastic form of surgery that affected what many felt to be a hallmark and representation of womanhood1,17 mammography also carried the weight of both the real and imaginary fears of radiation exposure.
Well into its development, the technology still found itself under intense public scrutiny, and was enmeshed in a continual media circus, with ping-ponging discussions of risk/benefit in the scientific literature fueling complaints by many of the dominance of a patriarchal medical community over women’s bodies.
With guidelines for mammography still evolving, questions still remaining, and new technologies such as digital imaging falling short in their hoped-for promise, the story remains unfinished, and the future still uncertain. One thing remains clear, however: In the right circumstances, with the right patient population, and properly executed, mammography has saved lives when tied to effective, early treatment, whatever its flaws and failings. This truth goes hand in hand with another reality: It may have also contributed to considerable unanticipated harm through overdiagnosis and overtreatment.
Overall, the history of mammography is a cautionary tale for the entire medical community and for the development of new medical technologies. The push-pull of the demand for progress to save lives and the slowness and often inconclusiveness of scientific studies that validate new technologies create gray areas, where social determinants and professional interests vie in an information vacuum for control of the narrative of risks vs. benefits.
The story of mammography is not yet concluded, and may never be, especially given the unlikelihood of conducting the massive randomized clinical trials that would be needed to settle the issue. It is more likely to remain controversial, at least until the technology of mammography becomes obsolete, replaced by something new and different, which will likely start the push-pull cycle all over again.
And regardless of the risks and benefits of mammography screening, the issue of treatment once breast cancer is identified is perhaps one of more overwhelming import.
References
1. Berry, DA. The Breast. 2013;22[Supplement 2]:S73-S76.
2. Lerner, BH. “To See Today With the Eyes of Tomorrow: A History of Screening Mammography.” Background paper for the Institute of Medicine report Mammography and Beyond: Developing Technologies for the Early Detection of Breast Cancer. 2001.
3. NCI website. The National Cancer Act of 1971. www.cancer.gov/about-nci/overview/history/national-cancer-act-1971.
4. Lerner BH. The Huffington Post, Sep. 26, 2014.
5. Wu C. Cancer Today. 2012;2(3): Sep. 27.
6. “”The New York Times. Oct. 17, 1987.
7. Ford B. Remarks to the American Cancer Society. 1975.
8. The American Cancer Society website. History of ACS Recommendations for the Early Detection of Cancer in People Without Symptoms.
9. Nelson HD et al. Screening for Breast Cancer: A Systematic Review to Update the 2009 U.S. Preventive Services Task Force Recommendation. 2016; Evidence Syntheses, No. 124; pp.29-49.
10. Qasseem A et al. Annals of Internal Medicine. 2019;170(8):547-60.
11. Gotzsche PC et al. Cochrane Report 2013.
12. Lerner, BH. West J Med. May 2001;174(5):362-5.
13. Gotzsche PC. J R Soc Med. 2015;108(9): 341-5.
14. Susan G. Komen website. Weighing the Benefits and Risks of Mammography.
15. Biller-Andorno N et al. N Engl J Med 2014;370:1965-7.
16. Burton R et al. JAMA Netw Open. 2020;3(6):e208249.
17. Webb C et al. Plast Surg. 2019;27(1):49-53.
Mark Lesney is the editor of Hematology News and the managing editor of MDedge.com/IDPractioner. He has a PhD in plant virology and a PhD in the history of science, with a focus on the history of biotechnology and medicine. He has worked as a writer/editor for the American Chemical Society, and has served as an adjunct assistant professor in the department of biochemistry and molecular & cellular biology at Georgetown University, Washington.
Science and technology emerge from and are shaped by social forces outside the laboratory and clinic. This is an essential fact of most new medical technology. In the Chronicles of Cancer series, part 1 of the story of mammography focused on the technological determinants of its development and use. Part 2 will focus on some of the social forces that shaped the development of mammography.
“Few medical issues have been as controversial – or as political, at least in the United States – as the role of mammographic screening for breast cancer,” according to Donald A. Berry, PhD, a biostatistician at the University of Texas MD Anderson Cancer Center, Houston.1
In fact, technology aside, the history of mammography has been and remains rife with controversy on the one side and vigorous promotion on the other, all enmeshed within the War on Cancer, corporate and professional interests, and the women’s rights movement’s growing issues with what was seen as a patriarchal medical establishment.
Today the issue of conflicts of interest are paramount in any discussion of new medical developments, from the early preclinical stages to ultimate deployment. Then, as now, professional and advocacy societies had a profound influence on government and social decision-making, but in that earlier, more trusting era, buoyed by the amazing changes that technology was bringing to everyday life and an unshakable commitment to and belief in “progress,” science and the medical community held a far more effective sway over the beliefs and behavior of the general population.
Women’s health observed
Although the main focus of the women’s movement with regard to breast cancer was a struggle against the common practice of routine radical mastectomies and a push toward breast-conserving surgeries, the issue of preventive care and screening with regard to women’s health was also a major concern.
Regarding mammography, early enthusiasm in the medical community and among the general public was profound. In 1969, Robert Egan described how mammography had a “certain magic appeal.” The patient, he continued, “feels something special is being done for her.” Women whose cancers had been discovered on a mammogram praised radiologists as heroes who had saved their lives.2
In that era, however, beyond the confines of the doctor’s office, mammography and breast cancer remained topics not discussed among the public at large, despite efforts by the American Cancer Society to change this.
ACS weighs in
Various groups had been promoting the benefits of breast self-examination since the 1930s, and in 1947, the American Cancer Society launched an awareness campaign, “Look for a Lump or Thickening in the Breast,” instructing women to perform a monthly breast self-exam. Between self-examination and clinical breast examinations in physicians’ offices, the ACS believed that smaller and more treatable breast cancers could be discovered.
In 1972, the ACS, working with the National Cancer Institute (NCI), inaugurated the Breast Cancer Detection Demonstration Project (BCDDP), which planned to screen over a quarter of a million American women for breast cancer. The initiative was a direct outgrowth of the National Cancer Act of 1971,3 the key legislation of the War on Cancer, promoted by President Richard Nixon in his State of the Union address in 1971 and responsible for the creation of the National Cancer Institute.
Arthur I. Holleb, MD, ACS senior vice president for medical affairs and research, announced that, “[T]he time has come for the American Cancer Society to mount a massive program on mammography just as we did with the Pap test,”2 according to Barron Lerner, MD, whose book “The Breast Cancer Wars” provides a history of the long-term controversies involved.4
The Pap test, widely promulgated in the 1950s and 1960s, had produced a decline in mortality from cervical cancer.
Despite the lack of data on effectiveness at earlier ages, the ACS chose to include women as young as 35 in the BCDDP in order “to inculcate them with ‘good health habits’ ” and “to make our screenee want to return periodically and to want to act as a missionary to bring other women into the screening process.”2
Celebrity status matters
All of the elements of a social revolution in the use of mammography were in place in the late 1960s, but the final triggers to raise social consciousness were the cases of several high-profile female celebrities. In 1973, beloved former child star Shirley Temple Black revealed her breast cancer diagnosis and mastectomy in an era when public discussion of cancer – especially breast cancer – was rare.4
But it wasn’t until 1974 that public awareness and media coverage exploded, sparked by First Lady Betty Ford’s outspokenness about her own experience of breast cancer. “In obituaries prior to the 1950s and 1960s, women who died from breast cancer were often listed as dying from ‘a prolonged disease’ or ‘a woman’s disease,’ ” according to Tasha Dubriwny, PhD, now an associate professor of communication and women’s and gender studies at Texas A&M University, College Station, when interviewed by the American Association for Cancer Research.5 Betty Ford openly addressed her breast cancer diagnosis and treatment and became a prominent advocate for early screening, transforming the landscape of breast cancer awareness. And although her diagnosis was based on clinical examination rather than mammography, the boost it gave to overall screening was indisputable.
“Within weeks [after Betty Ford’s announcement] thousands of women who had been reluctant to examine their breasts inundated cancer screening centers,” according to a 1987 article in the New York Times.6 Among these women was Happy Rockefeller, the wife of Vice President Nelson A. Rockefeller. Happy Rockefeller, too, learned upon screening that she had breast cancer, and alongside Betty Ford she became an icon of breast cancer screening.
“Ford’s lesson for other women was straightforward: Get a mammogram, which she had not done. The American Cancer Society and National Cancer Institute had recently mounted a demonstration project to promote the detection of breast cancer as early as possible, when it was presumed to be more curable. The degree to which women embraced Ford’s message became clear through the famous ‘Betty Ford blip.’ So many women got breast examinations and mammograms for the first time after Ford’s announcement that the actual incidence of breast cancer in the United States went up by 15 percent.”4
In a 1975 address to the American Cancer Society, Betty Ford said: “One day I appeared to be fine and the next day I was in the hospital for a mastectomy. It made me realize how many women in the country could be in the same situation. That realization made me decide to discuss my breast cancer operation openly, because I thought of all the lives in jeopardy. My experience and frank discussion of breast cancer did prompt many women to learn about self-examination, regular checkups, and such detection techniques as mammography. These are so important. I just cannot stress enough how necessary it is for women to take an active interest in their own health and body.”7
ACS guidelines evolve
It wasn’t until 1976 that the ACS issued its first major guidelines for mammography screening. The ACS suggested mammograms may be called for in women aged 35-39 if there was a personal history of breast cancer, and between ages 40 and 49 if their mother or sisters had a history of breast cancer. Women aged 50 years and older could have yearly screening. Thereafter, the use of mammography was encouraged more and more with each new set of recommendations.8
Between 1980 and 1982, these guidelines expanded to advising a baseline mammogram for women aged 35-39 years; that women consult with their physician between ages 40 and 49; and that women over 50 have a yearly mammogram.
Between 1983 and 1991, the recommendations were for a baseline mammogram for women aged 35-39 years; a mammogram every 1-2 years for women aged 40-49; and yearly mammograms for women aged 50 and up. The baseline mammogram recommendation was dropped in 1992.
Between 1997 and 2015, the stakes were raised: yearly mammograms were now recommended for women aged 40-49 years, as well as for all women aged 50 years and older.
In October 2015, the ACS changed its recommendation: women aged 40-44 years should have the choice of initiating mammogram screening after discussing the risks and benefits with their physicians; women aged 45-54 years should be screened yearly; and women aged 55 years and older may transition to screening every 2 years or continue yearly screening. That recommendation stands today.
Controversies arise over risk/benefit
The technology was not, however, universally embraced. “By the late 1970s, mammography had diffused much more widely but had become a source of tremendous controversy. On the one hand, advocates of the technology enthusiastically touted its ability to detect smaller, more curable cancers. On the other hand, critics asked whether breast x-rays, particularly for women aged 50 and younger, actually caused more harm than benefit.”2
In addition, meta-analyses of the nine major screening trials conducted between 1965 and 1991 indicated that the reduced breast cancer mortality with screening was dependent on age. In particular, the results for women aged 40-49 years and 50-59 years showed only borderline statistical significance, and they varied depending on how cases were accrued in individual trials.
“Assuming that differences actually exist, the absolute breast cancer mortality reduction per 10,000 women screened for 10 years ranged from 3 for age 39-49 years; 5-8 for age 50-59 years; and 12-21 for age 60-69 years,” according to a review by the U.S. Preventive Services Task Force.9
The estimates for the group aged 70-74 years were limited by low numbers of events in trials that had smaller numbers of women in this age group.
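Expressed another way, those absolute reductions imply a number needed to screen for each age band. The short Python sketch below is our own illustration, not part of the USPSTF report; it simply carries the quoted per-10,000 figures through the division.

```python
# Convert the USPSTF absolute mortality reductions quoted above into an
# approximate "number needed to screen for 10 years to avoid one breast
# cancer death" per age band. Inputs are the review's figures; ranges are
# carried through directly.
per_10k = {"39-49": (3, 3), "50-59": (5, 8), "60-69": (12, 21)}

for age, (low, high) in per_10k.items():
    nns_high = 10_000 / low    # fewer deaths avoided -> more women screened per death
    nns_low = 10_000 / high
    if low == high:
        print(f"Age {age}: ~{nns_high:.0f} women screened per death avoided")
    else:
        print(f"Age {age}: ~{nns_low:.0f}-{nns_high:.0f} women screened per death avoided")
```

Roughly 3,300 women aged 39-49 would need to be screened for 10 years to avoid one breast cancer death, versus roughly 480-830 women aged 60-69, which is the age dependence the trials pointed to.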
Age has continued to be a major factor in determining the cost/benefit of routine mammography screening, with the American College of Physicians stating in its 2019 guidelines, “The potential harms outweigh the benefits in most women aged 40 to 49 years,” and adding, “In average-risk women aged 75 years or older or in women with a life expectancy of 10 years or less, clinicians should discontinue screening for breast cancer.”10
A Cochrane Report from 2013 was equally critical: “If we assume that screening reduces breast cancer mortality by 15% after 13 years of follow-up and that overdiagnosis and overtreatment is at 30%, it means that for every 2,000 women invited for screening throughout 10 years, one will avoid dying of breast cancer and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress including anxiety and uncertainty for years because of false positive findings.”11
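The Cochrane figures lend themselves to the same back-of-the-envelope treatment. The sketch below is again only an illustration using the numbers quoted above.

```python
# Worked arithmetic from the 2013 Cochrane figures quoted above.
# The inputs are the review's numbers; everything else is simple division.
invited = 2000                  # women invited to screening for 10 years
deaths_avoided = 1              # breast cancer deaths avoided
overdiagnosed = 10              # healthy women treated unnecessarily
false_positive_distress = 200   # "more than 200" with psychological distress

print(f"Invitations per death avoided: {invited / deaths_avoided:.0f}")
print(f"Overdiagnosed per death avoided: {overdiagnosed / deaths_avoided:.0f}")
print(f"False-positive distress cases per death avoided: >{false_positive_distress / deaths_avoided:.0f}")
```

On these assumptions, each death avoided comes at the cost of 10 overdiagnoses and more than 200 false-positive scares, which is the trade-off at the heart of the controversy.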
Conflicting voices exist
These reports advising a more nuanced evaluation of the benefits of mammography, however, were received with skepticism from doctors committed to the vision of breast cancer screening and convinced by anecdotal evidence in their own practices.
These reports also directly contradicted recommendations made in 1997 by the National Cancer Institute, which advised screening mammograms every 1 to 2 years for women aged 40-49 years at average risk of breast cancer.
Such scientific vacillation has contributed to a love/hate relationship with mammography in the mainstream media, fueling new controversies with regard to breast cancer screening, sometimes as much driven by public suspicion and political advocacy as by scientific evolution.
Vocal opponents of universal mammography screening arose over the years, and even the cases of Betty Ford and Happy Rockefeller have been called into question as iconic demonstrations of the effectiveness of screening. And although not directly linked to the issue of screening, the routine use of radical mastectomy, a technique pioneered by Halsted in 1894 and still in use in the modern era, sparked outrage among women’s rights activists, who saw it as evidence of a patriarchal medical establishment making arbitrary decisions about women’s bodies. Feminist and breast cancer activist Rose Kushner, for example, argued against the unnecessary disfigurement of women’s bodies and urged the use and development of less drastic techniques, including partial mastectomies and lumpectomies, as viable choices; these alternatives were increasingly supported by the medical community as safe and effective for many patients.12
A 2015 paper in the Journal of the Royal Society of Medicine was bluntly titled “Mammography screening is harmful and should be abandoned.”13 According to the author, who was also an author of the 2013 Cochrane review, “I believe that if screening had been a drug, it would have been withdrawn from the market long ago.” The popular press has not been shy about weighing in on the controversy, driven in part by the lack of consensus and continually changing guidelines, with major publications such as U.S. News & World Report and the Washington Post addressing the issue over the years. Even public advocacy groups such as the Susan G. Komen organization14 now follow the more modern professional guidelines in taking a nuanced approach to discussing risks and benefits for individual women.
In 2014, the Swiss Medical Board, a nationally appointed body, recommended that new mammography screening programs should not be instituted in that country and that limits be placed on current programs because of the imbalance between risks and benefits of mammography screening.15 And a study done in Australia in 2020 agreed, stating, “Using data of 30% overdiagnosis of women aged 50 to 69 years in the NSW [New South Wales] BreastScreen program in 2012, we calculated an Australian ratio of harm of overdiagnosis to benefit (breast cancer deaths avoided) of 15:1 and recommended stopping the invitation to screening.”16
Conclusion
If nothing else, the history of mammography shows that the interconnection of social factors with the rise of a medical technology can have profound impacts on patient care. A technology developed by men for women became a touchstone of resentment in a world ever more aware of sex and gender biases in everything from the conduct of clinical trials to the care (or lack thereof) of women with heart disease. Tied for so many years to a radically disfiguring form of surgery that affected what many felt to be a hallmark and representation of womanhood,1,17 mammography also carried the weight of fears of radiation exposure, both real and imagined.
Well into its development, the technology still found itself under intense public scrutiny, and was enmeshed in a continual media circus, with ping-ponging discussions of risk/benefit in the scientific literature fueling complaints by many of the dominance of a patriarchal medical community over women’s bodies.
With guidelines for mammography still evolving, questions still remaining, and new technologies such as digital imaging falling short in their hoped-for promise, the story remains unfinished, and the future still uncertain. One thing remains clear, however: In the right circumstances, with the right patient population, and properly executed, mammography has saved lives when tied to effective, early treatment, whatever its flaws and failings. This truth goes hand in hand with another reality: It may have also contributed to considerable unanticipated harm through overdiagnosis and overtreatment.
Overall, the history of mammography is a cautionary tale for the entire medical community and for the development of new medical technologies. The push-pull of the demand for progress to save lives and the slowness and often inconclusiveness of scientific studies that validate new technologies create gray areas, where social determinants and professional interests vie in an information vacuum for control of the narrative of risks vs. benefits.
The story of mammography is not yet concluded, and may never be, especially given the unlikelihood of conducting the massive randomized clinical trials that would be needed to settle the issue. It is more likely to remain controversial, at least until the technology of mammography becomes obsolete, replaced by something new and different, which will likely start the push-pull cycle all over again.
And regardless of the risks and benefits of mammography screening, the question of how to treat breast cancer once it is identified is perhaps of even greater import.
References
1. Berry DA. The Breast. 2013;22(Suppl 2):S73-S76.
2. Lerner BH. “To See Today With the Eyes of Tomorrow: A History of Screening Mammography.” Background paper for the Institute of Medicine report Mammography and Beyond: Developing Technologies for the Early Detection of Breast Cancer. 2001.
3. NCI website. The National Cancer Act of 1971. www.cancer.gov/about-nci/overview/history/national-cancer-act-1971.
4. Lerner BH. The Huffington Post. Sep. 26, 2014.
5. Wu C. Cancer Today. 2012;2(3).
6. The New York Times. Oct. 17, 1987.
7. Ford B. Remarks to the American Cancer Society. 1975.
8. American Cancer Society website. History of ACS Recommendations for the Early Detection of Cancer in People Without Symptoms.
9. Nelson HD et al. Screening for Breast Cancer: A Systematic Review to Update the 2009 U.S. Preventive Services Task Force Recommendation. Evidence Syntheses, No. 124. 2016:29-49.
10. Qaseem A et al. Ann Intern Med. 2019;170(8):547-60.
11. Gotzsche PC et al. Cochrane Database Syst Rev. 2013.
12. Lerner BH. West J Med. 2001;174(5):362-5.
13. Gotzsche PC. J R Soc Med. 2015;108(9):341-5.
14. Susan G. Komen website. Weighing the Benefits and Risks of Mammography.
15. Biller-Andorno N et al. N Engl J Med. 2014;370:1965-7.
16. Burton R et al. JAMA Netw Open. 2020;3(6):e208249.
17. Webb C et al. Plast Surg. 2019;27(1):49-53.
Mark Lesney is the editor of Hematology News and the managing editor of MDedge.com/IDPractioner. He has a PhD in plant virology and a PhD in the history of science, with a focus on the history of biotechnology and medicine. He has worked as a writer/editor for the American Chemical Society, and has served as an adjunct assistant professor in the department of biochemistry and molecular & cellular biology at Georgetown University, Washington.
Blood biomarkers could help predict when athletes recover from concussions
Blood biomarkers may help predict how quickly athletes recover from sports-related concussions, according to a new study of collegiate athletes and recovery time. “Although preliminary, the current results highlight the potential role of biomarkers in tracking neuronal recovery, which may be associated with duration of [return to sport],” wrote Cassandra L. Pattinson, PhD, of the University of Queensland, Brisbane, Australia, and the National Institutes of Health, Bethesda, Md., along with coauthors. The study was published in JAMA Network Open.
To determine if three specific blood biomarkers – total tau protein, glial fibrillary acidic protein (GFAP), and neurofilament light chain protein (NfL) – can help predict when athletes should return from sports-related concussions, a multicenter, prospective diagnostic study was launched and led by the Advanced Research Core (ARC) of the Concussion Assessment, Research, and Education (CARE) Consortium. The consortium is a joint effort of the National Collegiate Athletics Association (NCAA) and the U.S. Department of Defense.
From the CARE ARC database, researchers evaluated 127 eligible student athletes who had experienced a sports-related concussion, undergone clinical testing and blood collection before and after their injuries, and returned to their sports. Their average age was 18.9 years; 76% were men, and 65% were White. Biomarker levels were measured from nonfasting blood samples via ultrasensitive single-molecule array technology. Because current NCAA guidelines indicate that most athletes will be asymptomatic roughly 2 weeks after a concussion, the study used 14 days as its cutoff.
Among the 127 athletes, the median return-to-sport time was 14 days; 65 returned to their sports in less than 14 days while 62 returned to their sports in 14 days or more. According to the study’s linear mixed models, athletes with a return-to-sport time of 14 days or longer had significantly higher total tau levels at 24-48 hours post injury (mean difference –0.51 pg/mL, 95% confidence interval, –0.88 to –0.14; P = .008) and when symptoms had resolved (mean difference –0.71 pg/mL, 95% CI, –1.09 to –0.34; P < .001) compared with athletes with a return-to-sport time of less than 14 days. Athletes who returned in 14 days or more also had comparatively lower levels of GFAP postinjury than did those who returned in under 14 days (4.39 pg/mL versus 4.72 pg/mL; P = .04).
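For readers curious about the analysis structure, the comparisons above come from linear mixed models of biomarker levels by return-to-sport group and time point. The following is a rough, hypothetical sketch of such a model fit to simulated data; the variable names, simulated effect sizes, and model specification are ours, not the study’s.

```python
# A minimal, hypothetical sketch of a linear mixed model comparing
# log-scale biomarker levels between return-to-sport groups across
# post-injury time points, with a random intercept per athlete.
# All data here are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_athletes = 127
athlete = np.repeat(np.arange(n_athletes), 3)                  # 3 visits each
visit = np.tile(["baseline", "24-48h", "asymptomatic"], n_athletes)
long_recovery = np.repeat(rng.integers(0, 2, n_athletes), 3)   # 1 = >=14 days to return

# Simulate total tau (pg/mL): higher at 24-48 hours in the long-recovery group
tau = (1.0
       + 0.5 * ((visit == "24-48h") & (long_recovery == 1))
       + rng.normal(0, 0.3, n_athletes * 3))

df = pd.DataFrame({"athlete": athlete, "visit": visit,
                   "long_recovery": long_recovery, "tau": tau})

# The group-by-visit interaction tests whether biomarker levels at each
# time point differ by return-to-sport duration
model = smf.mixedlm("tau ~ C(visit) * long_recovery", df, groups=df["athlete"])
result = model.fit()
print(result.summary())
```

The repeated-measures structure (each athlete contributing several samples) is what makes a mixed model, rather than a simple t test at each time point, the natural choice here.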
Preliminary steps toward an appropriate point-of-care test
“This particular study is one of several emerging studies on what these biomarkers look like,” Brian W. Hainline, MD, chief medical officer of the NCAA, said in an interview. “It’s all still very preliminary – you couldn’t make policy changes based on what we have – but the data is accumulating. Ultimately, we should be able to perform a multivariate analysis of all the different objective biomarkers, looking at repetitive head impact exposure, looking at imaging, looking at these blood-based biomarkers. Then you can say, ‘OK, what can we do? Can we actually predict recovery, who is likely or less likely to do well?’ ”
“It’s not realistic to be taking blood samples all the time,” said Dr. Hainline, who was not involved in the study. “Another goal, once we know which biomarkers are valuable, is to convert to a point-of-care test. You get a finger prick or even a salivary test and we get the result immediately; that’s the direction that all of this is heading. But first, we have to lay out the groundwork. We envision a day, in the not too distant future, where we can get this information much more quickly.”
The authors acknowledged their study’s limitations, including an inability to standardize the time of biomarker collection and the fact that they analyzed a “relatively small number of athletes” who met their specific criteria. That said, they emphasized that their work is based on “the largest prospective sample of sports-related concussions in athletes to date” and that they “anticipate that we will be able to continue to gather a more representative sample” in the future to better generalize to the larger collegiate community.
The study was supported by the Grand Alliance Concussion Assessment, Research, and Education Consortium, which was funded in part by the NCAA and the Department of Defense. The authors disclosed receiving grants and travel reimbursements from – or working as advisers or consultants for – various organizations, college programs, and sports leagues.
SOURCE: Pattinson CL, et al. JAMA Netw Open. 2020 Aug 27. doi: 10.1001/jamanetworkopen.2020.13191.
FROM JAMA NETWORK OPEN
New schizophrenia treatment guideline released
The American Psychiatric Association has released a new evidence-based practice guideline for the treatment of schizophrenia.
The guideline focuses on assessment and treatment planning, which are integral to patient-centered care, and includes recommendations regarding pharmacotherapy, with particular focus on clozapine, as well as previously recommended and new psychosocial interventions.
“Our intention was to make recommendations to treat the whole person and take into account their family and other significant people in their lives,” George Keepers, MD, chair of the guideline writing group, said in an interview.
‘State-of-the-art methodology’
Dr. Keepers, professor of psychiatry at Oregon Health and Science University, Portland, explained the rigorous process that informs the current guideline, which was “based not solely on expert consensus but was preceded by an evidence-based review of the literature that was then discussed, digested, and distilled into specific recommendations.”
Many current recommendations are “similar to previous recommendations, but there are a few important differences,” he said.
Two experts in schizophrenia who were not involved in guideline authorship praised it for its usefulness and methodology.
Philip D. Harvey, PhD, Leonard M. Miller Professor of Psychiatry and Behavioral Sciences, University of Miami, said in an interview that the guideline “clarified the typical treatment algorithm from first episode to treatment resistance [which is] very clearly laid out for the first time.”
Christoph Correll, MD, professor of psychiatry and molecular medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, N.Y., said in an interview that the guideline “followed state-of-the-art methodology.”
First steps
The guideline recommends beginning with assessment of the patient and determination of the treatment plan.
Patients should be “treated with an antipsychotic medication and monitored for effectiveness and side effects.” Even after the patient’s symptoms have improved, antipsychotic treatment should continue.
For patients whose symptoms have improved, treatment should continue with the same antipsychotic and should not be switched.
“The problem we’re addressing in this recommendation is that patients are often treated with an effective medication and then forced, by circumstances or their insurance company, to switch to another that may not be effective for them, resulting in unnecessary relapses of the illness,” said Dr. Keepers.
Clinicians, he said, should resist such pressures “and do what’s in the best interest of the patient.”
“The guideline called out that antipsychotics that are effective and tolerated should be continued, without specifying a duration of treatment, thereby indicating indirectly that there is no clear end of the recommendation for ongoing maintenance treatment in individuals with schizophrenia,” said Dr. Correll.
Clozapine underutilized
The guideline highlights the role of clozapine and recommends its use for patients with treatment-resistant schizophrenia and those at risk for suicide. Clozapine is also recommended for patients at “substantial” risk for aggressive behavior, regardless of other treatments.
“Clozapine is underutilized for treatment of schizophrenia in the U.S. and a number of other countries, but it is a really important treatment for patients who don’t respond to other antipsychotic agents,” said Dr. Keepers.
“With this recommendation, we hope that more patients will wind up receiving the medication and benefiting from it,” he added.
In addition, patients should receive treatment with a long-acting injectable antipsychotic “if they prefer such treatment or if they have a history of poor or uncertain adherence” (level of evidence, 2B).
The guideline authors “are recommending long-acting injectable medications for people who want them, not just people with poor prior adherence, which is a critical step,” said Dr. Harvey, director of the division of psychology at the University of Miami.
Managing antipsychotic side effects
The guideline offers recommendations for patients experiencing antipsychotic-induced side effects.
VMAT2 (vesicular monoamine transporter 2) inhibitors, a class of drugs that has become available since the last schizophrenia guidelines, “are effective in tardive dyskinesia. It is important that patients with tardive dyskinesia have access to these drugs because they do work,” Dr. Keepers said.
Adequate funding needed
Recommended psychosocial interventions include treatment in a specialty care program for patients with schizophrenia who are experiencing a first episode of psychosis, use of cognitive-behavioral therapy for psychosis, psychoeducation, and supported employment services (2B).
“We reviewed very good data showing that patients who receive these services are more likely to be able to be employed and less likely to be rehospitalized or have a relapse,” Dr. Keepers observed.
In addition, patients with schizophrenia should receive assertive community treatment interventions if there is a “history of poor engagement with services leading to frequent relapse or social disruption.”
Family interventions are recommended for patients who have ongoing contact with their families (2B), and patients should also receive interventions “aimed at developing self-management skills and enhancing person-oriented recovery.” They should receive cognitive remediation, social skills training, and supportive psychotherapy.
Dr. Keepers pointed to “major barriers” to providing some of these psychosocial treatments. “They are beyond the scope of someone in an individual private practice situation, so they need to be delivered within the context of treatment programs that are either publicly or privately based,” he said.
“Psychiatrists can and do work closely with community and mental health centers, psychologists, and social workers who can provide these kinds of treatments,” but “many [treatments] require specialized skills and training before they can be offered, and there is a shortage of personnel to deliver them,” he noted.
“Both the national and state governments have not provided adequate funding for treatment of individuals with this condition [schizophrenia],” he added.
Dr. Keepers reports no relevant financial relationships. The other authors’ disclosures are listed in the original article. Dr. Harvey reports no relevant financial relationships. Dr. Correll disclosed ties to Acadia, Alkermes, Allergan, Angelini, Axsome, Gedeon Richter, Gerson Lehrman Group, Indivior, IntraCellular Therapies, Janssen/J&J, LB Pharma, Lundbeck, MedAvante-ProPhase, Medscape, Merck, Mylan, Neurocrine, Noven, Otsuka, Pfizer, Recordati, Rovi, Servier, Sumitomo Dainippon, Sunovion, Supernus, Takeda, and Teva. He has received grant support from Janssen and Takeda. He is also a stock option holder of LB Pharma.
A version of this article originally appeared on Medscape.com.
Gene signature may improve prognostication in ovarian cancer
A 101-gene expression signature may improve prognostication for women with high-grade serous ovarian cancer, according to a study published in Annals of Oncology.
“Gene expression signature tests for prognosis are available for other cancers, such as breast cancer, and these help with treatment decisions, but no such tests are available for ovarian cancer,” senior investigator Susan J. Ramus, PhD, of Lowy Cancer Research Centre, University of NSW Sydney, commented in an interview.
Dr. Ramus and associates developed and validated their 101-gene expression signature using pretreatment tumor tissue from 3,769 women with high-grade serous ovarian cancer treated on 21 studies.
The investigators found this signature, called OTTA-SPOT (Ovarian Tumor Tissue Analysis Consortium–Stratified Prognosis of Ovarian Tumors), performed well at stratifying women according to overall survival. Median overall survival times ranged from about 2 years for patients in the top quintile of scores to more than 9 years for patients in the bottom quintile.
Moreover, OTTA-SPOT significantly improved prognostication when added to age and stage.
“This tumor test works on formalin-fixed, paraffin-embedded tumors, as collected routinely in clinical practice,” Dr. Ramus noted. “Women predicted to have poor survival using current treatments could be included in clinical trials to rapidly get alternative treatment. Many of the genes included in this test are targets of known drugs, so this information could lead to alternative targeted treatments.
“This test is not ready for routine clinical care yet,” she added. “The next step would be to include this signature as part of a clinical trial. If patients predicted to have poor survival are given alternative treatments that improve their survival, then the test could be included in treatment decisions.”
Study details
Dr. Ramus and colleagues began this work by measuring tumor expression of 513 genes selected via meta-analysis. The team then developed a gene expression assay and a prognostic signature for overall survival, which they trained on tumors from 2,702 women in 15 studies and validated on an independent set of tumors from 1,067 women in 6 studies.
In analyses adjusted for covariates, expression levels of 276 genes were associated with overall survival. The signature with the best prognostic performance contained 101 genes that were enriched in pathways having treatment implications, such as pathways involved in immune response, mitosis, and homologous recombination repair.
Adding the signature to age and stage alone improved prediction of 2- and 5-year overall survival. The area under the curve increased from 0.61 to 0.69 for 2-year overall survival and from 0.62 to 0.75 for 5-year overall survival (with nonoverlapping 95% confidence intervals for 5-year survival).
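As a rough illustration of what such an AUC comparison involves, here is a hypothetical sketch on simulated data: a baseline classifier on age and stage versus one that adds a prognostic score. The variables, model, and numbers are ours, not the authors’.

```python
# A minimal sketch (not the authors' code) of comparing discrimination,
# by AUC, of a baseline model (age + stage) against one that adds a
# prognostic score. All data here are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
age = rng.normal(60, 10, n)
stage = rng.integers(1, 5, n)          # stages 1-4
score = rng.normal(0, 1, n)            # standardized gene expression score

# Simulated 5-year death outcome that genuinely depends on the score
logit = -4 + 0.02 * age + 0.5 * stage + 0.9 * score
died_5y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, stage])
X_full = np.column_stack([age, stage, score])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, died_5y, test_size=0.5, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression().fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC age+stage: {auc_base:.2f}; with signature: {auc_full:.2f}")
```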
Each standard deviation increase in the gene expression score was associated with a more than doubling of the risk of death (hazard ratio, 2.35; P < .001).
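A hazard ratio per standard deviation of this kind comes from a Cox proportional hazards model. Below is a small, hypothetical sketch on simulated data, using the lifelines library; all names and values are illustrative, and the reported 2.35 is used only to generate the simulation.

```python
# Hypothetical sketch: a Cox model in which a 1-standard-deviation increase
# in a gene expression score roughly doubles the hazard of death.
# Data are simulated; only the target hazard ratio comes from the paper.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
score_sd = rng.normal(0, 1, n)           # standardized expression score
true_hr_per_sd = 2.35                    # as reported in the paper
hazard = 0.1 * np.exp(np.log(true_hr_per_sd) * score_sd)
time = rng.exponential(1 / hazard)       # survival times (years)
censor_time = rng.uniform(0, 10, n)

df = pd.DataFrame({
    "duration": np.minimum(time, censor_time),
    "event": (time <= censor_time).astype(int),
    "score_sd": score_sd,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.summary[["exp(coef)"]])        # estimate should land near 2.35
```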
The median overall survival by gene expression score quintile was 9.5 years for patients in the first quintile, 5.4 years for patients in the second, 3.8 years for patients in the third, 3.2 years for patients in the fourth, and 2.3 years for patients in the fifth.
This study was funded by the National Institutes of Health/National Cancer Institute, the Canadian Institutes for Health Research, and the Department of Defense Ovarian Cancer Research Program. Some of the authors disclosed financial relationships with a range of companies. Dr. Ramus disclosed no conflicts of interest.
SOURCE: Millstein J et al. Ann Oncol. 2020 Sep;31(9):1240-50.
Hyperpigmentation of the Tongue
The Diagnosis: Addison Disease in the Context of Polyglandular Autoimmune Syndrome Type 2
The patient’s hormone levels as well as distinct clinical features led to a diagnosis of Addison disease in the context of polyglandular autoimmune syndrome type 2 (PAS-2). Approximately 50% of PAS-2 cases are familial, and different modes of inheritance (autosomal recessive, autosomal dominant, and polygenic) have been reported. Women are affected up to 3 times more often than men.1,2 The age of onset ranges from infancy to late adulthood, with most cases occurring in early adulthood. Primary adrenal insufficiency (Addison disease) is the principal manifestation of PAS-2. It appears in approximately 50% of patients, occurring simultaneously with autoimmune thyroid disease or diabetes mellitus in 20% of patients and following them in 30% of patients.1,2 Autoimmune thyroid diseases, such as chronic autoimmune thyroiditis and occasionally Graves disease, as well as type 1 diabetes mellitus also are common. Polyglandular autoimmune syndrome type 2 with primary adrenal insufficiency and autoimmune thyroid disease was formerly referred to as Schmidt syndrome.3 It must be differentiated from polyglandular autoimmune syndrome type 1, a rare condition also referred to as autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy syndrome.1,3
As with any other cause of adrenal insufficiency, treatment involves hormone replacement up to physiologic levels, with doses adjusted to the level of stress (eg, surgery or infection, which call for a dose increase). Our patient was diagnosed on the basis of hormone levels and clinical features and was started on hydrocortisone 30 mg daily and levothyroxine 50 μg daily. No improvement was noted during the first 6 months of treatment; the patient remains under yearly follow-up, and the mucosal hyperpigmentation faded approximately 6 months after hormonal homeostasis was achieved.
Peutz-Jeghers syndrome is inherited in an autosomal-dominant fashion. It is characterized by multiple hamartomatous polyps in the gastrointestinal tract, mucocutaneous pigmentation, and an increased risk for gastrointestinal and nongastrointestinal cancer. Mucocutaneous pigmented macules most commonly occur on the lips and perioral region, buccal mucosa, and the palms and soles. However, mucocutaneous pigmentation usually occurs during the first 1 to 2 years of life, increases in size and number over the ensuing years, and usually fades after puberty.4
Laugier-Hunziker syndrome is an acquired benign disorder presenting in adults with lentigines on the lips and buccal mucosa. It frequently is accompanied by longitudinal melanonychia, macular pigmentation of the genitals, and involvement of the palms and soles. The diagnosis of Laugier-Hunziker syndrome is one of exclusion: other causes of oral and labial hyperpigmentation, including physiologic pigmentation seen in darker-skinned individuals and inherited diseases associated with lentiginosis, must first be ruled out, which requires a complete physical examination, endoscopy, and colonoscopy.5
A wide variety of drugs and chemicals can lead to diffuse cutaneous hyperpigmentation. Increased production of melanin and/or the deposition of drug complexes or metals in the dermis is responsible for the skin discoloration. Drugs that most often cause hyperpigmentation on mucosal surfaces are hydroxychloroquine, minocycline, nicotine, silver, and some chemotherapy agents. The hyperpigmentation usually resolves with discontinuation of the offending agent, but the course may be prolonged over months to years.6
Changes in the skin and subcutaneous tissue occur in patients with Cushing syndrome. Hyperpigmentation is induced by increased secretion of adrenocorticotropic hormone, not cortisol, and occurs most often in patients with the ectopic adrenocorticotropic hormone syndrome. Hyperpigmentation may be generalized but is more intense in areas exposed to light (eg, face, neck, dorsal aspects of the hands) or to chronic mild trauma, friction, or pressure (eg, elbows, knees, spine, knuckles). Patchy pigmentation may occur on the inner surface of the lips and the buccal mucosa along the line of dental occlusion. Acanthosis nigricans also can be present in the axillae and around the neck.7
1. Ferre EM, Rose SR, Rosenzweig SD, et al. Redefined clinical features and diagnostic criteria in autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy. JCI Insight. 2016;1:e88782.
2. Orlova EM, Sozaeva LS, Kareva MA, et al. Expanding the phenotypic and genotypic landscape of autoimmune polyendocrine syndrome type 1. J Clin Endocrinol Metab. 2017;102:3546-3556.
3. Ahonen P, Myllärniemi S, Sipilä I, et al. Clinical variation of autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED) in a series of 68 patients. N Engl J Med. 1990;322:1829-1836.
4. Utsunomiya J, Gocho H, Miyanaga T, et al. Peutz-Jeghers syndrome: its natural course and management. Johns Hopkins Med J. 1975;136:71-82.
5. Nayak RS, Kotrashetti VS, Hosmani JV. Laugier-Hunziker syndrome. J Oral Maxillofac Pathol. 2012;16:245-250.
6. Krause W. Drug-induced hyperpigmentation: a systematic review. J Dtsch Dermatol Ges. 2013;11:644-651.
7. Newell-Price J, Trainer P, Besser M, et al. The diagnosis and differential diagnosis of Cushing’s syndrome and pseudo-Cushing’s states. Endocr Rev. 1998;19:647-672.
An otherwise healthy 17-year-old adolescent girl from Spain presented with hyperpigmentation on the tongue of several weeks’ duration. She denied licking graphite pencils or pens. Physical examination revealed pigmentation in the palmar creases and a slight generalized tan. The patient denied sun exposure. Neither melanonychia nor genital hyperpigmented lesions were noted. Blood tests showed overt hypothyroidism.