Headache after drinking red wine? This could be why
This transcript has been edited for clarity.
Robert Louis Stevenson famously said, “Wine is bottled poetry.” And I think it works quite well. I’ve had wines that are simple, elegant, and unpretentious like Emily Dickinson, and passionate and mysterious like Pablo Neruda. And I’ve had wines that are more analogous to the limerick you might read scrawled on a rest-stop bathroom wall. Those ones give me headaches.
Headaches are common, and headaches after drinking alcohol are particularly common. An interesting epidemiologic phenomenon, not yet adequately explained, is why red wine is associated with more headache than other forms of alcohol. There have been many studies fingering many suspects, from sulfites to tannins to various phenolic compounds, but none have really provided a concrete explanation for what might be going on.
A new hypothesis came to the fore on Nov. 20 in the journal Scientific Reports.
To understand the idea, first a reminder of what happens when you drink alcohol, physiologically.
Alcohol is metabolized by the enzyme alcohol dehydrogenase in the gut and then in the liver. That turns it into acetaldehyde, a toxic metabolite. In most of us, aldehyde dehydrogenase (ALDH) quickly metabolizes acetaldehyde to the inert acetate, which can be safely excreted.
I say “most of us” because some populations, particularly those with East Asian ancestry, have a mutation in the ALDH gene which can lead to accumulation of toxic acetaldehyde with alcohol consumption – leading to facial flushing, nausea, and headache.
We can also inhibit the enzyme medically. That’s what the drug disulfiram, also known as Antabuse, does. It doesn’t prevent you from wanting to drink; it makes the consequences of drinking incredibly aversive.
The researchers focused on the aldehyde dehydrogenase enzyme and conducted a screening study: Are there any compounds in red wine that naturally inhibit ALDH?
The results pointed squarely at quercetin, and particularly its metabolite quercetin glucuronide, which, at 20 micromolar concentrations, inhibited about 80% of ALDH activity.
Quercetin is a flavonoid – a compound that gives color to a variety of vegetables and fruits, including grapes. In a test tube, it is an antioxidant, which is enough evidence to spawn a small quercetin-as-supplement industry, but there is no convincing evidence that it is medically useful. The authors then examined the concentrations of quercetin glucuronide needed to achieve various levels of ALDH inhibition.
By about 10 micromolar, we see a decent amount of inhibition. Disulfiram is about 10 times more potent than that, but then again, you don’t drink three glasses of disulfiram with Thanksgiving dinner.
This is where this study stops. But it obviously tells us very little about what might be happening in the human body. For that, we need to ask the question: Can we get our quercetin levels to 10 micromolar? Is that remotely achievable?
Let’s start with how much quercetin there is in red wine. Like all things wine, it varies, but this study examining Australian wines found mean concentrations of 11 mg/L. The highest value I saw was close to 50 mg/L.
So let’s do some math. To make the numbers easy, let’s say you drank a liter of Australian wine, taking in 50 mg of quercetin glucuronide.
How much of that gets into your bloodstream? Some studies suggest a bioavailability of less than 1%, which basically means none and should probably put the quercetin hypothesis to bed. But there is some variation here too; it seems to depend on the form of quercetin you ingest.
Let’s say all 50 mg gets into your bloodstream. What blood concentration would that lead to? I’ll spare you the stoichiometry and just say that if we assume the volume of distribution of the compound is restricted to plasma alone, you could achieve concentrations similar to those used in the petri dishes in this study.
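For those who want the stoichiometry anyway, here is the back-of-envelope version. The molecular weight (~478 g/mol for quercetin-3-glucuronide) and the plasma volume (~3 L for an average adult) are my assumed round numbers, not figures from the study.

```python
# Back-of-envelope check: can 50 mg of quercetin glucuronide reach the
# ~10 micromolar range if it is confined to plasma alone?
# Assumptions (mine, not from the study): MW of quercetin-3-glucuronide
# ~478.4 g/mol; adult plasma volume ~3 L; 100% bioavailability.

MW_G_PER_MOL = 478.4       # quercetin-3-glucuronide, approximate
PLASMA_VOLUME_L = 3.0      # typical adult plasma volume, approximate
dose_mg = 50.0             # the "easy math" liter of high-quercetin wine

moles = (dose_mg / 1000.0) / MW_G_PER_MOL           # mol absorbed
conc_umol_per_l = moles / PLASMA_VOLUME_L * 1e6     # micromolar

print(f"{conc_umol_per_l:.1f} micromolar")          # roughly 35 micromolar
```

So under these generous assumptions you clear the 10 micromolar bar comfortably; the catch, as noted above, is that real bioavailability may be under 1%, which would knock this estimate down by two orders of magnitude.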
Of course, if quercetin is really the culprit behind red wine headache, I have some questions: Why aren’t the Amazon reviews of quercetin supplements chock full of warnings not to take them with alcohol? And other foods have way higher quercetin concentration than wine, but you don’t hear people warning not to take your red onions with alcohol, or your capers, or lingonberries.
There’s some more work to be done here – most importantly, some human studies. Let’s give people wine with different amounts of quercetin and see what happens. Sign me up. Seriously.
As for Thanksgiving, it’s worth noting that cranberries have a lot of quercetin in them. So between the cranberry sauce, the Beaujolais, and your uncle ranting about the contrails again, the probability of headache is pretty darn high. Stay safe out there, and Happy Thanksgiving.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
The future of medicine is RNA
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.
Every once in a while, medicine changes in a fundamental way, and we may not realize it while it’s happening. I wasn’t around in 1928 when Fleming discovered penicillin; or in 1953 when Watson, Crick, and Franklin characterized the double-helical structure of DNA.
But looking at medicine today, there are essentially two places where I think we will see, in retrospect, that we were at a fundamental turning point. One is artificial intelligence, which gets so much attention and hype that I will simply say yes, this will change things, stay tuned.
The other is a bit more obscure, but I suspect it may be just as impactful. That other thing is RNA; specifically, small interfering RNA.
I want to start with the idea that many diseases are, fundamentally, a problem of proteins. In some cases, like hypercholesterolemia, the body produces too much protein; in others, like hemophilia, too little.
When you think about disease this way, you realize that our current medications take effect late in the disease game. We have these molecules that try to block a protein from its receptor, prevent a protein from cleaving another protein, or increase the rate that a protein is broken down. It’s all distal to the fundamental problem: the production of the bad protein in the first place.
Enter small interfering RNAs, or siRNAs for short, discovered in 1998 by Andrew Fire and Craig Mello. The two won the Nobel Prize in medicine just 8 years later; that’s a really short time, highlighting just how important this discovery was. In contrast, Karikó and Weissman won the Nobel for mRNA vaccines this year, after inventing them 18 years ago.
siRNAs are the body’s way of targeting proteins for destruction before they are ever created. About 20 base pairs long, siRNAs seek out a complementary target mRNA, attach to it, and call in a group of proteins to destroy it. With the target mRNA gone, no protein can be created.
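The base-pairing logic is simple enough to sketch in a few lines. This is a toy illustration of Watson-Crick complementarity, not anything from the paper; the sequence is made up, and real siRNA design involves much more (overhangs, strand selection, off-target screening).

```python
# Toy illustration: an siRNA guide strand is the reverse complement of
# its target mRNA stretch, which lets it hybridize with that mRNA and
# flag it for destruction. The target sequence below is hypothetical.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide_for(mrna_target: str) -> str:
    """Reverse complement of an RNA sequence (the guide strand)."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna_target))

target = "AUGGCUAGCUUGACGAUCCGA"   # hypothetical 21-nt mRNA stretch
guide = sirna_guide_for(target)
print(guide)

# Sanity check: the guide base-pairs with the target, position by position.
assert all(COMPLEMENT[g] == t for g, t in zip(guide, reversed(target)))
```

The point of the sketch is the one the article makes: once you know the target mRNA sequence, writing down the matching siRNA is mechanical.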
You see where this is going, right? How does high cholesterol kill you? Proteins. How does Staphylococcus aureus kill you? Proteins. Even viruses can’t replicate if their RNA is prevented from being turned into proteins.
So, how do we use siRNAs? A new paper appearing in JAMA describes a fairly impressive use case.
The background here is that higher levels of lipoprotein(a), an LDL-like protein, are associated with cardiovascular disease, heart attack, and stroke. But unfortunately, statins really don’t have any effect on lipoprotein(a) levels. Neither does diet. Your lipoprotein(a) level seems to be more or less hard-coded genetically.
So, what if we stop the genetic machinery from working? Enter lepodisiran, a drug from Eli Lilly. Unlike so many other medications, which are usually found in nature, purified, and synthesized, lepodisiran was created from scratch. It’s not hard. Thanks to the Human Genome Project, we know the genetic code for lipoprotein(a), so inventing an siRNA to target it specifically is trivial. That’s one of the key features of siRNA – you don’t have to find a chemical that binds strongly to some protein receptor, and worry about the off-target effects and all that nonsense. You just pick a protein you want to suppress and you suppress it.
Okay, it’s not that simple. siRNA is broken down very quickly by the body, so it needs to be targeted to the organ of interest – in this case, the liver, since that is where lipoprotein(a) is synthesized. Lepodisiran is targeted to the liver by a special targeting label conjugated to the molecule.
The report is a standard dose-escalation trial. Six patients, all with elevated lipoprotein(a) levels, were started with a 4-mg dose (two additional individuals got placebo). They were intensely monitored, spending 3 days in a research unit for multiple blood draws followed by weekly, and then biweekly outpatient visits. Once they had done well, the next group of six people received a higher dose (two more got placebo), and the process was repeated – six times total – until the highest dose, 608 mg, was reached.
This is an injection, of course; siRNA wouldn’t withstand the harshness of the digestive system. And it’s only one injection. The blood concentration curves show that, within about 48 hours, circulating lepodisiran was no longer detectable.
But check out these results. Remember, this is from a single injection of lepodisiran.
Lipoprotein(a) levels start to drop within a week of administration, and they stay down. In the higher-dose groups, levels are nearly undetectable a year after that injection.
It was this graph that made me sit back and think that there might be something new under the sun. A single injection that can suppress protein synthesis for an entire year? If it really works, it changes the game.
Of course, this study wasn’t powered to look at important outcomes like heart attacks and strokes. It was primarily designed to assess safety, and the drug was pretty well tolerated, with similar rates of adverse events in the drug and placebo groups.
As crazy as it sounds, the real concern here might be that this drug is too good; is it safe to drop your lipoprotein(a) levels to zero for a year? I don’t know. But lower doses don’t have quite as strong an effect.
Trust me, these drugs are going to change things. They already are. In July, The New England Journal of Medicine published a study of zilebesiran, an siRNA that inhibits the production of angiotensinogen, to control blood pressure. Similar story: One injection led to a basically complete suppression of angiotensinogen and a sustained decrease in blood pressure.
I’m not exaggerating when I say that there may come a time when you go to your doctor once a year, get your RNA shots, and don’t have to take any other medication from that point on. And that time may be, like, 5 years from now. It’s wild.
Seems to me that that rapid Nobel Prize was very well deserved.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.
A version of this article appeared on Medscape.com.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.
Every once in a while, medicine changes in a fundamental way, and we may not realize it while it’s happening. I wasn’t around in 1928 when Fleming discovered penicillin; or in 1953 when Watson, Crick, and Franklin characterized the double-helical structure of DNA.
But looking at medicine today, there are essentially two places where I think we will see, in retrospect, that we were at a fundamental turning point. One is artificial intelligence, which gets so much attention and hype that I will simply say yes, this will change things, stay tuned.
The other is a bit more obscure, but I suspect it may be just as impactful. That other thing is
I want to start with the idea that many diseases are, fundamentally, a problem of proteins. In some cases, like hypercholesterolemia, the body produces too much protein; in others, like hemophilia, too little.
When you think about disease this way, you realize that our current medications take effect late in the disease game. We have these molecules that try to block a protein from its receptor, prevent a protein from cleaving another protein, or increase the rate that a protein is broken down. It’s all distal to the fundamental problem: the production of the bad protein in the first place.
Enter small inhibitory RNAs, or siRNAs for short, discovered in 1998 by Andrew Fire and Craig Mello at UMass Worcester. The two won the Nobel prize in medicine just 8 years later; that’s a really short time, highlighting just how important this discovery was. In contrast, Karikó and Weissman won the Nobel for mRNA vaccines this year, after inventing them 18 years ago.
siRNAs are the body’s way of targeting proteins for destruction before they are ever created. About 20 base pairs long, siRNAs seek out a complementary target mRNA, attach to it, and call in a group of proteins to destroy it. With the target mRNA gone, no protein can be created.
You see where this is going, right? How does high cholesterol kill you? Proteins. How does Staphylococcus aureus kill you? Proteins. Even viruses can’t replicate if their RNA is prevented from being turned into proteins.
So, how do we use siRNAs? A new paper appearing in JAMA describes a fairly impressive use case.
The background here is that higher levels of lipoprotein(a), an LDL-like protein, are associated with cardiovascular disease, heart attack, and stroke. But unfortunately, statins really don’t have any effect on lipoprotein(a) levels. Neither does diet. Your lipoprotein(a) level seems to be more or less hard-coded genetically.
So, what if we stop the genetic machinery from working? Enter lepodisiran, a drug from Eli Lilly. Unlike so many other medications, which are usually found in nature, purified, and synthesized, lepodisiran was created from scratch. It’s not hard. Thanks to the Human Genome Project, we know the genetic code for lipoprotein(a), so inventing an siRNA to target it specifically is trivial. That’s one of the key features of siRNA – you don’t have to find a chemical that binds strongly to some protein receptor, and worry about the off-target effects and all that nonsense. You just pick a protein you want to suppress and you suppress it.
Okay, it’s not that simple. siRNA is broken down very quickly by the body, so it needs to be targeted to the organ of interest – in this case, the liver, since that is where lipoprotein(a) is synthesized. Lepodisiran is targeted to the liver by this special targeting label here.
The report is a standard dose-escalation trial. Six patients, all with elevated lipoprotein(a) levels, were started with a 4-mg dose (two additional individuals got placebo). They were intensely monitored, spending 3 days in a research unit for multiple blood draws followed by weekly, and then biweekly outpatient visits. Once they had done well, the next group of six people received a higher dose (two more got placebo), and the process was repeated – six times total – until the highest dose, 608 mg, was reached.
This is an injection, of course; siRNA wouldn’t withstand the harshness of the digestive system. And it’s only one injection. You can see from the blood concentration curves that within about 48 hours, circulating lepodisiran was not detectable.
But check out these results. Remember, this is from a single injection of lepodisiran.
Lipoprotein(a) levels start to drop within a week of administration, and they stay down. In the higher-dose groups, levels are nearly undetectable a year after that injection.
It was this graph that made me sit back and think that there might be something new under the sun. A single injection that can suppress protein synthesis for an entire year? If it really works, it changes the game.
Of course, this study wasn’t powered to look at important outcomes like heart attacks and strokes. It was primarily designed to assess safety, and the drug was pretty well tolerated, with similar rates of adverse events in the drug and placebo groups.
As crazy as it sounds, the real concern here might be that this drug is too good; is it safe to drop your lipoprotein(a) levels to zero for a year? I don’t know. But lower doses don’t have quite as strong an effect.
Trust me, these drugs are going to change things. They already are. In July, The New England Journal of Medicine published a study of zilebesiran, an siRNA that inhibits the production of angiotensinogen, to control blood pressure. Similar story: One injection led to a basically complete suppression of angiotensinogen and a sustained decrease in blood pressure.
I’m not exaggerating when I say that there may come a time when you go to your doctor once a year, get your RNA shots, and don’t have to take any other medication from that point on. And that time may be, like, 5 years from now. It’s wild.
Seems to me that that rapid Nobel Prize was very well deserved.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.
A version of this article appeared on Medscape.com.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.
Every once in a while, medicine changes in a fundamental way, and we may not realize it while it’s happening. I wasn’t around in 1928 when Fleming discovered penicillin; or in 1953 when Watson, Crick, and Franklin characterized the double-helical structure of DNA.
But looking at medicine today, there are essentially two places where I think we will see, in retrospect, that we were at a fundamental turning point. One is artificial intelligence, which gets so much attention and hype that I will simply say yes, this will change things, stay tuned.
The other is a bit more obscure, but I suspect it may be just as impactful. That other thing is
I want to start with the idea that many diseases are, fundamentally, a problem of proteins. In some cases, like hypercholesterolemia, the body produces too much protein; in others, like hemophilia, too little.
When you think about disease this way, you realize that our current medications take effect late in the disease game. We have these molecules that try to block a protein from its receptor, prevent a protein from cleaving another protein, or increase the rate that a protein is broken down. It’s all distal to the fundamental problem: the production of the bad protein in the first place.
Enter small inhibitory RNAs, or siRNAs for short, discovered in 1998 by Andrew Fire and Craig Mello at UMass Worcester. The two won the Nobel prize in medicine just 8 years later; that’s a really short time, highlighting just how important this discovery was. In contrast, Karikó and Weissman won the Nobel for mRNA vaccines this year, after inventing them 18 years ago.
siRNAs are the body’s way of targeting proteins for destruction before they are ever created. About 20 base pairs long, siRNAs seek out a complementary target mRNA, attach to it, and call in a group of proteins to destroy it. With the target mRNA gone, no protein can be created.
You see where this is going, right? How does high cholesterol kill you? Proteins. How does Staphylococcus aureus kill you? Proteins. Even viruses can’t replicate if their RNA is prevented from being turned into proteins.
So, how do we use siRNAs? A new paper appearing in JAMA describes a fairly impressive use case.
The background here is that higher levels of lipoprotein(a), an LDL-like protein, are associated with cardiovascular disease, heart attack, and stroke. But unfortunately, statins really don’t have any effect on lipoprotein(a) levels. Neither does diet. Your lipoprotein(a) level seems to be more or less hard-coded genetically.
So, what if we stop the genetic machinery from working? Enter lepodisiran, a drug from Eli Lilly. Unlike so many other medications, which are usually found in nature, purified, and synthesized, lepodisiran was created from scratch. It’s not hard. Thanks to the Human Genome Project, we know the genetic code for lipoprotein(a), so inventing an siRNA to target it specifically is trivial. That’s one of the key features of siRNA – you don’t have to find a chemical that binds strongly to some protein receptor, and worry about the off-target effects and all that nonsense. You just pick a protein you want to suppress and you suppress it.
Okay, it’s not that simple. siRNA is broken down very quickly by the body, so it needs to be targeted to the organ of interest – in this case, the liver, since that is where lipoprotein(a) is synthesized. Lepodisiran is targeted to the liver by this special targeting label here.
The report is a standard dose-escalation trial. Six patients, all with elevated lipoprotein(a) levels, were started with a 4-mg dose (two additional individuals got placebo). They were intensely monitored, spending 3 days in a research unit for multiple blood draws followed by weekly, and then biweekly outpatient visits. Once they had done well, the next group of six people received a higher dose (two more got placebo), and the process was repeated – six times total – until the highest dose, 608 mg, was reached.
This is an injection, of course; siRNA wouldn’t withstand the harshness of the digestive system. And it’s only one injection. You can see from the blood concentration curves that within about 48 hours, circulating lepodisiran was not detectable.
But check out these results. Remember, this is from a single injection of lepodisiran.
Lipoprotein(a) levels start to drop within a week of administration, and they stay down. In the higher-dose groups, levels are nearly undetectable a year after that injection.
It was this graph that made me sit back and think that there might be something new under the sun. A single injection that can suppress protein synthesis for an entire year? If it really works, it changes the game.
Of course, this study wasn’t powered to look at important outcomes like heart attacks and strokes. It was primarily designed to assess safety, and the drug was pretty well tolerated, with similar rates of adverse events in the drug and placebo groups.
As crazy as it sounds, the real concern here might be that this drug is too good; is it safe to drop your lipoprotein(a) levels to zero for a year? I don’t know. But lower doses don’t have quite as strong an effect.
Trust me, these drugs are going to change things. They already are. In July, The New England Journal of Medicine published a study of zilebesiran, an siRNA that inhibits the production of angiotensinogen, to control blood pressure. Similar story: One injection led to a basically complete suppression of angiotensinogen and a sustained decrease in blood pressure.
I’m not exaggerating when I say that there may come a time when you go to your doctor once a year, get your RNA shots, and don’t have to take any other medication from that point on. And that time may be, like, 5 years from now. It’s wild.
Seems to me that that rapid Nobel Prize was very well deserved.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Even one night in the ED raises risk for death
This transcript has been edited for clarity.
As a consulting nephrologist, I go all over the hospital. Medicine floors, surgical floors, the ICU – I’ve even done consults in the operating room. And more and more, I do consults in the emergency department.
The reason I am doing more consults in the ED is not that the ED docs are getting gun-shy about creatinine increases; it’s that patients are staying for extended periods in the ED despite being formally admitted to the hospital. The phenomenon is known as boarding, and it happens because there are simply not enough beds. You know the scene if you have ever been to a busy hospital: the ED full to breaking, with patients on stretchers in hallways. It can often feel more like a war zone than a place for healing.
This is a huge problem.
The Joint Commission specifies that admitted patients should spend no more than 4 hours in the ED waiting for a bed in the hospital.
That is, based on what I’ve seen, hugely ambitious. But I should point out that I work in a hospital that runs near capacity all the time, and studies – from some of my Yale colleagues, actually – have shown that once hospital capacity exceeds 85%, boarding rates skyrocket.
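That threshold behavior is what basic queueing theory predicts: waiting times grow nonlinearly, not linearly, as utilization approaches 100%. A toy M/M/1 sketch (my illustration, not the model the Yale studies actually used):

```python
# Toy single-server queue (M/M/1): expected time waiting in the queue,
# in units of average service time, blows up as utilization nears 1.
# This is an intuition aid, not the cited studies' methodology.

def mean_wait(utilization: float, service_rate: float = 1.0) -> float:
    """Mean queueing delay W_q = rho / (mu * (1 - rho)) for an M/M/1 system."""
    rho = utilization
    return rho / (service_rate * (1.0 - rho))

for rho in (0.50, 0.85, 0.95, 0.99):
    print(f"occupancy {rho:.0%}: mean wait = {mean_wait(rho):.1f} service times")
```

Going from 50% to 85% occupancy roughly quintuples the expected wait; going from 85% to 99% multiplies it nearly twentyfold again. Hospitals are far more complicated than a single-server queue, but the shape of the curve is the point.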
I want to discuss some of the causes of extended boarding and some solutions. But before that, I should prove to you that this really matters, and for that we are going to dig in to a new study which suggests that ED boarding kills.
To put some hard numbers to the boarding problem, we turn to this paper out of France, appearing in JAMA Internal Medicine.
This is a unique study design. Basically, on a single day – Dec. 12, 2022 – researchers fanned out across France to 97 EDs and started counting patients. The study focused on those older than age 75 who were admitted to a hospital ward from the ED. The researchers then defined two groups: those who were sent up to the hospital floor before midnight, and those who remained in the ED from midnight until at least 8 AM (basically, people forced to sleep in the ED for a night). The middle-ground people who were sent up between midnight and 8 AM were excluded.
The baseline characteristics between the two groups of patients were pretty similar: median age around 86, 55% female. There were no significant differences in comorbidities. That said, comporting with previous studies, people in an urban ED, an academic ED, or a busy ED were much more likely to board overnight.
So, what we have are two similar groups of patients treated quite differently. Not quite a randomized trial, given the hospital differences, but not bad for purposes of analysis.
Here are the most important numbers from the trial:
This difference held up even after adjustment for patient and hospital characteristics. Put another way, you’d need to send 22 patients to the floor instead of boarding in the ED to save one life. Not a bad return on investment.
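That number-needed-to-treat figure is just the inverse of the absolute risk difference between the two groups. A quick sketch, using hypothetical mortality rates (not the study’s published figures) chosen only to be consistent with the quoted NNT of 22:

```python
import math

def nnt(risk_exposed: float, risk_unexposed: float) -> int:
    """Number needed to treat: 1 / absolute risk reduction, rounded up."""
    arr = risk_exposed - risk_unexposed
    return math.ceil(1.0 / arr)

# Hypothetical rates: an absolute mortality difference of ~4.6 percentage
# points between boarders and non-boarders yields the NNT of 22 quoted above.
print(nnt(0.157, 0.111))  # → 22
```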
It’s not entirely clear what the mechanism for the excess mortality might be, but the researchers note that patients kept in the ED overnight were about twice as likely to have a fall during their hospital stay – not surprising, given the dangers of gurneys in hallways and the sleep deprivation that trying to rest in a busy ED engenders.
I should point out that this could be worse in the United States. French ED doctors continue to care for admitted patients boarding in the ED, whereas in many hospitals in the United States, admitted patients are the responsibility of the floor team, regardless of where they are, making it more likely that these individuals may be neglected.
So, if boarding in the ED is a life-threatening situation, why do we do it? What conditions predispose to this?
You’ll hear a lot of talk, mostly from hospital administrators, saying that this is simply a problem of supply and demand. There are not enough beds for the number of patients who need beds. And staffing shortages don’t help either.
However, they never want to talk about the reasons for the staffing shortages, like poor pay, poor support, and, of course, the moral injury of treating patients in hallways.
The issue of volume is real. We could do a lot to prevent ED visits and hospital admissions by providing better access to preventive and primary care and improving our outpatient mental health infrastructure. But I think this framing passes the buck a little.
Another reason ED boarding occurs is the way our health care system is paid for. If you are building a hospital, you have little incentive to build in excess capacity. The most efficient hospital, from a profit-and-loss standpoint, is one that is 100% full as often as possible. That may be fine at times, but throw in a respiratory virus or even a pandemic, and those systems fracture under the pressure.
Let us also remember that not all hospital beds are given to patients who acutely need hospital beds. Many beds, in many hospitals, are necessary to handle postoperative patients undergoing elective procedures. Those patients having a knee replacement or abdominoplasty don’t spend the night in the ED when they leave the OR; they go to a hospital bed. And those procedures are – let’s face it – more profitable than an ED admission for a medical issue. That’s why, even when hospitals expand the number of beds they have, they do it with an eye toward increasing the rate of those profitable procedures, not decreasing the burden faced by their ED.
For now, the band-aid solution might be to better triage individuals boarding in the ED for floor access, prioritizing those of older age, greater frailty, or more medical complexity. But it feels like a stop-gap measure as long as the incentives are aligned to view an empty hospital bed as a sign of failure in the health system instead of success.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
This drug works, but wait till you hear what’s in it
This transcript has been edited for clarity.
As some of you may know, I do a fair amount of clinical research developing and evaluating artificial intelligence (AI) models, particularly machine learning algorithms that predict certain outcomes.
A thorny issue that comes up as algorithms have gotten more complicated is “explainability.” The problem is that AI can be a black box. Even if you have a model that is very accurate at predicting death, clinicians don’t trust it unless you can explain how it makes its predictions – how it works. “It just works” is not good enough to build trust.
It’s easier to build trust when you’re talking about a medication rather than a computer program. When a new blood pressure drug comes out that lowers blood pressure, importantly, we know why it lowers blood pressure. Every drug has a mechanism of action and, for most of the drugs in our arsenal, we know what that mechanism is.
But what if there were a drug – or better yet, a treatment – that worked, and I could honestly say we have no idea how it works? That’s what came across my desk today in what I believe is the largest, most rigorous trial of a traditional Chinese medication in history.
“Traditional Chinese medicine” is an omnibus term that refers to a class of therapies and health practices that are fundamentally different from how we practice medicine in the West.
It’s a highly personalized practice, with practitioners using often esoteric means to choose what substance to give what patient. That personalization makes traditional Chinese medicine nearly impossible to study in the typical randomized trial framework because treatments are not chosen solely on the basis of disease states.
The lack of scientific rigor in traditional Chinese medicine means that it is rife with practices and beliefs that can legitimately be called pseudoscience. As a nephrologist who has treated someone for “Chinese herb nephropathy,” I can tell you that some of the practices may be actively harmful.
But that doesn’t mean there is nothing there. I do not subscribe to the “argument from antiquity” – the idea that because something has been done for a long time it must be correct. But at the same time, traditional and non–science-based medicine practices could still identify therapies that work.
And with that, let me introduce you to Tongxinluo. Tongxinluo literally means “to open the network of the heart,” and it is a substance that has been used for centuries by traditional Chinese medicine practitioners to treat angina, and was approved by China’s state medicine agency in 1996.
Like many traditional Chinese medicine preparations, Tongxinluo is not a single chemical – far from it. It is a powder made from a variety of plant and insect parts, as you can see here.
I can’t imagine running a trial of this concoction in the United States; I just don’t see an institutional review board signing off, given the ingredient list.
But let’s set that aside and talk about the study itself.
While I don’t have access to any primary data, the write-up of the study suggests that it was highly rigorous. Chinese researchers randomized 3,797 patients with ST-elevation MI to take Tongxinluo – four capsules, three times a day for 12 months – or matching placebo. The placebo was designed to look just like the Tongxinluo capsules and, if the capsules were opened, to smell like them as well.
Researchers and participants were blinded, and the statistical analysis was done both by the primary team and an independent research agency, also in China.
And the results were pretty good. The primary outcome, 30-day major cardiovascular and cerebral events, was significantly lower in the intervention group than in the placebo group.
One-year outcomes were similarly good; 8.3% of the placebo group suffered a major cardiovascular or cerebral event in that time frame, compared with 5.3% of the Tongxinluo group. In short, if this were a pure chemical compound from a major pharmaceutical company, well, you might be seeing a new treatment for heart attack – and a boost in stock price.
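From the two one-year event rates quoted above, the standard effect-size conversions fall out directly; a short sketch:

```python
# Effect sizes computed from the one-year event rates stated in the text:
# 8.3% (placebo) vs 5.3% (Tongxinluo).

placebo_risk = 0.083
treated_risk = 0.053

arr = placebo_risk - treated_risk   # absolute risk reduction
rrr = arr / placebo_risk            # relative risk reduction
nnt = 1 / arr                       # number needed to treat

print(f"ARR: {arr:.1%}, RRR: {rrr:.0%}, NNT: {nnt:.0f}")
```

A roughly 36% relative reduction in major events after MI is the kind of effect size that, as the text says, would normally move markets.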
But there are some issues here, generalizability being a big one. This study was done entirely in China, so its applicability to a more diverse population is unclear. Moreover, the quality of post-MI care in this study is quite a bit worse than what we’d see here in the United States, with just over 50% of patients being discharged on a beta-blocker, for example.
But issues of generalizability and potentially substandard supplementary treatments are the usual reasons we worry about new medication trials. And those concerns seem to pale before the big one I have here which is, you know – we don’t know why this works.
Is it the extract of leech in the preparation perhaps thinning the blood a bit? Or is it the antioxidants in the ginseng, or something from the Pacific centipede or the sandalwood?
This trial doesn’t read to me as a vindication of traditional Chinese medicine but rather as an example of missed opportunity. More rigorous scientific study over the centuries that Tongxinluo has been used could have identified one, or perhaps more, compounds with strong therapeutic potential.
Purity of medical substances is incredibly important. Pure substances have predictable effects and side effects. Pure substances interact with other treatments we give patients in predictable ways. Pure substances can be quantified for purity by third parties, they can be manufactured according to accepted standards, and they can be assessed for adulteration. In short, pure substances pose less risk.
Now, I know that may come off as particularly sterile. Some people will feel that a “natural” substance has some inherent benefit over pure compounds. And, of course, there is something soothing about imagining a traditional preparation handed down over centuries, being prepared with care by a single practitioner, in contrast to the sterile industrial processes of a for-profit pharmaceutical company. I get it. But natural is not the same as safe. I am glad I have access to purified aspirin and don’t have to chew willow bark. I like my pure penicillin and am glad I don’t have to make a mold slurry to treat a bacterial infection.
I applaud the researchers for subjecting Tongxinluo to the rigor of a well-designed trial. They have generated data that are incredibly exciting, but not because we have a new treatment for ST-elevation MI on our hands; it’s because we have a map to a new treatment. The next big thing in heart attack care is not the mixture that is Tongxinluo, but it might be in the mixture.
A version of this article first appeared on Medscape.com.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t,” is available now.
This transcript has been edited for clarity.
As some of you may know, I do a fair amount of clinical research developing and evaluating artificial intelligence (AI) models, particularly machine learning algorithms that predict certain outcomes.
A thorny issue that comes up as algorithms have gotten more complicated is “explainability.” The problem is that AI can be a black box. Even if you have a model that is very accurate at predicting death, clinicians don’t trust it unless you can explain how it makes its predictions – how it works. “It just works” is not good enough to build trust.
It’s easier to build trust when you’re talking about a medication rather than a computer program. When a new blood pressure drug comes out that lowers blood pressure, importantly, we know why it lowers blood pressure. Every drug has a mechanism of action and, for most of the drugs in our arsenal, we know what that mechanism is.
But what if there were a drug – or better yet, a treatment – that worked, and yet we could honestly say we have no idea how it works? That’s what came across my desk today in what I believe is the largest, most rigorous trial of a traditional Chinese medication in history.
“Traditional Chinese medicine” is an omnibus term that refers to a class of therapies and health practices that are fundamentally different from how we practice medicine in the West.
It’s a highly personalized practice, with practitioners using often esoteric means to choose what substance to give what patient. That personalization makes traditional Chinese medicine nearly impossible to study in the typical randomized trial framework because treatments are not chosen solely on the basis of disease states.
The lack of scientific rigor in traditional Chinese medicine means that it is rife with practices and beliefs that can legitimately be called pseudoscience. As a nephrologist who has treated someone for “Chinese herb nephropathy,” I can tell you that some of the practices may be actively harmful.
But that doesn’t mean there is nothing there. I do not subscribe to the “argument from antiquity” – the idea that because something has been done for a long time it must be correct. But at the same time, traditional and non–science-based medicine practices could still identify therapies that work.
And with that, let me introduce you to Tongxinluo. Tongxinluo literally means “to open the network of the heart,” and it is a substance that has been used for centuries by traditional Chinese medicine practitioners to treat angina; it was approved by the Chinese state medicine agency for use in 1996.
Like many traditional Chinese medicine preparations, Tongxinluo is not a single chemical – far from it. It is a powder made from a variety of plant and insect parts, as you can see here.
I can’t imagine running a trial of this concoction in the United States; I just don’t see an institutional review board signing off, given the ingredient list.
But let’s set that aside and talk about the study itself.
While I don’t have access to any primary data, the write-up of the study suggests that it was highly rigorous. Chinese researchers randomized 3,797 patients with ST-elevation MI to take Tongxinluo – four capsules, three times a day for 12 months – or matching placebo. The placebo was designed to look just like the Tongxinluo capsules and, if the capsules were opened, to smell like them as well.
Researchers and participants were blinded, and the statistical analysis was done both by the primary team and an independent research agency, also in China.
And the results were pretty good. The primary outcome, the rate of 30-day major cardiovascular and cerebral events, was significantly lower in the intervention group than in the placebo group.
One-year outcomes were similarly good; 8.3% of the placebo group suffered a major cardiovascular or cerebral event in that time frame, compared with 5.3% of the Tongxinluo group. In short, if this were a pure chemical compound from a major pharmaceutical company, well, you might be seeing a new treatment for heart attack – and a boost in stock price.
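To put those one-year event rates in perspective, here is a back-of-the-envelope sketch of the standard effect-size arithmetic, using only the two percentages quoted above (the per-group sample sizes are not needed for these ratios):

```python
# Rough effect-size arithmetic from the one-year event rates quoted above.
placebo_rate = 0.083    # 8.3% major cardiovascular/cerebral events (placebo)
treatment_rate = 0.053  # 5.3% (Tongxinluo)

arr = placebo_rate - treatment_rate  # absolute risk reduction
rr = treatment_rate / placebo_rate   # relative risk
nnt = 1 / arr                        # number needed to treat

print(f"Absolute risk reduction: {arr:.1%}")  # 3.0%
print(f"Relative risk: {rr:.2f}")             # ~0.64
print(f"Number needed to treat: {nnt:.0f}")   # ~33
```

In other words, treating roughly 33 post-MI patients for a year would prevent one major event – a clinically meaningful effect size, if it holds up.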
But there are some issues here, generalizability being a big one. This study was done entirely in China, so its applicability to a more diverse population is unclear. Moreover, the quality of post-MI care in this study is quite a bit worse than what we’d see here in the United States, with just over 50% of patients being discharged on a beta-blocker, for example.
But generalizability and potentially substandard supplementary treatments are the usual reasons we worry about new medication trials. Those concerns pale before the big one here: we don’t know why this works.
Is it the extract of leech in the preparation perhaps thinning the blood a bit? Or is it the antioxidants in the ginseng, or something from the Pacific centipede or the sandalwood?
This trial doesn’t read to me as a vindication of traditional Chinese medicine but rather as an example of missed opportunity. More rigorous scientific study over the centuries that Tongxinluo has been used could have identified one, or perhaps more, compounds with strong therapeutic potential.
Purity of medical substances is incredibly important. Pure substances have predictable effects and side effects. Pure substances interact with other treatments we give patients in predictable ways. Pure substances can be quantified for purity by third parties, they can be manufactured according to accepted standards, and they can be assessed for adulteration. In short, pure substances pose less risk.
Now, I know that may come off as particularly sterile. Some people will feel that a “natural” substance has some inherent benefit over pure compounds. And, of course, there is something soothing about imagining a traditional preparation handed down over centuries, being prepared with care by a single practitioner, in contrast to the sterile industrial processes of a for-profit pharmaceutical company. I get it. But natural is not the same as safe. I am glad I have access to purified aspirin and don’t have to chew willow bark. I like my pure penicillin and am glad I don’t have to make a mold slurry to treat a bacterial infection.
I applaud the researchers for subjecting Tongxinluo to the rigor of a well-designed trial. They have generated data that are incredibly exciting, but not because we have a new treatment for ST-elevation MI on our hands; it’s because we have a map to a new treatment. The next big thing in heart attack care is not the mixture that is Tongxinluo, but it might be in the mixture.
A version of this article first appeared on Medscape.com.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t,” is available now.
AI in medicine has a major Cassandra problem
This transcript has been edited for clarity.
Today I’m going to talk to you about a study at the cutting edge of modern medicine, one that uses an artificial intelligence (AI) model to guide care. But before I do, I need to take you back to the late Bronze Age, to a city located on the coast of what is now Turkey.
Troy’s towering walls made it seem unassailable, but that would not stop the Achaeans and their fleet of black ships from making landfall, and, after a siege, destroying the city. The destruction of Troy, as told in the Iliad and the Aeneid, was foretold by Cassandra, the daughter of King Priam and Priestess of Troy.
Cassandra had been given the gift of prophecy by the god Apollo in exchange for her favors. But after the gift was bestowed, she rejected the bright god and, in his rage, he added a curse to her blessing: that no one would ever believe her prophecies.
Thus it was that when her brother Paris set off to Sparta to abduct Helen, she warned him that his actions would lead to the downfall of their great city. He, of course, ignored her.
And you know the rest of the story.
Why am I telling you the story of Cassandra of Troy when we’re supposed to be talking about AI in medicine? Because AI has a major Cassandra problem.
The recent history of AI, and particularly the subset of AI known as machine learning in medicine, has been characterized by an accuracy arms race.
The electronic health record allows for the collection of volumes of data orders of magnitude greater than what we have ever been able to collect before. And all that data can be crunched by various algorithms to make predictions about, well, anything – whether a patient will be transferred to the intensive care unit, whether a GI bleed will need an intervention, whether someone will die in the next year.
Studies in this area tend to rely on retrospective datasets, and as time has gone on, better algorithms and more data have led to better and better predictions. In some simpler cases, machine-learning models have achieved near-perfect accuracy – Cassandra-level accuracy – as in the reading of chest x-rays for pneumonia, for example.
But as Cassandra teaches us, even perfect prediction is useless if no one believes you, if they don’t change their behavior. And this is the central problem of AI in medicine today. Many people are focusing on accuracy of the prediction but have forgotten that high accuracy is just table stakes for an AI model to be useful. It has to not only be accurate, but its use also has to change outcomes for patients. We need to be able to save Troy.
The best way to determine whether an AI model will help patients is to treat a model like we treat a new medication and evaluate it through a randomized trial. That’s what researchers, led by Shannon Walker of Vanderbilt University, Nashville, Tenn., did in a paper appearing in JAMA Network Open.
The model in question was one that predicted venous thromboembolism – blood clots – in hospitalized children. The model took in a variety of data points from the health record: a history of blood clot, history of cancer, presence of a central line, a variety of lab values. And the predictive model was very good – maybe not Cassandra good, but it achieved an area under the ROC curve (AUC) of 0.90, which indicates very good discrimination between kids who would and would not go on to clot.
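For readers less familiar with AUC: it is the probability that a randomly chosen patient who has the outcome gets a higher risk score than a randomly chosen patient who doesn’t. A minimal sketch of that pairwise-ranking interpretation, using made-up scores and labels purely for illustration:

```python
# What an AUC measures: the probability that a randomly chosen positive
# case is scored higher than a randomly chosen negative case.
# These scores and labels are hypothetical, for illustration only.
from itertools import product

scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]  # model risk scores
labels = [1,   1,   0,   1,   0,   0]    # 1 = developed a clot

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# Count concordant positive/negative pairs (ties count as half).
pairs = list(product(pos, neg))
concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
auc = concordant / len(pairs)
print(f"AUC = {auc:.2f}")  # 0.89 for this toy example
```

An AUC of 0.90 therefore means that 90% of the time, a kid who goes on to clot was ranked as higher risk than a kid who doesn’t.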
But again, accuracy is just table stakes.
The authors deployed the model in the live health record and recorded the results. For half of the kids, that was all that happened; no one actually saw the predictions. For those randomized to the intervention, the hematology team would be notified when the risk for clot was calculated to be greater than 2.5%. The hematology team would then contact the primary team to discuss prophylactic anticoagulation.
This is an elegant approach.
Let’s start with those table stakes – accuracy. The predictions were, by and large, pretty accurate in this trial. Of the 135 kids who developed blood clots, 121 had been flagged by the model in advance. That’s about 90%. The model flagged about 10% of kids who didn’t get a blood clot as well, but that’s not entirely surprising since the threshold for flagging was a 2.5% risk.
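Restated in standard screening terms, using the counts given above (the ~10% false-positive figure is taken directly from the write-up as stated):

```python
# The flagging numbers above, restated as standard screening metrics.
flagged_with_clot = 121  # kids who clotted and had been flagged
total_with_clot = 135    # all kids who developed clots

sensitivity = flagged_with_clot / total_with_clot
false_positive_rate = 0.10  # quoted figure: ~10% of non-clotting kids flagged

print(f"Sensitivity: {sensitivity:.1%}")                       # 89.6%
print(f"Missed cases: {total_with_clot - flagged_with_clot}")  # 14
print(f"Specificity: {1 - false_positive_rate:.0%}")           # ~90%
```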
Given that the model preidentified almost every kid who would go on to develop a blood clot, it would make sense that kids randomized to the intervention would do better; after all, Cassandra was calling out her warnings.
But those kids didn’t do better. The rate of blood clot was no different between the group that used the accurate prediction model and the group that did not.
Why? Why does the use of an accurate model not necessarily improve outcomes?
First of all, a warning must lead to some change in management. Indeed, the kids in the intervention group were more likely to receive anticoagulation, but barely so. There were lots of reasons for this: physician preference, imminent discharge, active bleeding, and so on.
But let’s take a look at the 77 kids in the intervention arm who developed blood clots, because I think this is an instructive analysis.
Six of them did not meet the 2.5% threshold – cases where the model simply missed its mark. Again, accuracy is table stakes.
Of the remaining 71, only 16 got a recommendation from the hematologist to start anticoagulation. Why not more? Well, the model identified some of the high-risk kids on the weekend, and it seems that the study team did not contact treatment teams during that time. That may account for about 40% of these cases. The remainder had some contraindication to anticoagulation.
Most tellingly, of the 16 who did get a recommendation to start anticoagulation, the recommendation was followed in only seven patients.
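The cascade from model flag to changed management can be laid out as a simple funnel, using the counts reported above for the 77 intervention-arm kids who went on to develop clots:

```python
# The drop-off from "developed a clot" to "management actually changed,"
# among the 77 intervention-arm kids who clotted. Counts from the text.
funnel = [
    ("Developed clot (intervention arm)",          77),
    ("Met the 2.5% risk threshold",                71),  # 6 missed by model
    ("Hematologist recommended anticoagulation",   16),
    ("Recommendation actually followed",            7),
]

total = funnel[0][1]
for stage, n in funnel:
    print(f"{stage:<44} {n:>3}  ({n / total:.0%} of clot cases)")
```

Only about 9% of the intervention-arm kids who clotted had their care actually changed by the model – despite 92% of them having been correctly flagged.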
This is the gap between accurate prediction and the ability to change outcomes for patients. A prediction is useless if it is wrong, for sure. But it’s also useless if you don’t tell anyone about it. It’s useless if you tell someone but they can’t do anything about it. And it’s useless if they could do something about it but choose not to.
That’s the gulf that these models need to cross at this point. So, the next time some slick company tells you how accurate their AI model is, ask them if accuracy is really the most important thing. If they say, “Well, yes, of course,” then tell them about Cassandra.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Every click you make, the EHR is watching you
This transcript has been edited for clarity.
When I close my eyes and imagine what it is I do for a living, I see a computer screen.
I’m primarily a clinical researcher, so much of what I do is looking at statistical software, or, more recently, writing grant applications. But even when I think of my clinical duties, I see that computer screen.
The reason? The electronic health record (EHR) – the hot, beating heart of medical care in the modern era. Our most powerful tool and our greatest enemy.
The EHR records everything – not just the vital signs and lab values of our patients, not just our notes and billing codes. Everything. Every interaction we have is tracked and can be analyzed. The EHR is basically Sting in the song “Every Breath You Take.” Every click you make, it is watching you.
Researchers are leveraging that panopticon to give insight into something we don’t talk about frequently: the issue of racial bias in medicine. Is our true nature revealed by our interactions with the EHR?
We’re talking about this study in JAMA Network Open.
Researchers leveraged huge amounts of EHR data from two big academic medical centers, Vanderbilt University Medical Center and Northwestern University Medical Center. All told, there are data from nearly 250,000 hospitalizations here.
The researchers created a metric for EHR engagement. Basically, they summed the number of clicks and other EHR interactions that occurred during a hospitalization and divided by the length of stay in days, creating an average “engagement per day” metric. This number was then categorized into four groups: low, medium, high, and very high engagement.
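The metric itself is straightforward to sketch. The sample data below are hypothetical, and the quartile-based bin boundaries are an assumption for illustration – the study’s actual cut-points aren’t given in the text:

```python
# A sketch of the engagement metric as described: total EHR interactions
# per hospitalization divided by length of stay, then binned.
# Sample data and quartile cut-points are assumptions for illustration.
import statistics

hospitalizations = [
    {"clicks": 1200, "los_days": 3},
    {"clicks": 400,  "los_days": 4},
    {"clicks": 900,  "los_days": 2},
    {"clicks": 300,  "los_days": 5},
]

per_day = [h["clicks"] / h["los_days"] for h in hospitalizations]
cuts = statistics.quantiles(per_day, n=4)  # three quartile boundaries

def category(x):
    if x <= cuts[0]: return "low"
    if x <= cuts[1]: return "medium"
    if x <= cuts[2]: return "high"
    return "very high"

for x in per_day:
    print(f"{x:6.1f} clicks/day -> {category(x)} engagement")
```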
What factors would predict higher engagement? Broadly, patients identifying as a racial or ethnic minority received less engagement – except Black patients, who actually got a bit more engagement.
So, right away we need to be concerned about the obvious implications. Less engagement with the EHR may mean lower-quality care, right? Less attention to medical issues. And if that differs systematically by race, that’s a problem.
But we need to be careful here, because engagement in the health record is not random. Many factors would lead you to spend more time in one patient’s chart vs. another. Medical complexity is the most obvious one. The authors did their best to account for this, adjusting for patients’ age, sex, insurance status, comorbidity score, and social deprivation index based on their ZIP code. But notably, they did not account for the acuity of illness during the hospitalization. If individuals identifying as a minority were, all else being equal, less likely to be severely ill by the time they were hospitalized, you might see results like this.
The authors also restrict their analysis to individuals who were discharged alive. I’m not entirely clear why they made this choice. Most people don’t die in the hospital; the inpatient mortality rate at most centers is 1%-1.5%. But excluding those patients could potentially bias these results, especially if race is, all else being equal, a predictor of inpatient mortality, as some studies have shown.
But the truth is, these data aren’t coming out of nowhere; they don’t exist in a vacuum. Numerous studies demonstrate different intensity of care among minority vs. nonminority individuals. There is this study, which shows that minority populations are less likely to be placed on the liver transplant waitlist.
There is this study, which found that minority kids with type 1 diabetes were less likely to get insulin pumps than were their White counterparts. And this one, which showed that kids with acute appendicitis were less likely to get pain-control medications if they were Black.
This study shows that although life expectancy decreased across all races during the pandemic, it decreased the most among minority populations.
This list goes on. It’s why the CDC has called racism a “fundamental cause of ... disease.”
So, yes, it is clear that there are racial disparities in health care outcomes. It is clear that there are racial disparities in treatments. It is also clear that virtually every physician believes they deliver equitable care. Somewhere, this disconnect arises. Could the actions we take in the EHR reveal the unconscious biases we have? Does the all-seeing eye of the EHR see not only into our brains but into our hearts? And if it can, are we ready to confront what it sees?
F. Perry Wilson, MD, MSCE, is associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
The surprising link between loneliness and Parkinson’s disease
This transcript has been edited for clarity.
On May 3, 2023, Surgeon General Vivek Murthy issued an advisory raising an alarm about what he called an “epidemic of loneliness” in the United States.
Now, I am not saying that Vivek Murthy read my book, “How Medicine Works and When It Doesn’t” – released in January and available in bookstores now – where, in chapter 11, I call attention to the problem of loneliness and its relationship to the exponential rise in deaths of despair. But Vivek, if you did, let me know. I could use the publicity.
No, of course the idea that loneliness is a public health issue is not new, but I’m glad to see it finally getting attention. At this point, studies have linked loneliness to heart disease, stroke, dementia, and premature death.
The UK Biobank is really a treasure trove of data for epidemiologists. I must see three to four studies a week coming out of this mega-dataset. This one, appearing in JAMA Neurology, caught my eye for its focus specifically on loneliness as a risk factor – something I’m hoping to see more of in the future.
The study examines data from just under 500,000 individuals in the United Kingdom who answered a survey including the question “Do you often feel lonely?” between 2006 and 2010; 18.4% of people answered yes. Individuals’ electronic health record data were then monitored over time to see who would get a new diagnosis code consistent with Parkinson’s disease. Through 2021, 2,822 people did – that’s just over half a percent.
So, now we do the statistics thing. Of the nonlonely folks, 2,273 went on to develop Parkinson’s disease. Of those who said they often feel lonely, 549 people did. The raw numbers here, to be honest, aren’t that compelling. Lonely people had an absolute risk for Parkinson’s disease about 0.03% higher than that of nonlonely people. Put another way, you’d need to take over 3,000 lonely souls and make them not lonely to prevent 1 case of Parkinson’s disease.
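To see where the “over 3,000” figure comes from: the number needed to treat is just the reciprocal of the absolute risk difference. A quick sanity check, using the roughly 0.03-percentage-point difference quoted above:

```python
# Back-of-the-envelope check of the numbers above. The absolute risk
# difference (ARD) quoted in the text is about 0.03 percentage points;
# the number needed to treat (NNT) is its reciprocal.

ard = 0.0003        # 0.03% expressed as a proportion
nnt = 1 / ard       # lonely people you'd need to "un-lonely" to prevent 1 case

print(round(nnt))   # 3333 -- consistent with "over 3,000"
```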
Still, the costs of loneliness are not measured exclusively in Parkinson’s disease, and I would argue that the real risks here come from other sources: alcohol abuse, drug abuse, and suicide. Nevertheless, the weak but significant association with Parkinson’s disease reminds us that loneliness is a neurologic phenomenon. There is something about social connection that affects our brain in a way that is not just spiritual; it is actually biological.
Of course, people who say they are often lonely are different in other ways from people who report not being lonely. Lonely people, in this dataset, were younger, more likely to be female, less likely to have a college degree, in worse physical health, and engaged in more high-risk health behaviors like smoking.
The authors adjusted for all of these factors and found that, on the relative scale, lonely people were still about 20%-30% more likely to develop Parkinson’s disease.
So, what do we do about this? There is no pill for loneliness, and God help us if there ever is. Recognizing the problem is a good start. But there are some policy things we can do to reduce loneliness. We can invest in public spaces that bring people together – parks, museums, libraries – and public transportation. We can deal with tech companies that are so optimized at capturing our attention that we cease to engage with other humans. And, individually, we can just reach out a bit more. We’ve spent the past few pandemic years with our attention focused sharply inward. It’s time to look out again.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Overburdened: Health care workers more likely to die by suicide
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study.
If you run into a health care provider these days and ask, “How are you doing?” you’re likely to get a response like this one: “You know, hanging in there.” You smile and move on. But it may be time to go a step further. If you ask that next question – “No, really, how are you doing?” – you might need to carve out some time.
It’s been a rough few years for those of us in the health care professions. Our lives, dominated by COVID-related concerns at home, were equally dominated by COVID concerns at work. On the job, there were fewer and fewer of us around as exploitation and COVID-related stressors led doctors, nurses, and others to leave the profession entirely or take early retirement. Even now, I’m not sure we’ve recovered. Staffing in the hospitals is still a huge problem, and the persistence of impersonal meetings via teleconference – which not only prevent any sort of human connection but, audaciously, run from one into another without a break – robs us of even the subtle joy of walking from one hallway to another for 5 minutes of reflection before sitting down to view the next hastily cobbled together PowerPoint.
I’m speaking in generalities, of course.
I’m talking about how bad things are now because, in truth, they’ve never been great. And that may be why health care workers – people with jobs focused on serving others – are nevertheless at substantially increased risk for suicide.
Analyses through the years have shown that physicians tend to have higher rates of death from suicide than the general population. There are reasons for this that may not be entirely due to work-related stress. Doctors’ suicide attempts are more often lethal – we know what is likely to work, after all.
And, according to this paper in JAMA, it is those people who may be suffering most of all.
The study is a nationally representative sample based on the 2008 American Community Survey. Records were linked to the National Death Index through 2019.
Survey respondents were classified into five categories of health care worker, as you can see here. And 1,666,000 non–health care workers served as the control group.
Let’s take a look at the numbers.
I’m showing you age- and sex-standardized rates of death from suicide, starting with non–health care workers. In this study, physicians have similar rates of death from suicide to the general population. Nurses have higher rates, but health care support workers – nurses’ aides, home health aides – have rates nearly twice that of the general population.
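For the curious, “age- and sex-standardized” here refers to direct standardization: compute the suicide rate within each age-sex stratum of an occupational group, then average those rates using the stratum weights of a common reference population, so groups with different demographic mixes can be compared fairly. A toy sketch with invented numbers (the study’s actual strata, rates, and weights differ):

```python
# Minimal sketch of direct standardization. All numbers are invented
# for illustration and are NOT the study's figures.

# Crude stratum-specific suicide rates (deaths per 100,000 person-years)
# for a hypothetical occupational group.
stratum_rates = {("25-44", "F"): 8.0, ("25-44", "M"): 20.0,
                 ("45-64", "F"): 10.0, ("45-64", "M"): 30.0}

# Share of each stratum in a reference ("standard") population; sums to 1.
standard_weights = {("25-44", "F"): 0.375, ("25-44", "M"): 0.375,
                    ("45-64", "F"): 0.125, ("45-64", "M"): 0.125}

# The standardized rate is the weighted average of the stratum rates.
standardized = sum(stratum_rates[s] * standard_weights[s] for s in stratum_rates)
print(standardized)  # 15.5 deaths per 100,000, on the standard population's mix
```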
Only social and behavioral health workers had rates lower than those in the general population, perhaps because they know how to access life-saving resources.
Of course, these groups differ in a lot of ways – education and income, for example. But even after adjustment for these factors as well as for sex, race, and marital status, the results persist. The only group with even a trend toward lower suicide rates is social and behavioral health workers.
There has been much hand-wringing about rates of physician suicide in the past. It is still a very real problem. But this paper finally highlights that there is a lot more to the health care profession than physicians. It’s time we acknowledge and support the people in our profession who seem to be suffering more than any of us: the aides, the techs, the support staff – the overworked and underpaid who have to deal with all the stresses that physicians like me face and then some.
There’s more to suicide risk than just your job; I know that. Family matters. Relationships matter. Medical and psychiatric illnesses matter. But to ignore this problem when it is right here, in our own house so to speak, can’t continue.
Might I suggest we start by asking someone in our profession – whether doctor, nurse, aide, or tech – how they are doing. How they are really doing. And when we are done listening, we use what we hear to advocate for real change.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study.
If you run into a health care provider these days and ask, “How are you doing?” you’re likely to get a response like this one: “You know, hanging in there.” You smile and move on. But it may be time to go a step further. If you ask that next question – “No, really, how are you doing?” Well, you might need to carve out some time.
It’s been a rough few years for those of us in the health care professions. Our lives, dominated by COVID-related concerns at home, were equally dominated by COVID concerns at work. On the job, there were fewer and fewer of us around as exploitation and COVID-related stressors led doctors, nurses, and others to leave the profession entirely or take early retirement. Even now, I’m not sure we’ve recovered. Staffing in the hospitals is still a huge problem, and the persistence of impersonal meetings via teleconference – which not only prevent any sort of human connection but, audaciously, run from one into another without a break – robs us of even the subtle joy of walking from one hallway to another for 5 minutes of reflection before sitting down to view the next hastily cobbled together PowerPoint.
I’m speaking in generalities, of course.
I’m talking about how bad things are now because, in truth, they’ve never been great. And that may be why health care workers – people with jobs focused on serving others – are nevertheless at substantially increased risk for suicide.
Analyses through the years have shown that physicians tend to have higher rates of death from suicide than the general population. There are reasons for this that may not entirely be because of work-related stress. Doctors’ suicide attempts are more often lethal – we know what is likely to work, after all.
And, according to this paper in JAMA, it is those people who may be suffering most of all.
The study is a nationally representative sample based on the 2008 American Community Survey. Records were linked to the National Death Index through 2019.
Survey respondents were classified into five categories of health care worker, as you can see here. And 1,666,000 non–health care workers served as the control group.
Let’s take a look at the numbers.
I’m showing you age- and sex-standardized rates of death from suicide, starting with non–health care workers. In this study, physicians have similar rates of death from suicide to the general population. Nurses have higher rates, but health care support workers – nurses’ aides, home health aides – have rates nearly twice that of the general population.
Only social and behavioral health workers had rates lower than those in the general population, perhaps because they know how to access life-saving resources.
Of course, these groups differ in a lot of ways – education and income, for example. But even after adjustment for these factors as well as for sex, race, and marital status, the results persist. The only group with even a trend toward lower suicide rates are social and behavioral health workers.
There has been much hand-wringing about rates of physician suicide in the past. It is still a very real problem. But this paper finally highlights that there is a lot more to the health care profession than physicians. It’s time we acknowledge and support the people in our profession who seem to be suffering more than any of us: the aides, the techs, the support staff – the overworked and underpaid who have to deal with all the stresses that physicians like me face and then some.
There’s more to suicide risk than just your job; I know that. Family matters. Relationships matter. Medical and psychiatric illnesses matter. But to ignore this problem when it is right here, in our own house so to speak, can’t continue.
Might I suggest we start by asking someone in our profession – whether doctor, nurse, aide, or tech – how they are doing. How they are really doing. And when we are done listening, we use what we hear to advocate for real change.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study.
If you run into a health care provider these days and ask, “How are you doing?” you’re likely to get a response like this one: “You know, hanging in there.” You smile and move on. But it may be time to go a step further. If you ask that next question – “No, really, how are you doing?” Well, you might need to carve out some time.
It’s been a rough few years for those of us in the health care professions. Our lives, dominated by COVID-related concerns at home, were equally dominated by COVID concerns at work. On the job, there were fewer and fewer of us around as exploitation and COVID-related stressors led doctors, nurses, and others to leave the profession entirely or take early retirement. Even now, I’m not sure we’ve recovered. Staffing in the hospitals is still a huge problem, and the persistence of impersonal meetings via teleconference – which not only prevent any sort of human connection but, audaciously, run from one into another without a break – robs us of even the subtle joy of walking from one hallway to another for 5 minutes of reflection before sitting down to view the next hastily cobbled together PowerPoint.
I’m speaking in generalities, of course.
I’m talking about how bad things are now because, in truth, they’ve never been great. And that may be why health care workers – people with jobs focused on serving others – are nevertheless at substantially increased risk for suicide.
Analyses through the years have shown that physicians tend to have higher rates of death from suicide than the general population. There are reasons for this that may not entirely be because of work-related stress. Doctors’ suicide attempts are more often lethal – we know what is likely to work, after all.
And, according to this paper in JAMA, it is those people who may be suffering most of all.
The study is a nationally representative sample based on the 2008 American Community Survey. Records were linked to the National Death Index through 2019.
Survey respondents were classified into five categories of health care worker, as you can see here. And 1,666,000 non–health care workers served as the control group.
Let’s take a look at the numbers.
I’m showing you age- and sex-standardized rates of death from suicide, starting with non–health care workers. In this study, physicians have similar rates of death from suicide to the general population. Nurses have higher rates, but health care support workers – nurses’ aides, home health aides – have rates nearly twice that of the general population.
Only social and behavioral health workers had rates lower than those in the general population, perhaps because they know how to access life-saving resources.
Of course, these groups differ in a lot of ways – education and income, for example. But even after adjustment for these factors as well as for sex, race, and marital status, the results persist. The only group with even a trend toward lower suicide rates are social and behavioral health workers.
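For readers curious about the mechanics, the age- and sex-standardization described above can be sketched in a few lines: each group's stratum-specific rates are averaged using weights from a common reference population, so differences in age and sex mix can't drive the comparison. All of the numbers below are invented for illustration; they are not the study's data.

```python
# Minimal sketch of direct age- and sex-standardization.
# Hypothetical rates per 100,000 person-years, keyed by (age band, sex).

def standardized_rate(stratum_rates, standard_weights):
    """Directly standardized rate: a weighted average of stratum-specific
    rates, with weights taken from a standard (reference) population."""
    total = sum(standard_weights.values())
    return sum(stratum_rates[s] * w / total
               for s, w in standard_weights.items())

rates_support_workers = {
    ("25-44", "F"): 12.0, ("25-44", "M"): 30.0,
    ("45-64", "F"): 14.0, ("45-64", "M"): 34.0,
}
rates_general = {
    ("25-44", "F"): 6.0, ("25-44", "M"): 16.0,
    ("45-64", "F"): 7.0, ("45-64", "M"): 18.0,
}
# Person-years in the standard population for each stratum.
standard = {
    ("25-44", "F"): 500_000, ("25-44", "M"): 480_000,
    ("45-64", "F"): 420_000, ("45-64", "M"): 400_000,
}

support = standardized_rate(rates_support_workers, standard)
general = standardized_rate(rates_general, standard)
print(round(support, 1), round(general, 1))  # support workers ~2x general here
```

With both groups weighted to the same reference population, the roughly twofold gap that remains reflects the rates themselves, not demographic composition.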
There has been much hand-wringing about rates of physician suicide in the past. It is still a very real problem. But this paper finally highlights that there is a lot more to the health care profession than physicians. It’s time we acknowledge and support the people in our profession who seem to be suffering more than any of us: the aides, the techs, the support staff – the overworked and underpaid who have to deal with all the stresses that physicians like me face and then some.
There’s more to suicide risk than just your job; I know that. Family matters. Relationships matter. Medical and psychiatric illnesses matter. But to ignore this problem when it is right here, in our own house so to speak, can’t continue.
Might I suggest we start by asking someone in our profession – whether doctor, nurse, aide, or tech – how they are doing. How they are really doing. And when we are done listening, we use what we hear to advocate for real change.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Laboratory testing: No doctor required?
This transcript has been edited for clarity.
Let’s assume, for the sake of argument, that I am a healthy 43-year-old man. Nevertheless, I am interested in getting my vitamin D level checked. My primary care doc says it’s unnecessary, but that doesn’t matter, because a variety of direct-to-consumer testing companies will do it without a doctor’s prescription – for a fee, of course.
Is that okay? Should I be able to get the test?
What if instead of my vitamin D level, I want to test my testosterone level, or my PSA, or my cadmium level, or my Lyme disease antibodies, or even have a full-body MRI scan?
These questions are becoming more and more common, because the direct-to-consumer testing market is exploding.
We’re talking about direct-to-consumer testing, thanks to this paper: Policies of US Companies Offering Direct-to-Consumer Laboratory Tests, appearing in JAMA Internal Medicine, which characterizes the testing practices of direct-to-consumer testing companies.
But before we get to the study, a word on this market. Direct-to-consumer lab testing is projected to be a $2 billion industry by 2025, and lab testing megacorporations Quest Diagnostics and Labcorp are both jumping headlong into this space.
Why is this happening? A couple of reasons, I think. First, the increasing cost of health care has led payers to place significant restrictions on what tests can be ordered and under what circumstances. Physicians are all too familiar with the “prior authorization” system that seeks to limit even the tests we think would benefit our patients.
Frustrated with such a system, it’s no wonder that patients are increasingly deciding to go it on their own. Sure, insurance won’t cover these tests, but the prices are transparent and competition actually keeps them somewhat reasonable. So, is this a win-win? Shouldn’t we allow people to get the tests they want, at least if they are willing to pay for it?
Of course, it’s not quite that simple. If the tests are normal, or negative, then sure – no harm, no foul. But when they are positive, everything changes. What happens when the PSA test I got myself via a direct-to-consumer testing company comes back elevated? Well, at that point, I am right back into the traditional mode of medicine – seeing my doctor, probably getting repeat testing, biopsies, and so on – and some payer will be on the hook for that, which is to say that all of us will be on the hook for that.
One other reason direct-to-consumer testing is getting more popular is a more difficult-to-characterize phenomenon which I might call postpandemic individualism. I’ve seen this across several domains, but I think in some ways the pandemic led people to focus more attention on themselves, perhaps because we were so isolated from each other. Optimizing health through data – whether using a fitness tracking watch, meticulously counting macronutrient intake, or ordering your own lab tests – may be a form of exerting control over a universe that feels increasingly chaotic. But what do I know? I’m not a psychologist.
The study characterizes a total of 21 direct-to-consumer testing companies. They offer a variety of services, as you can see here, with the majority in the endocrine space: thyroid, diabetes, men’s and women’s health. A smattering of companies offer more esoteric testing, such as heavy metals and Lyme disease.
Who’s in charge of all this? It’s fairly regulated, actually, but perhaps not in the way you think. The FDA uses its CLIA authority to ensure that these tests are accurate. The FTC ensures that the companies do not engage in false advertising. But no one is minding the store as to whether the tests are actually beneficial either to an individual or to society.
The 21 companies varied dramatically in how they communicated the risks and results of these tests. All of them had a disclaimer that the information does not represent comprehensive medical advice. Fine. But only a minority acknowledged any risks or limitations of the tests. Fewer than half had a statement of HIPAA compliance. And 17 of the 21 provided no information as to whether customers could request that their data be deleted, while 18 of 21 stated that there could be follow-up for abnormal results – though often it was unclear exactly how that would work.
So, let’s circle back to the first question: Should a healthy person be able to get a laboratory test simply because they want to? The libertarians among us would argue certainly yes, though perhaps without thinking through the societal implications of abnormal results. The evidence-based medicine folks will, accurately, state that there are no clinical trials to suggest that screening healthy people with tests like these has any benefit.
But we should be cautious here. This question is scienceable; you could design a trial to test whether screening healthy 43-year-olds for testosterone level led to a significant reduction in overall mortality. It would just take a few million people and about 40 years of follow-up.
And even if it didn’t help, we let people throw their money away on useless things all the time. The only difference between spending money on a useless test and spending it on a useless dietary supplement is that someone has to deal with the result.
So, can you do this right? Can you make a direct-to-consumer testing company that is not essentially a free-rider on the rest of the health care ecosystem?
I think there are ways. You’d need physicians involved at all stages to help interpret the testing and guide next steps. You’d need some transparent guidelines, written in language that patients can understand, for what will happen given any conceivable result – and what costs those results might lead to for them and their insurance company. Most important, you’d need longitudinal follow-up and the ability to recommend changes, retest in the future, and potentially address the cost implications of the downstream findings. In the end, it starts to sound very much like a doctor’s office.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Bad blood: Could brain bleeds be contagious?
This transcript has been edited for clarity.
How do you tell if a condition is caused by an infection?
It seems like an obvious question, right? In the post–van Leeuwenhoek era we can look at whatever part of the body is diseased under a microscope and see microbes – you know, the usual suspects.
Except when we can’t. And there are plenty of cases where we can’t: where the microbe is too small to be seen without more advanced imaging techniques, like with viruses; or when the pathogen is sparsely populated or hard to culture, like Mycobacterium.
Finding out that a condition is the result of an infection is not only an exercise for 19th century physicians. After all, it was 2005 when Barry Marshall and Robin Warren won their Nobel Prize for proving that stomach ulcers, long thought to be due to “stress,” were actually caused by a tiny microbe called Helicobacter pylori.
And this week, we are looking at a study which, once again, begins to suggest that a condition thought to be more or less random – cerebral amyloid angiopathy – may actually be the result of an infectious disease.
We’re talking about this paper, appearing in JAMA, which is just a great example of old-fashioned shoe-leather epidemiology. But let’s get up to speed on cerebral amyloid angiopathy (CAA) first.
CAA is characterized by the deposition of amyloid protein in the brain. While there are some genetic causes, they are quite rare, and most cases are thought to be idiopathic. Recent analyses suggest that somewhere between 5% and 7% of cognitively normal older adults have CAA, but the rate is much higher among those with intracerebral hemorrhage – brain bleeds. In fact, CAA is the second most common cause of bleeding in the brain, after severe hypertension.
An article in Nature highlights cases that seemed to develop after the administration of cadaveric pituitary hormone.
Other studies have shown potential transmission via dura mater grafts and neurosurgical instruments. But despite those clues, no infectious organism has been identified. Some have suggested that the long latent period and difficulty of finding a responsible microbe points to a prion-like disease not yet known. But these studies are more or less case series. The new JAMA paper gives us, if not a smoking gun, a pretty decent set of fingerprints.
Here’s the idea: If CAA is caused by some infectious agent, it may be transmitted in the blood. We know that a decent percentage of people who have spontaneous brain bleeds have CAA. If those people donated blood in the past, maybe the people who received that blood would be at risk for brain bleeds too.
Of course, to really test that hypothesis, you’d need to know who every blood donor in a country was and every person who received that blood and all their subsequent diagnoses for basically their entire lives. No one has that kind of data, right?
Well, if you’ve been watching this space, you’ll know that a few countries do. Enter Sweden and Denmark, with their national electronic health record that captures all of this information, and much more, on every single person who lives or has lived in those countries since before 1970. Unbelievable.
So that’s exactly what the researchers, led by Jingchen Zhao at the Karolinska Institute in Sweden, did. They identified roughly 760,000 individuals in Sweden and 330,000 people in Denmark who had received a blood transfusion between 1970 and 2017.
Of course, most of those blood donors – 99% of them, actually – never went on to have any bleeding in the brain. It is a rare thing, fortunately.
But some of the donors did, on average within about 5 years of the time they donated blood. The researchers characterized each donor as either never having a brain bleed, having a single bleed, or having multiple bleeds. The latter is most strongly associated with CAA.
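The linkage step described above amounts to classifying each donor by their subsequent bleed count and carrying that exposure label onto every recipient of their blood. Here is a minimal sketch of that logic; the record layouts and ids are invented for illustration, not the registries' actual schema.

```python
# Hedged sketch: classify donors by post-donation brain-bleed count,
# then label each transfusion recipient with their donors' exposure.
from collections import Counter

def donor_exposure(bleed_events):
    """Map donor id -> number of post-donation brain bleeds,
    given an iterable of donor ids (one entry per bleed)."""
    return Counter(bleed_events)

def label_recipients(transfusions, bleeds_by_donor):
    """transfusions: (recipient_id, donor_id) pairs.
    Returns recipient_id -> exposure category."""
    order = {"none": 0, "single": 1, "multiple": 2}
    labels = {}
    for recipient, donor in transfusions:
        n = bleeds_by_donor.get(donor, 0)
        cat = "none" if n == 0 else ("single" if n == 1 else "multiple")
        # A recipient keeps the highest-risk category across all units received.
        if order[cat] >= order.get(labels.get(recipient, "none"), 0):
            labels[recipient] = cat
    return labels

bleeds = donor_exposure(["d2", "d3", "d3"])          # d2: 1 bleed, d3: 2 bleeds
labels = label_recipients(
    [("r1", "d1"), ("r2", "d2"), ("r3", "d3"), ("r3", "d1")], bleeds)
print(labels)  # {'r1': 'none', 'r2': 'single', 'r3': 'multiple'}
```

Note that the exposure is defined by events occurring after donation, which is what makes the design clever: at the time of transfusion, nobody could have known which donors would go on to bleed.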
The big question: Would recipients who got blood from individuals who later on had brain bleeds, have brain bleeds themselves?
The answer is yes, though with an asterisk. You can see the results here. The risk of recipients having a brain bleed was lowest if the blood they received was from people who never had a brain bleed, higher if the individual had a single brain bleed, and highest if they got blood from a donor who would go on to have multiple brain bleeds.
All in all, individuals who received blood from someone who would later have multiple hemorrhages were three times more likely to develop bleeds themselves. It’s fairly compelling evidence of a transmissible agent.
Of course, there are some potential confounders to consider here. Whose blood you get is not totally random. If, for example, people with type O blood are just more likely to have brain bleeds, then you could get results like this, as type O tends to donate to type O and both groups would have higher risk after donation. But the authors adjusted for blood type. They also adjusted for number of transfusions, calendar year, age, sex, and indication for transfusion.
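To make the blood-type worry concrete: one classic way to handle a confounder like this is stratification, for example a Mantel-Haenszel rate ratio computed within blood-type strata. The counts below are invented for illustration, and the paper itself used regression adjustment rather than necessarily this exact estimator.

```python
# Hedged sketch: Mantel-Haenszel rate ratio over blood-type strata,
# for person-time data. All counts are made up for illustration.

def mh_rate_ratio(strata):
    """strata: iterable of (exposed_cases, exposed_person_years,
    unexposed_cases, unexposed_person_years) tuples, one per stratum."""
    num = den = 0.0
    for a, pt1, b, pt0 in strata:
        t = pt1 + pt0
        num += a * pt0 / t   # exposed cases, weighted by unexposed time
        den += b * pt1 / t   # unexposed cases, weighted by exposed time
    return num / den

# Strata: blood types O, A, B, AB (hypothetical numbers).
strata = [
    (30, 100_000, 10, 100_000),  # type O
    (24, 90_000,  8,  90_000),   # type A
    (9,  30_000,  3,  30_000),   # type B
    (3,  10_000,  1,  10_000),   # type AB
]
print(mh_rate_ratio(strata))  # -> 3.0 with these made-up counts
```

Because the comparison is made within each blood-type stratum before pooling, a blood type that is both more common among exposed recipients and independently prone to bleeding cannot, by itself, produce the elevated ratio.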
Perhaps most compelling, and most clever, is that they used ischemic stroke as a negative control. Would people who received blood from someone who later had an ischemic stroke themselves be more likely to go on to have an ischemic stroke? No signal at all. It does not appear that there is a transmissible agent associated with ischemic stroke – only the brain bleeds.
I know what you’re thinking. What’s the agent? What’s the microbe, or virus, or prion, or toxin? The study gives us no insight there. These nationwide databases are awesome but they can only do so much. Because of the vagaries of medical coding and the difficulty of making the CAA diagnosis, the authors are using brain bleeds as a proxy here; we don’t even know for sure whether these were CAA-associated brain bleeds.
It’s also worth noting that there’s little we can do about this. None of the blood donors in this study had a brain bleed prior to donation; it’s not like we could screen people out of donating in the future. We have no test for whatever this agent is, if it even exists, nor do we have a potential treatment. Fortunately, whatever it is, it is extremely rare.
Still, this paper feels like a shot across the bow. At this point, the probability has shifted strongly away from CAA being a purely random disease and toward it being an infectious one. It may be time to round up some of the unusual suspects.
Dr. F. Perry Wilson is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
It’s also worth noting that there’s little we can do about this. None of the blood donors in this study had a brain bleed prior to donation; it’s not like we could screen people out of donating in the future. We have no test for whatever this agent is, if it even exists, nor do we have a potential treatment. Fortunately, whatever it is, it is extremely rare.
Still, this paper feels like a shot across the bow. At this point, the probability has shifted strongly away from CAA being a purely random disease and toward it being an infectious one. It may be time to round up some of the unusual suspects.
Dr. F. Perry Wilson is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
How do you tell if a condition is caused by an infection?
It seems like an obvious question, right? In the post–van Leeuwenhoek era we can look at whatever part of the body is diseased under a microscope and see microbes – you know, the usual suspects.
Except when we can’t. And there are plenty of cases where we can’t: where the microbe is too small to be seen without more advanced imaging techniques, like with viruses; or when the pathogen is sparsely populated or hard to culture, like Mycobacterium.
Finding out that a condition is the result of an infection is not only an exercise for 19th-century physicians. After all, it was 2005 when Barry Marshall and Robin Warren won their Nobel Prize for proving that stomach ulcers, long thought to be due to “stress,” were actually caused by a tiny microbe called Helicobacter pylori.
And this week, we are looking at a study which, once again, begins to suggest that a condition thought to be more or less random – cerebral amyloid angiopathy – may actually be the result of an infectious disease.
We’re talking about this paper, appearing in JAMA, which is just a great example of old-fashioned shoe-leather epidemiology. But let’s get up to speed on cerebral amyloid angiopathy (CAA) first.
CAA is characterized by the deposition of amyloid protein in the brain. While there are some genetic causes, they are quite rare, and most cases are thought to be idiopathic. Recent analyses suggest that somewhere between 5% and 7% of cognitively normal older adults have CAA, but the rate is much higher among those with intracerebral hemorrhage – brain bleeds. In fact, CAA is the second-most common cause of bleeding in the brain, second only to severe hypertension.
An article in Nature highlights cases that seemed to develop after the administration of cadaveric pituitary hormone.
Other studies have shown potential transmission via dura mater grafts and neurosurgical instruments. But despite those clues, no infectious organism has been identified. Some have suggested that the long latent period and the difficulty of finding a responsible microbe point to a prion-like disease not yet known. But these studies are more or less case series. The new JAMA paper gives us, if not a smoking gun, a pretty decent set of fingerprints.
Here’s the idea: If CAA is caused by some infectious agent, it may be transmitted in the blood. We know that a decent percentage of people who have spontaneous brain bleeds have CAA. If those people donated blood in the past, maybe the people who received that blood would be at risk for brain bleeds too.
Of course, to really test that hypothesis, you’d need to know who every blood donor in a country was and every person who received that blood and all their subsequent diagnoses for basically their entire lives. No one has that kind of data, right?
Well, if you’ve been watching this space, you’ll know that a few countries do. Enter Sweden and Denmark, with their national health registries that capture all of this information, and much more, on every single person who lives or has lived in those countries since before 1970. Unbelievable.
So that’s exactly what the researchers, led by Jingchen Zhao at the Karolinska Institute in Sweden, did. They identified roughly 760,000 individuals in Sweden and 330,000 people in Denmark who had received a blood transfusion between 1970 and 2017.
Of course, most of those blood donors – 99% of them, actually – never went on to have any bleeding in the brain. It is a rare thing, fortunately.
But some of the donors did, on average within about 5 years of the time they donated blood. The researchers characterized each donor as either never having a brain bleed, having a single bleed, or having multiple bleeds. The latter is most strongly associated with CAA.
The big question: Would recipients who got blood from individuals who later had brain bleeds go on to have brain bleeds themselves?
The answer is yes, though with an asterisk. You can see the results here. The risk of recipients having a brain bleed was lowest if the blood they received was from people who never had a brain bleed, higher if the individual had a single brain bleed, and highest if they got blood from a donor who would go on to have multiple brain bleeds.
All in all, individuals who received blood from someone who would later have multiple hemorrhages were three times more likely to develop brain bleeds themselves. It’s fairly compelling evidence of a transmissible agent.
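The core comparison is a simple one: the risk of a brain bleed among recipients, grouped by their donor’s later bleed history. Here is a minimal sketch of that calculation. The counts below are entirely hypothetical, invented only to illustrate the arithmetic; the study itself reports adjusted hazard ratios, not these figures.

```python
# Hypothetical recipient counts, grouped by the donor's later bleed history.
# All numbers are invented for illustration; the actual study found roughly
# a threefold risk among recipients of blood from multiple-bleed donors.
groups = {
    "donor_no_bleed":        {"recipients": 100_000, "bleeds": 100},
    "donor_single_bleed":    {"recipients":   5_000, "bleeds":   8},
    "donor_multiple_bleeds": {"recipients":   1_000, "bleeds":   3},
}

# Baseline risk: recipients whose donors never had a brain bleed.
baseline = groups["donor_no_bleed"]["bleeds"] / groups["donor_no_bleed"]["recipients"]

for name, g in groups.items():
    risk = g["bleeds"] / g["recipients"]
    print(f"{name}: risk = {risk:.4f}, risk ratio vs. no-bleed donors = {risk / baseline:.1f}")
```

With these made-up counts, the multiple-bleed-donor group shows a risk ratio of 3.0 relative to baseline, mirroring the gradient the paper describes, though the real analysis was a time-to-event (survival) analysis, not a crude risk comparison like this one.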
Of course, there are some potential confounders to consider here. Whose blood you get is not totally random. If, for example, people with type O blood are just more likely to have brain bleeds, then you could get results like this, as type O tends to donate to type O and both groups would have higher risk after donation. But the authors adjusted for blood type. They also adjusted for number of transfusions, calendar year, age, sex, and indication for transfusion.
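To see why adjustment matters, consider the classic epidemiologic approach of stratification: compute the risk ratio within each blood-type stratum, then pool across strata. The sketch below uses a Mantel-Haenszel pooled risk ratio with hypothetical counts; the study itself used regression adjustment rather than this exact method, so this is purely a conceptual illustration.

```python
# Sketch of confounder adjustment by stratification: a Mantel-Haenszel
# risk ratio pooled across blood-type strata. All counts are hypothetical;
# the paper used regression models, not this method.
# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
strata = {
    "type_O":     (6, 2_000, 30, 20_000),   # within-stratum risk ratio = 2.0
    "type_non_O": (3, 1_000, 20, 20_000),   # within-stratum risk ratio = 3.0
}

numerator = denominator = 0.0
for a, n1, c, n0 in strata.values():
    total = n1 + n0
    numerator += a * n0 / total      # exposed cases, weighted
    denominator += c * n1 / total    # unexposed cases, weighted

rr_mh = numerator / denominator
print(f"Mantel-Haenszel pooled risk ratio: {rr_mh:.2f}")
```

The pooled estimate lands between the stratum-specific ratios, which is the point: if blood type were driving the association, the within-stratum comparisons would flatten toward 1.0, and they don’t in this (hypothetical) example.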
Perhaps most compelling, and most clever, is that they used ischemic stroke as a negative control. Would people who received blood from someone who later had an ischemic stroke themselves be more likely to go on to have an ischemic stroke? No signal at all. It does not appear that there is a transmissible agent associated with ischemic stroke – only the brain bleeds.
I know what you’re thinking. What’s the agent? What’s the microbe, or virus, or prion, or toxin? The study gives us no insight there. These nationwide databases are awesome but they can only do so much. Because of the vagaries of medical coding and the difficulty of making the CAA diagnosis, the authors are using brain bleeds as a proxy here; we don’t even know for sure whether these were CAA-associated brain bleeds.
It’s also worth noting that there’s little we can do about this. None of the blood donors in this study had a brain bleed prior to donation; it’s not like we could screen people out of donating in the future. We have no test for whatever this agent is, if it even exists, nor do we have a potential treatment. Fortunately, whatever it is, it is extremely rare.
Still, this paper feels like a shot across the bow. At this point, the probability has shifted strongly away from CAA being a purely random disease and toward it being an infectious one. It may be time to round up some of the unusual suspects.
Dr. F. Perry Wilson is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.