Coronavirus antibody studies and what they allegedly show have triggered fierce debates, further confusing public understanding. ProPublica’s health reporter Caroline Chen is here to offer some clarity around these crucial surveys.
ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.
In the past two weeks, researchers across America have begun announcing results from studies showing that there have been many more coronavirus infections in their communities than were previously recorded.
Findings have come in from Santa Clara County, California, as well as Los Angeles, New York, Chelsea, Massachusetts, and Miami-Dade County, Florida. The debates began immediately. What did the study results actually mean? If more people were infected than previously known, did that mean the death rate is actually lower than previously thought? Is the coronavirus actually more like the flu, after all? And are we close to “herd immunity,” meaning enough people are infected that the virus won’t spread easily anymore?
These studies all were based on antibody tests, which are diagnostics that can look in a person’s blood and see if there is evidence of prior infection. In the past month, as these tests have reached the market, researchers have launched large-scale studies, known as sero-surveys (sero is short for serology, the study of blood serum). By running these surveys, scientists are finally able to start estimating how many people have been infected, which can give us information about how deadly the disease is and where the disease was most concentrated geographically.
It’s a pity that some of the first of these studies conducted in the U.S. have been dogged by controversy. Within hours of results being announced, researchers started to pick apart the studies’ methodologies, arguing over whether the surveys were well designed, which added to confusion over whether the results could be trusted. While the scrutiny is well warranted, my fear is that the public might lose trust in future survey results, when in fact, antibody studies are going to be critically important in helping us better understand the coronavirus and how to fight this pandemic.
But I’m hopeful, because there are many such surveys in the works that can learn from the shortcomings of the initial attempts. We’re all going to be hearing far more about antibody tests and surveys — and maybe even participating in them — in the coming months. So here’s a primer on what they do, how they should be properly wielded and how you, a critical reader (or journalist), can interpret a study that’s hot off the presses.
Antibody studies can be used to answer more questions than you might think.
When the coronavirus emerged, the first type of diagnostic that scientists raced to produce was a test that could detect an active infection. Those tests will continue to be necessary, but they can only catch the virus red-handed. Once a patient has recovered, that kind of test won’t come back positive anymore.
In the U.S., especially at the start of the outbreak, there was a paucity of such tests — and even now, there aren’t enough to test every person who has only mild symptoms, let alone identify people who are asymptomatic carriers.
So that means the case counts that we see reported every day are certainly an undercount. The question is: how much of an undercount? The only way to know is to test a random sample of a given population and see who has antibodies — proteins in the blood that indicate past infections.
Once you know the percentage of people who have been infected (the fancy word for that is “sero-prevalence”), you can calculate what’s known as the “infection fatality rate.”
Bear with me here as we wade through some jargon, because I think it’s worth your time. So far, the death rates you’ve seen in headlines have largely been case fatality rates: the number of deaths reported divided by the number of cases confirmed with a diagnostic test. That’s all we’ve been able to report so far. You should expect the case fatality rate to be higher than the infection fatality rate, because far more people have been infected than have been tested. We’ll get back to what we’re learning about the infection fatality rate in available studies later on, but for now, just know that this is one reason why sero-surveys are so useful to us.
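To make the two rates concrete, here is a minimal sketch in Python. Every number below is invented for a hypothetical town; none of them come from any real study.

```python
# Hypothetical illustration of case fatality rate (CFR) vs. infection
# fatality rate (IFR). All figures are made up for the example.

population = 100_000        # hypothetical town
confirmed_cases = 2_000     # residents who tested positive on a diagnostic test
deaths = 100                # deaths among confirmed cases
sero_prevalence = 0.10      # a sero-survey finds antibodies in 10% of residents

cfr = deaths / confirmed_cases                       # deaths per confirmed case
estimated_infections = sero_prevalence * population  # infections implied by the survey
ifr = deaths / estimated_infections                  # deaths per estimated infection

print(f"CFR: {cfr:.1%}")  # CFR: 5.0%
print(f"IFR: {ifr:.1%}")  # IFR: 1.0%
```

Because the sero-survey catches infections that diagnostic testing missed, the denominator grows, and the IFR comes out lower than the CFR.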
But there’s more you can learn from a sero-survey. Dan Larremore, an assistant professor at the University of Colorado Boulder, whose recent work has focused on designing antibody surveys, said you can use sero-surveys to find out if there are certain neighborhoods that have been harder hit than others. Or, by surveying specific populations, you can study questions like: “How much transmission is there within a household?” or “What’s the role of kids in all of this?”
Setting up a sero-survey correctly means you need to test a random population — easier said than done.
For now, most of the studies being set up around the country, as well as nationwide ones being conducted by the National Institutes of Health and the Centers for Disease Control and Prevention, are focused on asking the most basic question, which is, what percentage of a given population has been infected?
To do that properly, researchers need to test a random sample of the population. One of the main criticisms of some recent studies is that their results could have been biased because of how participants were recruited. Take, for example, the study conducted in Santa Clara, California. Researchers at Stanford put out ads on Facebook, asking people to volunteer to participate.
“The problem is there are people who will think, ‘Oh, yeah, I had this nasty flu, or cough, or whatever, and I think I had it.’ And if you said to them, ‘Would you like to get tested?’ They would say, ‘Abso-frickin-lutely!’” said Marm Kilpatrick, a professor at the University of California at Santa Cruz who studies infectious diseases. Conversely, people who felt totally healthy could be less inclined to participate. “So there’s a differential excitement to go get tested, and if that leads to the first group being at a higher chance of being participants in the study, then you’ve just totally blown your estimates.”
Contrary to Kilpatrick’s concern, Dr. Jay Bhattacharya, senior author of the Santa Clara study, said in an email that while “volunteer bias is certainly a potential problem in any survey that recruits participants the way we did … in our study, the evidence points in the direction of healthy volunteer bias” because people in “wealthier and healthier” ZIP codes signed up faster. Bhattacharya said his team made adjustments in its calculations to represent the county properly by ZIP code, race and sex, and thinks it is “still likely underestimating prevalence because of healthy volunteer bias.”
In New York City, researchers tested shoppers at grocery stores and big box stores. That method is still not perfectly random: You’re only testing the subset of people who are out shopping in person. “You’re not sampling people who are too old, or high risk, who don’t want to shop for themselves,” Larremore said. “You’re also sampling predominantly from people who are old enough to go shopping, or who feel that they may have been infected and think they’re safe enough.”
The ideal way to conduct a sero-survey, according to Natalie Dean, assistant professor of biostatistics at the University of Florida, would be to randomly select addresses from a database of the population you want to survey and then send a team of researchers door to door to collect samples. The World Health Organization’s guidance for such studies recommends inviting all people who live in the household to participate in the study, including children.
But sending teams door to door is labor intensive and potentially also a contagion risk, not to mention that people staying home may not want to let someone in at this time. So around the country, different cities and states are trying different methods. Miami-Dade researchers partnered with Florida Power & Light to randomly generate phone numbers and invite people to come to 10 drive-thru testing locations. Preliminary results released on April 24, based on two weeks of testing and about 1,400 participants, estimated that about 6% of the Miami-Dade population had antibodies.
The county plans to keep running the survey on an ongoing basis. “Repeated cross-sectional studies — where they’re repeating it every week — are valuable, even if there’s some sort of bias, because you can look at trends,” Dean said.
Larremore is looking into a finger-prick test that captures a drop of blood on a special type of paper, which could potentially be mailed to participants in a sero-survey being planned in Colorado. The dried blood could then later be analyzed for antibodies back in the lab. If this works, Larremore said, that could further help to reduce bias, because people could participate from the safety of their homes.
Test accuracy can skew results in some pretty surprising ways.
Another key question for any sero-survey is how accurate the test was. Tons of antibody tests have hit the market over the past few weeks, and their accuracy is still being scrutinized. Not all tests have the same degree of accuracy.
Even a test that is very good can give out more false positives than true positives when the prevalence of a disease is very low in a population.
Let’s say you’re running a sero-survey among 1,000 people and only 4% of the population is actually infected. Presume the test correctly identifies positives 100% of the time, meaning it is 100% “sensitive” in scientific parlance.
There are 1,000 people in your sero-survey.
With a 4% infection rate, the test would accurately identify those 40 people who are positive.
But say the test is 95% “specific,” meaning that it returns false positives 5% of the time. Then among the 960 people who are truly negative, 48 people would get a false positive.
In this scenario, more people would get a false positive result than a true positive.
So when you are running a sero-survey in a community where a small percentage of the population has been infected, you have to worry about many of your positive results being false positives, explained Andrew Gelman, a professor of statistics at Columbia University.
You can have more confidence in the signal you’re getting when there’s a higher percentage of the population that’s been infected, as in a situation like New York City, because the number of true positives would drown out a smaller number of false positives, Gelman said.
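The walk-through above is easy to reproduce in a few lines of Python; a 20% case is added to illustrate Gelman's point about higher-prevalence populations like New York City.

```python
# Expected true and false positives in a sero-survey, given the test's
# sensitivity (true-positive rate) and specificity (true-negative rate).

def survey_counts(n, prevalence, sensitivity, specificity):
    infected = n * prevalence
    true_positives = infected * sensitivity
    false_positives = (n - infected) * (1 - specificity)
    return true_positives, false_positives

# Low prevalence (4%), as in the walk-through above:
tp, fp = survey_counts(1000, 0.04, 1.00, 0.95)
print(f"{tp:.0f} true positives vs. {fp:.0f} false positives")  # 40 vs. 48

# Higher prevalence (20%), roughly New York City's estimate:
tp, fp = survey_counts(1000, 0.20, 1.00, 0.95)
print(f"{tp:.0f} true positives vs. {fp:.0f} false positives")  # 200 vs. 40
```

At 4% prevalence the false positives outnumber the true ones; at 20%, the true positives dominate, which is why the signal from a hard-hit city is more trustworthy.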
Unfortunately, New York didn’t actually share much information on how accurate its tests were when Gov. Andrew Cuomo first announced the findings of its study on April 23, so the experts I called said they didn’t have much to scrutinize. “My confidence is in the Wadsworth lab test,” health commissioner Dr. Howard Zucker said at the press briefing the following day, referring to New York state’s public health lab, “which has unbelievable sensitivity and specificity.”
Forget the headlines, your city is nowhere near herd immunity.
As more of these studies read out in the future, there are probably going to be a lot of headlines that say: “study finds [X] times more people in [CITY/STATE] infected than confirmed case counts,” or more vaguely, “Coronavirus infections more common than previously thought.”
These headlines may be accurate, but that does not mean that your city or state is close to “herd immunity,” which is when the vast majority of a given population have been infected. In such situations, the virus has a hard time infecting the remaining people, because there aren’t enough carriers to reach them.
In order to achieve herd immunity, scientists say that a community would need to have at least 60% of its population infected. That’s the lowest estimate I’ve been told; other scientists have told me 80% to 90%. The reason this percentage isn’t precisely known is that it depends on factors like exactly how contagious the virus is, and whether people who have been infected are immune forever or lose immunity after a while, which researchers are also furiously working to figure out.
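The textbook back-of-the-envelope formula behind these thresholds is 1 - 1/R0, where R0 is the average number of people one case infects in a fully susceptible population. The sketch below assumes uniform mixing and lasting immunity, both of which are simplifications.

```python
# Simplified herd-immunity threshold: the fraction of a population that
# must be immune so each case infects, on average, fewer than one other person.

def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for r0 in (2.5, 5.0):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
# R0 = 2.5 gives 60%; R0 = 5.0 gives 80%, bracketing the range quoted above.
```

A more contagious virus (higher R0) pushes the threshold up, which is one reason scientists' estimates vary.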
None of the studies I’ve seen so far have reported a number anywhere near that high. The highest rate I’ve seen is in Chelsea, Massachusetts, the epicenter of the coronavirus outbreak in that state. Researchers at Massachusetts General Hospital tested 200 pedestrians and found about a third had antibodies.
The other way to achieve herd immunity is via a vaccine, which is far safer and doesn’t involve millions of people getting sick. But developing vaccines is a slow process, so achieving herd immunity that way won’t happen any time soon.
There are two types of death rates. Most people are mixing them up.
Another thing I’ve seen some people say, when some of the study results came out, is that the coronavirus is far less deadly than we thought it was.
A columnist wrote that antibody testing “proves we’ve been had!” adding: “We’ve been told that the true death rate is 7.4% in New York. … We were told that this was worse than the flu. … But none of these ‘truths’ turns out to be so. The death rate in New York State isn’t 7.4%, it is actually 0.75%.”
This columnist is mixing up the case fatality rate and the infection fatality rate. There has never been an abundance of diagnostic tests in New York, which means mostly very sick patients are the ones who’ve been tested. As of April 24, according to the State Department of Health, 282,143 people had tested positive, and 16,599 of those people had died. That translates to a case fatality rate of 5.9%.
(As a side note, there are many reasons why the case fatality rate is a very squishy estimate. The denominator depends both on how many tests are available and how many people are seeking testing. The numerator is also shaky — for one, many people are dying at home without getting tested, and the extent to which deaths are undercounted is still unknown. Moreover, we don’t yet know the outcome, whether recovery or death, for many patients who have been identified as positive.)
On April 23, Cuomo announced preliminary data from the state’s sero-survey, saying that 13.9% of state residents had tested positive for antibodies. In New York City, it was about 21%. The state is continuing to test residents in order to generate an ongoing series of “snapshots” of the levels of infection. Cuomo had updated numbers by April 27 showing huge regional variation.
Kilpatrick, from UC Santa Cruz, said that if the estimates from New York stand up to scrutiny, the infection fatality rate in New York City would be approximately 0.8%.
He told me that is not very surprising, because scientists have been able to get some estimates of infection fatality rates using data from enclosed populations where nearly everyone got tested — on cruise ships. Epidemiologists at the London School of Hygiene and Tropical Medicine, for example, analyzed data from the Diamond Princess, the ill-fated ship on which more than 700 passengers got infected. Researchers adjusted for the fact that cruise passengers are older than average and estimated the coronavirus’ infection fatality ratio as 0.6%.
Remember, the IFR is not inherent to the virus — how old and healthy your population is and how many ICU beds were available for patients also will affect this number for your region.
Stop comparing this to the flu. Without a coronavirus vaccine, we are far more vulnerable.
Now let’s talk about the flu. Comparisons to the flu are like a many-headed hydra, and they roared back last week with a vengeance.
The estimates I’ve seen for influenza IFR range from about 0.04% on the low end to 0.14% on the high end. So if the IFR for this coronavirus ends up being around 0.5%, that’s still many times worse than the flu.
But that’s not the main problem. At the end of the day, wherever the coronavirus fatality rate ends up, it doesn’t change the fact that we don’t have any immunity to the virus, which is a critical factor in why we’ve had to behave differently in our response to it.
Marc Lipsitch, head of the Harvard T.H. Chan School of Public Health’s Center for Communicable Disease Dynamics, has estimated that ultimately 20% to 60% of the population could be infected with COVID-19. By comparison, because of immunity provided by flu shots, only about 10% to 20% of the population gets sick with influenza every year, according to Kilpatrick.
Kilpatrick sketched out what this meant: “If it’s five times deadlier than the seasonal flu, and three times as many people are going to get it, that means we’re going to get 15 times as many deaths. And 15 times 30,000, which is the middle-of-the-road kind of a seasonal flu year, that’s 450,000 deaths — about half a million deaths — that’s a pretty big, scary number, I think.”
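Kilpatrick's back-of-the-envelope math, spelled out; the multipliers are his rough figures, not precise estimates.

```python
# Kilpatrick's rough projection, as quoted above.
typical_flu_deaths = 30_000   # a middle-of-the-road U.S. seasonal flu year
deadliness_multiple = 5       # "five times deadlier than the seasonal flu"
infection_multiple = 3        # "three times as many people are going to get it"

projected_deaths = typical_flu_deaths * deadliness_multiple * infection_multiple
print(f"{projected_deaths:,}")  # 450,000
```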
There are additional reasons why comparing the flu to the coronavirus isn’t apples to apples. We’re two to three months into the coronavirus pandemic in the U.S. By comparison, the typical flu season lasts many months. So comparing current deaths from the coronavirus to a complete flu season doesn’t make sense.
Dean, from the University of Florida, also notes that the discussion about exactly how deadly COVID-19 is doesn’t change the reality of how many people have died. While it’s important that New York City’s sero-survey has helped to quantify the number of people who have had mild infections, that doesn’t change the fact that, as of April 27, about one out of every 500 residents of New York City has died from this virus. (This includes deaths that New York City deems likely to be due to COVID-19, despite not having a lab-confirmed test.)
“We know that this disease can completely decimate health care systems, and that’s important to keep in mind in terms of how we respond,” Dean said.
Antibody tests aren’t ready to be used to issue “immunity passports.”
As antibody tests become more widely available, there’ll naturally be a temptation to start using the tests for ourselves on an individual basis, to determine if we’re immune and can go about our lives, free of the paranoia and fear that have been plaguing us for the past two months.
But it’s too early for that. Besides the issue of potential false positives, scientists haven’t yet figured out exactly what level of protection an individual has after being infected and whether the protection lasts forever (like with chickenpox) or wanes after a while. The World Health Organization issued a scientific brief last week warning that detection of antibodies alone shouldn’t serve as a basis for an “immunity passport” allowing an individual to assume they are totally protected from reinfection.
So for now, the antibody tests are best used in these population-wide surveys, to better understand the spread of the disease, how it’s being transmitted and regional infection fatality rates.
There are far more surveys to come. You could be part of one.
Sero-surveys have only just begun. Many of those that are soon launching appear to be robust and thoughtfully designed, such as in Indiana, where the State Department of Health has said it will test at least 20,000 Hoosiers in four phases over the next year. Participants would be randomly selected, by invitation only, “to ensure that the sampling is representative of the population,” the department said.
While everyone is eager to know the results of these studies, many researchers I spoke to also said they hoped that there could be a better balance reached between sharing results quickly and publishing full information. So far, while key findings from the studies done in Los Angeles and New York State have been announced, their authors haven’t yet published many details about their methods.
“I think there should be more pushback when people are not providing their methodology,” said Dean, from the University of Florida. “They shouldn’t be running to the press. You should explain what you did. How do we know what you did, if it’s credible or not?”
For all the criticism that the Santa Clara study has received, Larremore says he’s “thankful that the researchers put the preprint out there, so the community could help them correct it.” (A preprint is a draft research paper, shared publicly before it has been peer-reviewed and published in a scientific journal.)
Bhattacharya, the author, said his team “received hundreds of constructive comments on our preprint from scholars around the world” and is now updating its paper. The new version will be “substantially better as a result of this worldwide peer-review.”
Overall, he said, “the open science model has really worked well.”
It’s always easier to criticize studies than to run them. Just a few weeks ago, in the U.S., we had no antibody survey results to look at, and now we have some data. I’m hopeful that as more and more studies are done, researchers will be able to discard bad data, confirm good information, start to track trends and gather intel on this virus, so we are better equipped to make wise, evidence-based decisions on how to fight the disease at local and state levels, as a country and as a global community.
Republished with permission.