Regulatory Considerations for SARS-CoV-2 Serology Test Developers
Given the considerable recent focus on serology assays for the detection of IgG and IgM antibodies to SARS-CoV-2, we wanted to highlight a few areas for consideration. Serology testing can, in theory, provide important information on the prevalence of antibodies to the virus in the general population, helping to support policies on how soon, and to what extent, federal, state and local governments can relax shelter-in-place and social distancing requirements. While it is still unclear whether the presence of antibodies conveys immunity, that is the working premise based on experience with similar viruses.
In order to provide pertinent information to support policy development, the serology tests must be accurate. IVD tests have two components of accuracy – sensitivity and specificity. These values are typically obtained from a clinical study or method comparison study where the IVD test result is compared to the value obtained from a gold standard method or clinical ground truth.
Sensitivity is a measurement of how often a true positive sample will test positive, and specificity is a measurement of how often a true negative sample will test negative. A typical way to look at sensitivity and specificity is in a 2×2 table:
| IVD test under evaluation | Reference method positive | Reference method negative |
|---|---|---|
| Positive IVD result | True positive (A) | False positive (B) |
| Negative IVD result | False negative (C) | True negative (D) |
Sensitivity = A/(A+C) (true positive test results divided by all positive patient samples)
Specificity = D/(B+D) (true negative test results divided by all negative patient samples)
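The two formulas above can be sketched directly in code. This is a minimal illustration using the A/B/C/D cell labels from the table; the study counts below are hypothetical, not from any authorized test.

```python
def sensitivity(a: int, c: int) -> float:
    """True positives (A) divided by all truly positive samples (A + C)."""
    return a / (a + c)


def specificity(b: int, d: int) -> float:
    """True negatives (D) divided by all truly negative samples (B + D)."""
    return d / (b + d)


# Hypothetical study: 95 positives detected and 5 missed (A=95, C=5);
# 90 negatives correctly reported and 10 false positives (B=10, D=90).
print(sensitivity(95, 5))   # 0.95
print(specificity(10, 90))  # 0.9
```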
If a serology test gives a false negative, a sero-positive patient receives a negative result. That is suboptimal, but it only under-estimates the "good news" that the patient has anti-SARS-CoV-2 antibodies. Poor specificity, however, is far more problematic for a serology test: a false-positive result could give a patient a false sense of security that they are actually immune to the virus. That is bad on its own, but it becomes even worse when "prevalence" is low.
Prevalence measures how frequently the antibodies appear in the population. Suppose a test is 95% specific, so there is roughly 1 false positive for every 20 true negatives. If the prevalence of the antibody in the population is 5% (about 1 true positive per 20 true negatives), then roughly 50% of the time a positive result will actually be a false positive: among 20 people tested, a false positive is about as likely as a true positive. This 50% figure is called the positive predictive value (PPV), and it depends on the prevalence. If the prevalence is only 1%, the PPV drops to about 17% (roughly 5 false positives for every true positive). Since the seroprevalence of antibodies to SARS-CoV-2 is currently unknown, we don't know how good the PPV is for a particular serology test, even if the specificity has been measured in a clinical study.
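The dependence of PPV on prevalence can be made concrete with the standard formula PPV = TP / (TP + FP). The sketch below assumes, as the worked example does, a perfectly sensitive test with 95% specificity; both numbers are illustrative.

```python
def ppv(prevalence: float, sensitivity: float = 1.0, specificity: float = 0.95) -> float:
    """Positive predictive value: expected true positives divided by
    expected total positive results, at a given prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# Same test, two different prevalence assumptions:
print(round(ppv(0.05), 2))  # 0.51 -- about a coin flip at 5% prevalence
print(round(ppv(0.01), 2))  # 0.17 -- most positives are false at 1% prevalence
```

The point of the exercise: the test itself has not changed between the two calls, only the population being tested.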
FDA has now authorized six serology tests under Emergency Use Authorization (as IVD test kits, this does not count the myriad of Laboratory Developed Tests that have registered with FDA). In order to provide transparency on test performance, FDA has published the sensitivity and specificity data for each of the authorized test kits, as well as the PPV calculated at 5% prevalence (https://www.fda.gov/medical-devices/emergency-situations-medical-devices/eua-authorized-serology-test-performance).
FDA is also concerned about the lower bound of the 95% confidence interval for serology test sensitivity and specificity determinations. Why? Because the confidence interval measures how reliable those determinations are. A study design with only 10 patient samples will have much wider confidence intervals (and a lower lower bound) than a study design with 100 patient samples. It is simple statistics, and common sense, that a more substantive study provides greater confidence in the result. Although there is a tendency for sponsors to ask "what is good enough for an EUA?", perhaps the question should be "what is good enough to prove that my test is reliable?"
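The effect of sample size on the lower bound can be illustrated with the Wilson score interval, one common method for a binomial proportion (the specific interval method an EUA sponsor uses may differ). Here the same observed 90% specificity yields very different lower bounds at n = 10 versus n = 100; the counts are hypothetical.

```python
import math


def wilson_lower(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score 95% confidence interval
    for an observed proportion of successes out of n trials."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom


# 9/10 vs 90/100: identical point estimate (90%), different confidence.
print(round(wilson_lower(9, 10), 2))    # ~0.60 with only 10 samples
print(round(wilson_lower(90, 100), 2))  # ~0.83 with 100 samples
```

With 10 samples, a claimed 90% specificity is statistically compatible with a true specificity as low as about 60%; with 100 samples, the lower bound rises above 80%.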
For further information on serology testing, see CDC’s web page, “Test for Past Infection (Antibody Test)” at https://www.cdc.gov/coronavirus/2019-ncov/testing/serology-overview.html.
In the current environment, FDA has demonstrated increased flexibility in issuing Emergency Use Authorizations for medical devices and IVDs that support the COVID-19 pandemic response. This has led to some misperception that quality and regulatory requirements have been broadly waived. In actuality, these EUAs are being issued under existing regulations, and the broader FDA regulatory and quality requirements remain in effect. If you require assistance in meeting any of these requirements, ASELL's team of quality, regulatory and other experts is ready to assist. Please contact us at email@example.com.