What Impact is Diagnostic Uncertainty Having on the COVID-19 Pandemic?

27th October, 2020

Testing is seen as a critical component of any strategy to control and suppress the coronavirus pandemic, with much of the focus falling on the number of tests being performed. In a recent paper in PLOS ONE, we explored the impact that uncertainty in testing can have on the pandemic, and explained why uncertainty about the test itself (the diagnostic uncertainty) is something that should not be ignored when using an epidemiological model. In this blog post I will explain why this is so, and why policy planners cannot afford to ignore the paper's implications.

For COVID-19, two main test types are used within the UK government’s testing strategy. The first, and most important, of these is the “have you got it” RT-PCR test, which is used to detect an active infection. These tests work by detecting the genetic material of the SARS-CoV-2 virus in a nasal swab taken from the patient. PCR testing is highly effective; in theory these tests are able to detect a single virus particle on the swab. At the time of writing, these tests are freely available to anyone who self-reports any symptom of the disease.

The other test that is available is the “have you had it” antibody test that can be used to determine whether someone has been exposed to the disease in the past. These tests detect the presence of antibodies in a person’s blood. Within the UK these tests are only available to workers in care professions. Antibody tests are crucial for scientific surveys about the spread of the virus. Earlier in the pandemic, antibody testing was hailed as a ‘game-changing’ test that could allow people to know whether they had some immunity to the virus and could therefore receive an “immunity passport” allowing them to be exempt from lockdown and social-distancing measures. Immunity passporting as a concept has begun to disappear from the public narrative as there is still uncertainty about the extent to which antibodies confer immunity.

There are two main characteristics that are useful when evaluating how good a test is. The first is the sensitivity, which is the proportion of people who actually have the disease who test positive. The second is the specificity, the proportion of people who do not have the disease who test negative.
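To make these definitions concrete, here is a minimal Python sketch using made-up counts (not data from our paper) showing how sensitivity and specificity would be computed from the four possible test outcomes:

```python
# Minimal sketch with made-up example counts (not data from the paper).
true_positives = 95    # infected people who test positive
false_negatives = 5    # infected people who test negative
true_negatives = 980   # uninfected people who test negative
false_positives = 20   # uninfected people who test positive

# Sensitivity: proportion of infected people who test positive.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of uninfected people who test negative.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2%}")  # 95.00%
print(f"Specificity: {specificity:.2%}")  # 98.00%
```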

For PCR tests, the UK Government currently assumes that the active virus tests have a sensitivity and specificity of at least 95%. This means, for example, that if 100 people with the disease were tested, on average up to 5 of them may receive an incorrect result. These estimates are made under ideal laboratory conditions, so the operational sensitivity and specificity may be lower. There are currently no estimates of the operational sensitivity or specificity for COVID PCR testing within the UK.

It may seem confusing that a test that can detect a single virus particle on a swab could possibly give incorrect results. However, there are numerous reasons why this may happen: reactions with non-SARS-CoV-2 genetic material could result in a false positive; poorly performed swabs can lead to false negatives. Incorrect handling of samples or incorrect use of reagents could lead to both false positives and false negatives. False negatives can also occur if a person was only recently infected.


The final metrics to consider are the positive and negative predictive values (PPV and NPV) of the test. The PPV tells us how likely it is that someone who tested positive actually has the disease, and similarly, the NPV tells us how likely it is that someone who tested negative does not have the disease. The PPV and NPV depend heavily on the prevalence of the disease within the population that is being tested, and this can lead to some surprising results.

For example, in Liverpool at the start of October 2020, the estimated prevalence of the disease was around 600 infections per 100,000 people. Therefore, if we were to randomly select 1,000 people from within the city to be tested as part of a doorstep screening programme, as is happening elsewhere within the country, then we would expect that about 6 people will have COVID and 994 will not. If we were to test these 1,000 people with a test that is 99% sensitive and 99% specific, then it is likely that all 6 of those who are infected will test positive. However, of the 994 people who are not infected, around 10 will falsely test positive. Therefore, over half of the roughly 16 people who receive a positive result do not actually have the disease. And this is for a test at the higher end of the government's assumptions. If the tests had sensitivity and specificity equal to 95%, it is likely that we would get around 56 positive results, of which only 10% actually have COVID.
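As a quick check on the arithmetic above, here is a short Python sketch that computes the predictive values for this kind of doorstep screening scenario (the prevalence figure is the Liverpool estimate quoted above; everything else follows directly from the definitions):

```python
# Sketch of the worked example above: predictive values at low prevalence.
def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV) for a test applied to a population with the given prevalence."""
    tp = prevalence * sensitivity              # true positives (as a fraction of those tested)
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    tn = (1 - prevalence) * specificity        # true negatives
    fn = prevalence * (1 - sensitivity)        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

prevalence = 600 / 100_000  # roughly the Liverpool estimate, early October 2020

for sens, spec in [(0.99, 0.99), (0.95, 0.95)]:
    ppv, npv = predictive_values(prevalence, sens, spec)
    print(f"sensitivity={sens:.0%}, specificity={spec:.0%}: PPV={ppv:.0%}, NPV={npv:.2%}")
# 99%/99%: PPV is roughly 37%, so most positives are false positives.
# 95%/95%: PPV drops to roughly 10%, matching the figures above.
```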

"Poorly targeted antibody testing could cause more harm than good as it risks giving susceptible people overconfidence in their viral status"

In such cases there is clearly a question about how a positive result should be interpreted. For example, HIV screening programmes often have low PPV, and there have been cases of suicide after people received positive results through screening programmes, even though it was known in advance that a positive result was more likely to be false than true. Although suicides are, hopefully, unlikely after a positive COVID result, a positive result still means that the individual and all their close contacts need to self-isolate for two weeks. A positive result could also close a business, send a whole school year group home, or cancel a sports fixture, amongst other societal impacts. Because of these impacts, some suggest that positive results from individuals who are asymptomatic or have had no known contact with an infectious individual should be considered suspect until confirmed by a second test.

In areas where prevalence is low, small spikes in cases may simply be the result of false positives. This problem is potentially exacerbated by increased testing in localities in response to small increases in positive tests. Policy decisions that depend on small changes in the number of positive tests may, therefore, be flawed.
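For a rough sense of scale, here is an illustration with assumed numbers (not figures from the paper, and using a specificity at the lower end of the government's stated range) of how expanding testing in a low-prevalence area can inflate the positive count even when the true number of infections is unchanged:

```python
# Rough illustration with assumed numbers (not from the paper):
# how the count of positives grows with testing volume when prevalence is low.
prevalence = 0.001   # assume 0.1% of the tested population is actually infected
sensitivity = 0.95
specificity = 0.95

for tests_per_week in (2_000, 10_000):
    infected = tests_per_week * prevalence
    true_pos = infected * sensitivity
    false_pos = (tests_per_week - infected) * (1 - specificity)
    print(f"{tests_per_week} tests: "
          f"{true_pos:.0f} true positives, {false_pos:.0f} false positives")
# Quintupling the number of tests quintuples the false positives,
# which can look like a genuine rise in cases.
```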

In a test and isolate system, false negative results can also affect how the disease propagates. It has been suggested that mass testing hundreds of thousands of individuals every day could help to control the virus by only requiring those with a positive result to self-isolate. However, our analysis found that such an approach would be ineffective at containing the virus because of those who receive false negative results. Similarly, using a negative result to justify ending a self-isolation period after international travel or close contact with an infected person could exacerbate transmission of the virus. This is particularly true for the latter example, as false negatives are known to occur during the incubation period of the virus.
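The following back-of-the-envelope sketch (a deliberately crude illustration, not the epidemiological model used in our paper, with an assumed number of infectious people) shows how many infectious individuals a single round of mass testing would fail to isolate, even at the government's assumed sensitivity:

```python
# Back-of-the-envelope sketch (not the model from the paper): infectious people
# missed by one round of mass test-and-isolate.
infectious_people = 50_000   # assumed number of currently infectious people
sensitivity = 0.95           # the government's assumed lower bound; operational values may be lower

isolated = infectious_people * sensitivity
missed = infectious_people * (1 - sensitivity)  # false negatives who keep circulating,
                                                # now reassured by a negative result

print(f"Isolated after one round: {isolated:,.0f}")
print(f"Still circulating despite testing: {missed:,.0f}")
```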

Under an immunity passport regime based on antibody testing, we found that if the prevalence of antibodies is low (a reasonable assumption at this stage of the pandemic), it is unlikely that antibody testing at any scale would justify ending current social distancing measures. Poorly targeted antibody testing could cause more harm than good, as it risks giving susceptible people overconfidence in their viral status, meaning that they might behave in a riskier manner than they would if they did not know their status, thereby increasing their chance of infection.
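To illustrate the passporting point with assumed numbers (none of these figures come from the paper; the seroprevalence and test characteristics are purely hypothetical), consider how many passports would go to people who have no antibodies at all:

```python
# Hypothetical illustration (numbers assumed, not from the paper): immunity
# passports issued on the basis of a single antibody test at low seroprevalence.
population = 1_000_000
seroprevalence = 0.05        # assumed: 5% of people have antibodies
sensitivity = 0.98           # assumed antibody test characteristics
specificity = 0.98

with_antibodies = population * seroprevalence
without_antibodies = population - with_antibodies

passports_correct = with_antibodies * sensitivity
passports_wrong = without_antibodies * (1 - specificity)  # susceptible but "passported"

share_wrong = passports_wrong / (passports_correct + passports_wrong)
print(f"Passports issued to people without antibodies: {passports_wrong:,.0f} "
      f"({share_wrong:.0%} of all passports)")
```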

In our paper, we explored the impact diagnostic uncertainty would have on the spread of the virus. We concluded that for testing to be used to relax lockdown measures, testing capacity needs to be sufficiently large and well targeted to be effective. A well targeted test would be focused on people with symptoms or people who have been in contact with known infections, which could be achieved through an effective contact tracing programme. Untargeted mass screening at any capacity would be ineffectual and may prolong the need for lockdown measures.

Read the full article here
