The importance of testing and contact tracing to slow the spread of the novel coronavirus and resume normal life is now well established. The gap between the communities that do it and the ones that don’t is disturbingly grim (see, e.g., South Korea versus Italy). In a population as large as the U.S., contact tracing and alerts will have to be automated with the help of mobile service providers’ geolocation data. The intensive use of data in South Korea has led many commentators to claim that the strategy that has been so effective there cannot be replicated in Western countries with strong privacy laws.
Descriptively, it’s probably true that privacy law and instincts in the U.S. and EU will hinder virus surveillance.
The European Commission’s recent guidance on GDPR’s application to the COVID-19 crisis left a hurdle for member states: EU countries would have to introduce new legislation in order to use telecommunications data for contact tracing, and that legislation would be reviewable by the European Court of Human Rights. No member state has done so, even though nearly all of them have instituted lockdown measures.
Even Germany, which has announced the rollout of a cellphone tracking and alert app, has decided to make its use voluntary. The system will be effective only if enough people opt in. (One study suggests the minimum participation rate would have to be “near universal,” so this does not bode well; the back-of-the-envelope sketch below shows why.)
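To see why voluntary adoption is such a steep hill, consider a simple piece of arithmetic (my own illustration, not taken from the study): a contact event can be traced only if both people involved have installed the app, so under the simplifying assumption of independent, uniform adoption at rate p, only p² of contact events are covered.

```python
# Illustrative sketch (not from the cited study): if adoption is independent
# and uniform at rate p, a contact event is traceable only when BOTH parties
# have installed the app, so coverage of contacts falls off as p squared.
for p in (0.20, 0.40, 0.60, 0.80, 0.95):
    print(f"adoption {p:.0%} -> traceable contacts {p * p:.0%}")
```

Even at 60% adoption, barely a third of contact events (36%) are visible to the system, which is why participation has to approach universality before an opt-in app can meaningfully suppress transmission.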
And in the U.S., privacy advocacy groups like EPIC are already gearing up to challenge the collection of cellphone data by federal and state governments, relying on recent Fourth Amendment precedent (Carpenter v. United States) finding that individuals have a reasonable expectation of privacy in cellphone location data.
And nearly every opinion piece I read from public health experts promoting contact tracing ends with some obligatory handwringing about the privacy and ethical implications. Research universities and units of government that are comfortable advocating for draconian measures of social distancing and isolation find it necessary to stall and consult their IRBs and privacy officers before pursuing options that involve data surveillance.
While ethicists and privacy scholars certainly have something to teach regulators during a pandemic, the coronavirus has something to teach us in return. It has thrown a harsh light on the drawbacks and absurdities of rigid individual control over personal data.
Objections to surveillance lose their moral and logical bearings when the alternatives are out-of-control disease or mass lockdowns. Compared to those, mass surveillance is the most liberty-preserving option. Thus, instead of reflexively trotting out privacy and ethics arguments, we should take the opportunity to understand the order of operations: to work out which rights and liberties are more vital than privacy, so that we know when and why expectations of privacy need to bend. All but the most privacy-sensitive would count health and the liberty to leave one’s house among the most basic human interests, so the COVID-19 lockdowns are testing some of the practices and assumptions baked into our privacy laws.
At the highest level of abstraction, the pandemic should remind us that privacy is, ultimately, an instrumental right. It is meant to achieve certain social goals in fairness, safety, and autonomy. It is not an end in itself.
When privacy is cloaked in the language of fundamental human rights, its instrumental function is obscured. As with other liberties of movement and commerce, treating privacy as something under each individual’s control is a useful rule of thumb so long as that control doesn’t conflict too much with other people’s interests. But the COVID-19 crisis shows that there are circumstances under which privacy as an individual right frustrates the very values of fairness, autonomy, and physical security that it is supposed to support. Privacy authorities and experts at every level need to be as clear and blunt as the experts supporting mass lockdowns: the government can do this, it will have to rely on industry, and we will work through the fallout and secondary problems when people stop dying.
At a minimum, epidemiologists and cellphone service providers should be able to rely on implied consent to data-sharing, just as the tort system allows doctors to presume consent for emergency surgery when a patient’s wishes cannot be ascertained in time. Geoffrey Manne suggested this in an earlier TOTM post about the allocation of information and medical resources:
But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.
Indeed, we should go further than this. There is a moral imperative to override even an express lack of consent when withholding important information would put others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their clients’ and wards’ privacy (e.g., New York’s requirement that doctors notify a patient’s partners about a positive HIV test if the patient fails to do so), the same logic applies at scale to the collection and analysis of data during a pandemic.
Another reason consent is inappropriate at this time is that it mars quantitative studies with selection bias. Medical reporting on the transmission and mortality of COVID-19 has had to rely much too heavily on data from the Diamond Princess cruise ship because, for a long time, it was the only random sample: the only setting in which everybody was screened.
The United States has done a particularly poor job tracking the spread of the virus because, faced with a shortage of tests, the CDC compounded the problem by denying those tests to anybody who didn’t meet specific criteria (a set of symptoms plus either recent travel or known exposure to a confirmed case). These criteria all but guaranteed that our data would suggest coughs and fevers are necessary conditions for coronavirus infection, and they delayed our recognition of community spread; the toy simulation below makes the distortion concrete. If we are able to do antibody testing in the near future to understand who has had the virus in the past, those data will be most useful across swaths of people who have not self-selected into a testing facility.
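As a hypothetical illustration, with made-up parameters rather than real epidemiological figures, the following sketch shows how criterion-based testing bakes its own assumptions into the data: when only symptomatic people can be tested, every confirmed case is symptomatic by construction, while universal screening of the Diamond Princess variety reveals the true asymptomatic share.

```python
import random

random.seed(0)

# Hypothetical parameters for illustration only, not real COVID-19 figures.
N = 100_000
INFECTION_RATE = 0.05        # true prevalence in the population
ASYMPTOMATIC_SHARE = 0.30    # share of true infections with no symptoms
BACKGROUND_SYMPTOMS = 0.05   # healthy people with a cough or fever (colds, flu)

population = []
for _ in range(N):
    infected = random.random() < INFECTION_RATE
    if infected:
        symptomatic = random.random() > ASYMPTOMATIC_SHARE
    else:
        symptomatic = random.random() < BACKGROUND_SYMPTOMS
    population.append((infected, symptomatic))

# Criterion-based testing: only symptomatic people are ever tested, so every
# confirmed case is symptomatic by construction.
confirmed = [(inf, sym) for inf, sym in population if sym and inf]
share_gated = sum(sym for _, sym in confirmed) / len(confirmed)
print(f"Symptomatic share among confirmed cases: {share_gated:.0%}")  # 100%

# Universal screening (the Diamond Princess situation): everyone is tested,
# so asymptomatic infections become visible.
all_infected = [(inf, sym) for inf, sym in population if inf]
share_screened = sum(sym for _, sym in all_infected) / len(all_infected)
print(f"Symptomatic share among all infections:  {share_screened:.0%}")  # ~70%
```

The gated sample would tell an analyst that symptoms are a necessary condition of infection; the screened sample shows that nearly a third of infections would have been invisible to a symptom-based testing regime.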
If consent is not an appropriate concept for privacy during a pandemic, might there be a defect in its theory even outside times of crisis? I have argued in the past that privacy should be understood as a collective interest in risk management, like negligence law, rather than as a property-style right. The public health response to COVID-19 helps illustrate why. The right to privacy is different from other liberties because it directly conflicts with another fundamental right: namely, the right to access information and knowledge. One person’s objection to contact tracing (or any other collection and distribution of data) necessarily conflicts with another’s interest in knowing who was in that person’s proximity during a critical period.
This puts privacy on very different footing from other rights, like the right to free movement. Generally, my right to travel in public space does not have to interfere with other people’s rights. It may interfere if, for example, I drive on the wrong side of the street, but the conflict is not inevitable. With a few restrictions and rules of coordination, there is ample opportunity for people to enjoy public spaces as they wish without forcing policymakers to decide between competing uses. Thus, when we suspend the right to free movement in unusual times like these, times when one person’s movement in public space does cause significant detriment to others, we can be confident that the liberty will be restored once the threat has subsided.
Privacy, by contrast, is inevitably at odds with a demonstrable desire by another person or firm to access information they find valuable. Perhaps this is why ethicists and regulators find it difficult to overcome privacy objections: when public health experts insist that privacy conflicts with valuable information flows, a privacy advocate can say, “yes, exactly.”
We can improve the theoretical underpinnings of privacy law by embracing the fact that privacy is instrumental: a means (sometimes an effective one) to achieve other ends. If we are trying to achieve certain goals through its use, goals in equity, fairness, and autonomy, we should increase our effort to understand which types of data use implicate those outcomes. Fortunately, that work is already advancing at a fast clip in debates about socially responsible AI. The next step would be to assess whether individual control tends to support the good uses and reduce the bad ones. If our policies can ensure that machine learning applications are sufficiently “fair,” and if we can agree on what fairness entails, lawmakers can begin the fruitful and necessary work of shifting privacy law away from prohibitions on data collection and sharing and toward limits on its use in the areas where individual control is counterproductive.