By Sean McDonald and Bianca Wylie
We’re months into a pandemic and governments around the world are at different stages of their learning and response. Canadian governments, having failed, as many others did, to contain the virus, are now faced with a different kind of policy landscape – one that demands they create a strategy to best manage the time between today and the widespread distribution of a vaccine. As Dr. Anthony Fauci recently mused, when speaking about the broad range of impacts of the virus: “Where is it going to end? We’re still at the beginning of really understanding.”
Failure to contain the virus at the start of the pandemic is not something to brush aside, from a policy perspective; it demands concurrent preparation and capacity building. The impacts Canada is currently experiencing as a result of that failure will provide a remarkable lesson in the necessity of pandemic preparedness as a political issue for the future. But moving forward from this failure and into the day-to-day management of community transmission means mostly one thing: a well-resourced and supported testing and tracing strategy. This is in addition to having plentiful stocks of personal protective equipment (PPE), strong guidance on public health norms (masks, hand-washing, gatherings, etc.), and social safety nets for people who need support to participate in social distancing and self-quarantine.
As reporting in Canada has shown, as in other countries, there are clear institutional locations that should be receiving significant and sustained attention: long-term care homes, prisons, factories, and worker dormitories. There are ways to use the pre-existing health infrastructures across the country to target and proactively test residents, particularly those at higher risk due to a range of factors, including anti-Black racism and the racism experienced by other racialized communities, such as the Filipino community. Beyond these proactive approaches, programs such as CERB and rent relief are two of several measures that ensure people have adequate financial support to be able to stay home and voluntarily stay safe.
Setting aside this mix of targeted and proactive strategies to test and support priority communities, we’re left with questions about how to best manage the test and trace strategy for the broad general public. To Ontario’s credit, as an example, it has made testing available to all. But without an infrastructure to promote, support, and act on that capacity – which Ontario is currently under-using – testing can’t hit its stride in efficacy, particularly as provinces make different calls about what reopening and deconfinement paths look like. As we’ve previously written, there should be much more public and accessible policy work available to Canadians about the targets and triggers for lockdown and the end of lockdown – these are still sorely missing. In the absence of this particular policy work, what else should be happening beyond the general and voluntary public health advice about masks, hand-washing, and gatherings, and in addition to the economic relief programs?
Testing and Tracing
Making testing broadly available to all is laudable and should be the norm across the country. From there, the results of testing become an input to a constant response activity – tracing. As many have recently learned, manual contact tracing is a dependable public health approach to keeping track of people who may have come into contact with someone who has recently tested positive for COVID-19. It’s labour-intensive work, but it’s effective.
Unsurprisingly, since the early days of the pandemic, technology solutions have been invoked to support tracing. One of the most common technologies deployed to date for this purpose is the digital contact tracing app. These are mobile phone apps that are supposed to use various phone-to-phone protocols to alert you if you come into contact with someone who has tested positive for COVID-19. Before getting further into this conversation, a quick recap of two key facts about contact tracing apps’ use to date:
- So far, the heads of every major app-based contact tracing deployment that has come forward – including Singapore, Iceland, and South Korea – have all said the apps played a small role, if any. Public health officials in Israel have raised concerns about negative effects on response efforts, and Australia and the United Kingdom have been beset by a number of stumbling blocks. Said a different way: even if a government does a good job of building and deploying an app inside of strong health systems, which isn’t guaranteed, there’s no indication it’s worth the effort or cost.
- At a more basic level, we simply don’t have enough clarity about the transmission model of COVID-19 to give people reliable indicators of risk. The WHO, just this week, had to walk back a statement on asymptomatic transmission, which it estimates accounts for 16% of cases. Similarly, a number of trusted public health authorities initially suggested that mask usage wasn’t important, whereas recent science suggests that mask usage plays a critical part in recovery efforts. In other words, we don’t have the information necessary to build accurate models of transmission, so any notification of ‘exposure’ is based on an experimental, low-likelihood indicator of infection (at best) – a point the hypothetical calculation below illustrates.
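To see why an app-measured ‘exposure’ is such a weak indicator of infection, a back-of-the-envelope Bayesian calculation helps. Every number below is invented purely for illustration; nobody has reliably measured sensitivity or false-alarm rates for these apps:

```python
# Hypothetical Bayes illustration: every rate below is an assumption
# made for exposition, not a measured property of any deployed app.
prior = 0.01        # assumed share of app-recorded contacts involving real transmission risk
sensitivity = 0.6   # assumed P(notification | transmission-relevant contact)
false_alarm = 0.3   # assumed P(notification | no transmission-relevant contact)

# Total probability that a given recorded contact produces a notification.
p_notified = sensitivity * prior + false_alarm * (1 - prior)

# Bayes' rule: probability that a notification reflects real transmission risk.
p_real_given_notified = (sensitivity * prior) / p_notified
print(f"P(real risk | notified) = {p_real_given_notified:.1%}")  # ~2.0%
```

Under these invented assumptions, roughly 98% of notifications would be false alarms, and without real transmission data nobody can say whether the true numbers are better or worse.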
That may be fine for a typical app, but it’s worth questioning whether the same standards apply to a government app. Deploying a technology isn’t easy in the best of circumstances. Here, without clear, non-technological safeguards, there’s very little likelihood that an app would reach a high enough adoption rate to be useful. So far, no scaled deployment has gotten past 40% user adoption, meaning that any voluntary app can only deliver notifications about a subset of potential risks, because an inadequate number of people are using it – and because both parties to a contact must be running the app for an exposure to register at all. Moving from a voluntary app to compelling or mandating people to use the technology, whether by public authority, operating system update, or through non-governmental actors like employers, opens up a whole range of new issues.
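That adoption arithmetic is worth making concrete. A minimal sketch, assuming app users mix randomly with the rest of the population (a strong simplification): the share of contacts an app can even observe scales with the square of the adoption rate.

```python
# Simplified illustration of voluntary-adoption coverage. Assumes app
# users mix randomly with everyone else (a strong simplification).
def contact_coverage(adoption_rate: float) -> float:
    """Probability that BOTH parties to a random contact are running the app."""
    return adoption_rate ** 2

for adoption in (0.2, 0.4, 0.6):
    print(f"{adoption:.0%} adoption -> ~{contact_coverage(adoption):.0%} "
          "of contacts potentially detectable")

# Output:
# 20% adoption -> ~4% of contacts potentially detectable
# 40% adoption -> ~16% of contacts potentially detectable
# 60% adoption -> ~36% of contacts potentially detectable
```

At the 40% ceiling observed so far, roughly one contact in six is even visible to the system, before accounting for how unreliable Bluetooth proximity is as a proxy for transmission-relevant contact.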
Despite this context, multiple Canadian governments appear to be planning to launch a voluntary COVID-19 ‘exposure notification’ app – a largely experimental technology intervention – into a public health emergency. First, an important distinction is necessary: contact tracing apps raise a number of trust issues because they share your location data, by design, with public health authorities and, in some cases, researchers. Exposure notification apps differ in that they usually do not report information to any central authority; they only send you notifications if you’ve been in some pattern of app-measured proximity to someone who (verifiably) reports that they’ve tested positive.
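For readers curious about the mechanics, here is a heavily simplified sketch of that decentralized design, modelled loosely on the Google/Apple exposure notification approach. The class and method names are our own illustration, not any vendor’s actual API; real protocols add key derivation, signal-strength and duration thresholds, and verification of positive reports.

```python
# Heavily simplified sketch of a decentralized exposure notification flow.
import secrets

class Phone:
    def __init__(self):
        self.my_ids: list[bytes] = []       # identifiers this phone broadcast
        self.heard_ids: set[bytes] = set()  # identifiers heard from nearby phones

    def broadcast_id(self) -> bytes:
        """Generate and 'broadcast' a fresh random identifier (rotated often)."""
        rpi = secrets.token_bytes(16)
        self.my_ids.append(rpi)
        return rpi

    def observe(self, nearby_id: bytes) -> None:
        """Record an identifier overheard from a nearby phone, locally only."""
        self.heard_ids.add(nearby_id)

    def check_exposure(self, published_positive_ids: set[bytes]) -> bool:
        """Match published identifiers of verified cases against local records, on-device."""
        return bool(self.heard_ids & published_positive_ids)

# Two phones pass each other; each hears the other's rotating identifier.
alice, bob = Phone(), Phone()
bob.observe(alice.broadcast_id())
alice.observe(bob.broadcast_id())

# Alice later tests positive and (after verification) uploads only her own
# broadcast identifiers; no location, names, or contact graph is shared.
server_published = set(alice.my_ids)
print("Bob exposed?", bob.check_exposure(server_published))  # True
```

The design choice to notice is that matching happens on the device: the server only ever sees the random identifiers that a verified positive case chooses to publish, never a location trail or contact graph.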
If governments make participation in the use of these apps optional, then the legal basis for analyzing them is more subjective. Typically, we apply higher ethical standards to publicly funded work that involves a justifiable risk to people’s well-being – and one of the ways that we justify important, potentially dangerous risks (like vaccine trials) is by taking the absolute minimum number of risks possible. The terms the law uses to describe how we decide whether taking a risk with someone’s life is justifiable are “necessity” and “proportionality.” Essentially, we allow experimentation on people when it is necessary to solve an important problem, the methods of the experiment are proportional to the needs of the experiment (and the problem it solves), and the patient provides ongoing, informed consent. While there’s a lot of variety in implementation, these are the standards most democracies apply to experimentation.
There is, of course, the argument that private sector actors launch untested technology products all the time – which is true, but that is more a product of the regulatory immaturity of the industry than a desirable public outcome. Private sector markets put the onus on customers to understand the risks created by products; here we see government doing the same thing. We all recognize, by now, that technologies – especially those meant to direct care during a pandemic – affect people’s lives. We should all be asking what standards of care, proof of efficacy, or accountability we expect from these technologies, particularly those developed by government. By making the use of these apps voluntary, governments may argue that the relationship should be governed by the Terms of Service of the application, which, of course, opens the door to more private law models of liability, like tort. In reality, it’s unlikely that anything around the proposed notification app will rise to the level of legal liability – but it’s worth considering, and even applying, the baseline ethical standards we would use if this were happening in a lab first.
These apps are ultimately part of government guidance. As such, we thought we’d start by applying the standard that is used to measure both human subjects research ethics and due process checks on emergency powers: ‘necessity and proportionality’. When people trust others with their basic rights, risks are measured by whether (a) the solution is a required element of an important response function; and (b) the solution is proportional to the problem it is trying to solve.
Necessity
As is often the case with new technology, COVID-19 response technologies have largely been framed around their theoretical potential as opposed to their proven impact – what they might be able to do rather than what they actually do. The public conversation about these apps skipped the efficacy part and charged ahead into the intricacies of privacy engineering and security standards. This apparent disregard for the “why and what,” and over-focus on the “how,” is somewhat unusual in the policy space. There are a lot of important and well-meaning professional conversations to have about the technology – but not in this order (disregarding efficacy), and not with the assumption that any amount of privacy engineering can protect against abuse and a range of harms.
It may be that the urgent and challenging nature of this pandemic is prompting us to consider new technologies in ways we usually wouldn’t. But while the COVID-19 pandemic is unique in scale, it is not unprecedented. Nor is the need to develop guidance for novel responses to public health emergencies. There is already a significant amount of ethical, legal, and operational guidance from the humanitarian response community around deploying technologies (biometrics, data modelling, drones, etc.) in emergencies. Much of this guidance treats governance, transparency, and accountability to the people being served as core requirements for maintaining legitimacy, and almost all of it wrestles with how difficult it can be to use technology to deliver on those requirements, because technology takes humans out of the equation.
Treating exposure notification experiments as an inevitability is particularly bizarre given where we are in the timeline of the pandemic, because of the amount of data we already have about their use to date. They haven’t generally proven useful, and yet, in the absence of adequate public information about them, a new report suggests people would welcome the technology. While Canadians may have an openness to experimentation, without adequate information it is not surprising that people are willing to try anything, given the climate of fear that a pandemic creates. Many of us are anxious for schools and businesses to re-open and jobs to come back safely. The ethics of this kind of public survey deserves a separate treatment entirely – it’s a common issue when doing public polling about new technologies, and frankly, about consent in general. How much information does one need to make an informed decision about the use of these apps?
In summary, it’s hard to see how a COVID-19 app survives a clear necessity analysis: you can’t “need” something that doesn’t have any practical, mathematical, or scientific basis for working. That said, because Canadian governments appear to be leaning towards making this optional, as opposed to requiring people to use it, they don’t have to worry about justifying necessity – unless someone asks why they prioritized this intervention over others. The bigger issue, as a focus on public health outcomes and ethics would suggest, is: if it’s not necessary, is it at least proportional?
Proportionality
Canadian governments appear to be taking the admirable step of being clear that whatever they release will focus exclusively on helping people who have been exposed to people who have tested positive for COVID-19. That, on its face, seems pretty low-risk, certainly within the broader context of the various technologies that governments around the world are deploying.
Giving people health warnings is a serious, regulated practice, and most of the government-proposed technologies take the warning – and the importance of a contextually relevant way to act on that warning – seriously. But it is almost impossible to contain the problems that a technology can cause within the technology itself. It’s equally difficult to ensure that notifications respect the humanity of people during sensitive moments, like receiving health notifications, or to ensure that warnings aren’t used against people. At a practical level, it’s nearly impossible to predict the proportionality of any intervention – especially one with individual and public health impacts – based on the technology alone; context is a major factor.
Second-Order Impacts, Or: What New Problems Might These Apps Unleash?
One argument you may have heard for these apps goes something like: “well, we’re in a bad situation, so how bad an idea is it to at least try them? What’s the harm in that?” Part of this argument looks at the poor track record of the technology to date and says it’s all part of lessons learned – that these lessons can be applied to doing better now that it’s our turn. Taking this line of argument a little further: as with most of the issues we are grappling with, we have pre-existing frameworks to use for guidance.
Governmental abuse hasn’t been the defining problem in most places these apps have been deployed. But the inevitable concern with any government-sponsored technology is that it implicitly condones a practice. It enables and legitimizes it, as well as the actors involved in it. In Canada, this is the inference that Shopify, Google, and Apple have some role to play in our public health policy response because they’ve decided they want one.
Beyond government apps, a wide range of companies, criminals, and even public health responders have launched their own contact tracing tools and platforms. And while there is some scrutiny of those tools as part of the approval process for being hosted in popular application stores, that scrutiny doesn’t account for any of the local or contextual factors that ultimately determine harm. In other words, app store reviews are not built for, or sufficient to assess, the proportionality of the apps they host.
Governments should take the proportionality and second-order impacts of technologies they deploy much more seriously than app stores – and need to launch policy frameworks alongside their public health interventions that recognize and respect the likely impacts of their work. What happens, for example, if employers develop and mandate their own ‘exposure notification’ app? What happens if insurers begin integrating biosurveillance of employees into their rate structures, as Canadians return to work? This is already happening all over the world – and as various Canadian governments join the app store, how will they protect users from the practices we know cause harm in the real world?
Conclusion
Canada is, in many ways, fortunate to be having these conversations. They build on the hard-won experience of a number of national experiments that have jumped out early and aggressively only to fail. We can say that because the people running these public experiments have said as much, and because learning from experimentation is core to the ethos of science, technology, and just about every other mature field.
Take, for example, the Australian contact tracing app – COVIDSafe – which had identical intentions, but was plagued by a range of practical problems on deployment. Or, the Austrian Red Cross’s contact tracing application, which caused significant political and popular blowback, halting adoption and preventing effectiveness – despite coming from the second most trusted brand in Austria. The public policy challenge, in many instances, is getting governments to acknowledge that they are, in fact, conducting an experiment and asking for consideration of the predictable, second-order impacts of their work.
The idea that these apps should be used in the absence of a well-funded traditional public health response shows how far techno-solutionist hype can go. It is difficult – if not impossible – to convincingly answer the question of “why” someone should voluntarily download an exposure notification app, given the technology’s demonstrated lack of efficacy. The silver lining, if there is one, is that there is available testing and, in most places, available care. In this instance, we feel it is important to ask “old” questions of new technologies when assessing their potential. Asking “why” means interrogating the necessity and proportionality of new tools and policy interventions.
As we continue to react to the pandemic in ways that protect the public interest, we need policies that reflect what we know will work, not what we hope will work. If the foundational questions of necessity and proportionality raised in this piece aren’t well considered prior to the elective deployment of an app, the deployment doesn’t just threaten to reduce the impact of the app – it can create new and additional problems, often in ways that cost important institutions legitimacy when we need them most. Though the idea may be to iterate and change how an app works as its impacts are understood, this practice threatens to destabilize the most fundamental element of successful communications and of people following policy directives, particularly voluntary ones: public trust.