Contact tracing apps work by having your phone exchange signals with other devices nearby. Then, when someone tests positive for COVID-19, the people who were recently in contact with that person are alerted.
It’s a fairly straightforward principle, but things get tricky when you start considering user privacy and technical hurdles.
The basic principle
The basic technology behind contact tracing apps relies on Bluetooth. Think of it as a beacon, sending out signals all around it. Other devices pick up this signal, and the contact is stored somewhere (either on the users’ devices or in an external database; we’ll get to that in a bit).
Then, if someone tests positive for COVID-19, this triggers an alert that gets sent to all the users in the system who were near that person. People who get the notification then monitor themselves and are prioritized for testing if they develop symptoms.
Of course, the beacon signal is more complex than a “Hey, this is me!” identifier: it can be anonymized and protected by a layer of secure cryptography, and there are several approaches to how the data is distributed and how privacy is preserved. Still, this signal is at the core of contact tracing apps.
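To make this more concrete, here is a minimal Python sketch of the beaconing idea. Everything in it (the class names, the 15-minute rotation interval, the in-memory log) is an illustrative assumption rather than the protocol of any real app: each simulated phone broadcasts a short-lived random identifier and quietly logs the identifiers it hears from others.

```python
import secrets
import time

# Illustrative rotation interval; real protocols rotate broadcast IDs every few minutes.
ROTATION_INTERVAL_SECONDS = 15 * 60

def new_ephemeral_id() -> str:
    """Generate a random, unlinkable identifier to broadcast instead of any personal data."""
    return secrets.token_hex(16)

class ContactLogger:
    """Simulates one phone: it broadcasts a rotating random ID and logs the IDs it hears."""

    def __init__(self):
        self.current_id = new_ephemeral_id()
        self.last_rotation = time.time()
        self.heard = []  # (ephemeral_id, timestamp) pairs, stored only on this device

    def broadcast(self) -> str:
        # Rotating the ID prevents anyone from following a single phone around all day.
        if time.time() - self.last_rotation >= ROTATION_INTERVAL_SECONDS:
            self.current_id = new_ephemeral_id()
            self.last_rotation = time.time()
        # A real app would advertise this value over Bluetooth Low Energy;
        # here we simply return what would be advertised.
        return self.current_id

    def record_contact(self, observed_id: str):
        self.heard.append((observed_id, time.time()))

# Two simulated phones passing each other:
alice, bob = ContactLogger(), ContactLogger()
bob.record_contact(alice.broadcast())
alice.record_contact(bob.broadcast())
print(len(alice.heard), len(bob.heard))  # each phone now holds one local contact record
```

The key design choice is that neither phone ever learns a name, phone number, or location; it only remembers random strings it has heard.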
Different types of contact tracing apps
There are dozens of these apps (some still in development, others already in use), with various characteristics. The main differences stem from how much data is tracked and how much privacy is ensured.
For instance, there are centralized and decentralized apps. Centralized apps, such as the ones used in Israel and the NHS app proposed in the UK, store the data on an external server. A third party, such as a tech company or health experts, is given access to this data. This is a good way to trace the spread of COVID-19, but it also means handing your data to a third party. Few people would be comfortable sharing their data in this way, though it would presumably only be temporary, and the discomfort may be outweighed by the importance of fighting the spread of the disease.
Meanwhile, decentralized apps (such as the one in development by Apple and Google) would leave users in charge of their data. The beacon signal would be anonymized and secured through cryptography, and the alerts would be sent automatically. The advantage is that user privacy is protected, as the data is anonymized; the downside is that authorities don’t have access to the underlying information.
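As a rough sketch of how the decentralized approach works, the matching can happen entirely on the phone: the device downloads the identifiers voluntarily shared by diagnosed users and compares them against its own local log, so the server never learns who met whom. The data below is made up for illustration, and real decentralized protocols typically publish compact daily keys from which the broadcast identifiers can be re-derived, rather than raw IDs.

```python
from typing import Iterable, Set

def check_exposure(local_contact_log: Iterable[str], published_ids: Set[str]) -> bool:
    """Decentralized matching: compare the IDs this phone has heard against the IDs
    voluntarily published by diagnosed users. The comparison runs on the device itself."""
    return any(observed_id in published_ids for observed_id in local_contact_log)

# Hypothetical identifiers this phone has heard over the past two weeks...
local_log = ["a3f9c2", "77c18d", "be02f4"]
# ...and hypothetical identifiers uploaded, with consent, by users who tested positive.
published = {"77c18d", "d4105e"}

print(check_exposure(local_log, published))  # True -> show an exposure alert on the device
```

In a centralized design, the same comparison would instead run on the server, which is precisely what gives authorities more insight and users less privacy.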
The apps typically send notifications when someone who has been in proximity to you tests positive for COVID-19, but they can differ in what happens after that. Some stop there, while others ask you to log your symptoms day by day; this has been enforced in a few Asian countries, for instance. Those deemed to be at high risk could be told to stay at home, while others could continue to live outside of a lockdown. Again, this is more useful for tracking the disease but riskier for individual freedom. This is the general tug-of-war of tracing apps: privacy vs. usefulness.
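What counts as a notifiable contact is itself a judgment call, usually based on how long and how close the encounter was. The sketch below illustrates that kind of rule; the 15-minute and signal-strength thresholds are placeholder assumptions, not values used by any particular app, and real apps typically use calibrated risk scores instead.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Encounter:
    duration_minutes: float         # how long the two phones could hear each other
    avg_signal_strength_dbm: float  # rough stand-in for distance (higher = closer)

def should_notify(encounters: List[Encounter],
                  min_minutes: float = 15.0,
                  min_strength_dbm: float = -70.0) -> bool:
    """Illustrative rule: alert only after a sufficiently long, sufficiently close encounter."""
    return any(e.duration_minutes >= min_minutes and
               e.avg_signal_strength_dbm >= min_strength_dbm
               for e in encounters)

# A brief, distant contact vs. a long, close one:
print(should_notify([Encounter(3, -85)]))   # False -> no alert
print(should_notify([Encounter(20, -60)]))  # True  -> prompt the user to watch for symptoms
```

Tuning those thresholds is part of the privacy-versus-usefulness trade-off: looser rules catch more true exposures but also generate more false alarms.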
Privacy concerns
The European Union, the body of countries with the most stringent privacy rules, has issued its own guidance toolbox about what contact tracing apps should and shouldn’t do.
Any app of this sort should be:
- fully compliant with data protection and privacy rules;
- implemented in close coordination with, and approved by, public health authorities;
- installed voluntarily, and dismantled as soon as no longer needed;
- aimed at exploiting the latest privacy-enhancing technological solutions; likely to be based on Bluetooth proximity technology, they do not enable tracking of people’s locations;
- based on anonymised data: they alert people who have been in proximity to an infected person for a certain duration to get tested or self-isolate, without revealing the identity of those infected;
- interoperable, so that citizens are protected even when they cross borders;
- anchored in accepted epidemiological guidance, and reflecting best practice on cybersecurity and accessibility;
- secure and effective.
Welcoming the toolbox, Commissioner for Internal Market Thierry Breton said:
“Contact tracing apps to limit the spread of coronavirus can be useful, especially as part of Member States’ exit strategies. However, strong privacy safeguards are a prerequisite for the uptake of these apps, and therefore their usefulness. While we should be innovative and make the best use of technology in fighting the pandemic, we will not compromise on our values and privacy requirements.”
Anything less than this would severely compromise our privacy and could set us on a slippery slope.
Uncertainty
We’ve never been faced with anything like this before, and we’ve never had such tools at our disposal — which is to say, no one really knows just how effective contact tracing apps will be. Researchers at the University of Oxford’s Big Data Institute have previously suggested that even if false and missed alerts were common, the spread of the virus would still be slowed and people would have to spend less time in quarantine. The existing evidence, while limited, suggests that these apps can help.
But they won’t save us on their own.
Having humans do manual contact tracing is still considered the gold standard, but that would be almost impossible at this scale. However, there could be some marriage of the two approaches. Backed by human contact tracers, apps would help us identify infection chains, direct tests more strategically, and ensure that the viral spread doesn’t go out of control anytime soon — hopefully.
We’re still going nowhere without mass testing capacity and social distancing measures. We’re still defenseless for now, as there is no vaccine or effective treatment. We still have months (or years) ahead of us before we overcome this pandemic. Contact tracing apps can definitely help, but they need to be a part of a larger, realistic, and effective strategy. Otherwise, we’ll just be stuck in an endless lockdown-or-uncontrolled-outbreak cycle.