The Flaws of Bluetooth Contact Tracing Technology for COVID-19 Containment

29 Apr 2020

Google and Apple have recently partnered together to help battle COVID-19 spread with contact tracing technology.

The stated goal of this alliance is to protect people from the virus and hopefully bring society back into motion sooner rather than later. 

The means to achieve that ambitious goal are fairly simple: leverage Bluetooth technology to alert smartphone users about the status of people they have shared a space with, and encourage exposed people to self-quarantine, thereby stopping further spread.

Both companies are looking to implement application programming interfaces (APIs) that would enable interoperability between iOS and Android devices whose users have downloaded the verified applications from their respective app stores. 

The ultimate goal would be to build Bluetooth-based contact tracing directly into the underlying operating systems.

While this idea may be compelling enough to spark a discussion about the efficient use of technology for virus containment, there is little proof and plenty of doubt that it will work as designed, to say nothing of the user privacy and security implications such a solution would present.

Reviewing the Processes of Contact Tracing Apps

The first use of Bluetooth for COVID-19 containment is credited to an app called TraceTogether, an open-source project that is a likely prototype for what Google and Apple are working on. 

The application’s work model can be described in a few steps:

  1. Phones that have the app and are close to each other for some time (indicating a human contact) exchange Bluetooth signals. 
  2. Both phones log the transmitted and received codes, also called temporary contact numbers (TCNs). 
  3. If the user has contracted the virus, they can request a unique confirmation code from their healthcare provider, so that the app can upload their logs to a public database.
  4. The server then sends that data to every phone in the system, and each app checks it against the codes collected while interacting with other devices over the last two weeks.
  5. If there’s a match in the records, the user who interacted with an infected person gets a notification alert.
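
The steps above can be sketched in a few lines of Python. This is a minimal illustration of the TCN exchange-and-match idea, not the actual TraceTogether implementation; all class and function names are hypothetical.

```python
import secrets

def new_tcn() -> str:
    """Generate a random temporary contact number (TCN)."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.sent = []      # TCNs this phone has broadcast (step 2)
        self.received = []  # TCNs heard from nearby phones (step 1)

    def exchange(self, other):
        """Simulate a Bluetooth contact: both phones log each other's TCN."""
        mine, theirs = new_tcn(), new_tcn()
        self.sent.append(mine)
        other.received.append(mine)
        other.sent.append(theirs)
        self.received.append(theirs)

def exposed(phone, published):
    """Steps 4-5: check locally logged TCNs against the public database."""
    return any(tcn in published for tcn in phone.received)

alice, bob, carol = Phone(), Phone(), Phone()
alice.exchange(bob)  # Alice and Bob share a space

# Step 3: Bob tests positive and uploads the TCNs he broadcast
published = set(bob.sent)

print(exposed(alice, published))  # True: Alice was near Bob
print(exposed(carol, published))  # False: Carol never met Bob
```

Note that the matching happens on the phone itself: the server only distributes the published codes, which is what lets the design avoid collecting location data.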

Starting from step one, we have to assume that Bluetooth must be turned on at all times to transmit and receive codes. According to the developers, the information about interactions is stored on the phones for 21 days. 
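
The stated retention window implies a simple pruning step on every device. A minimal sketch, assuming the developers' 21-day figure (the function name is illustrative):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)  # per the developers' stated retention window

def prune(log, now):
    """Drop logged TCNs older than the retention window."""
    return [(t, tcn) for t, tcn in log if now - t <= RETENTION]

now = datetime(2020, 4, 29)
log = [(now - timedelta(days=30), "old-tcn"),
       (now - timedelta(days=5), "fresh-tcn")]

print([tcn for _, tcn in prune(log, now)])  # ['fresh-tcn']
```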

They also note that location data is not collected. But there are a couple of details, either overlooked or ignored on purpose, that have to be brought into the spotlight.

Bluetooth Is an Inconsistent Proximity Detecting Tool

Bluetooth can facilitate the exchange of TCNs at distances far exceeding the recommended six feet if there is no interference in the exchange channel, which is the air. 

A ping could happen between two devices belonging to neighbors separated by a wall, a floor, or sometimes even a street. Conversely, a weak received signal strength indicator (RSSI) could mean no ping between two real contacts if there is interference in their "radio channel". 

The real world, where people and objects move constantly, makes the air a "dynamic channel" in which radio waves are modified by the simplest movement.

This phenomenon is similar to the way sound fades when you put a pillow over your device's speaker: even though you remain at the same distance from your phone, the sound is different. Radio signals are attenuated by obstacles in much the same way. 

This important detail is often overlooked, as application developers rarely account for the channel's physical properties and the environment in which devices interact. 

People could meet in a grocery store, a parking lot, or any other semi-public place, where the radio signal would be affected far more. Additionally, if both users keep their phones in back pockets or bags, there is a good chance the app will misjudge the real distance, log no contact, and give people a false sense of security.
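
To see why attenuation matters, consider how apps typically turn RSSI into distance. A common approach is the log-distance path-loss model; the sketch below uses illustrative calibration values (a reference RSSI of -59 dBm at 1 m and a free-space path-loss exponent of 2), not the constants of any real app.

```python
def estimated_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: estimated distance in metres from RSSI.
    tx_power_dbm is the expected RSSI at 1 m (a calibration constant)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Open air: a phone roughly 2 m away might read about -65 dBm
open_air = estimated_distance(-65.0)   # ~2.0 m

# Same 2 m, but the phone is in a pocket or bag: body and fabric
# absorb roughly 10 dB, so the reading drops to about -75 dBm
in_pocket = estimated_distance(-75.0)  # ~6.3 m, so the app sees "far away"

print(round(open_air, 1), round(in_pocket, 1))
```

A 10 dB loss more than triples the estimated distance, so a genuine close contact can be classified as safely distant; the reverse happens when a neighbor behind a thin wall reads "close".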

And as we all know by now, the coronavirus can remain in the body for a long time, and some infected people show no symptoms at all. Such carriers would not know their health status and would never request the confirmation code that triggers an alert. 

Plus, we have to remember that this is a very elusive virus that can spread without people standing close to each other, for example through interacting with contaminated surfaces and objects and then touching our faces. 

This is yet another issue Bluetooth tracing technology wouldn’t be able to solve.

The Human Factor and Its Influence on the Tracing Apps

Let’s look at the list once again.

The effectiveness of the app's procedure can be derailed at step three, when a user decides not to disclose their infection for personal reasons, or even at "step zero", if not enough people participate to provide a sufficient amount of data.

In Singapore, the law requires residents to assist the health ministry in mapping out past movements, providing the timelines and locations collected by the apps. But what would happen if the same system were introduced at a larger scale, in the United States, for example? Or in another country that wouldn't even ask people to opt in? 

For more examples of COVID-19 containment tactics, you can look over the blog post called “Is Governmental Surveillance for COVID-19 Containment a Step Too Far?”.

Other contact tracing apps have their own specifications for what counts as a "contact event". One app might register a contact event only when two devices stay within close range for around fifteen minutes. That raises further questions: why shouldn't a three-minute chat count as a contact event? What would be a logical time threshold for contact logging? Harvesting the IDs of every pedestrian you walk past seems like overkill.
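
The duration threshold is essentially an arbitrary filter over the ping log. A minimal sketch of that logic, assuming a fifteen-minute threshold and treating pings more than two minutes apart as separate encounters (both values are illustrative):

```python
from datetime import datetime, timedelta

MIN_DURATION = timedelta(minutes=15)  # app-specific; 15 min is one common choice
MAX_GAP = timedelta(minutes=2)        # pings further apart start a new encounter

def contact_events(pings):
    """Group Bluetooth pings from one device into encounters, keeping
    only encounters that last at least MIN_DURATION."""
    events, start, last = [], None, None
    for t in sorted(pings):
        if start is None or t - last > MAX_GAP:
            if start is not None and last - start >= MIN_DURATION:
                events.append((start, last))
            start = t
        last = t
    if start is not None and last - start >= MIN_DURATION:
        events.append((start, last))
    return events

base = datetime(2020, 4, 29, 12, 0)
# 20 minutes of pings every 30 s: one logged contact event
long_chat = [base + timedelta(seconds=30 * i) for i in range(40)]
# a 3-minute chat: silently discarded, even if exposure occurred
short_chat = [base + timedelta(seconds=30 * i) for i in range(6)]

print(len(contact_events(long_chat)))   # 1
print(len(contact_events(short_chat)))  # 0
```

Whatever constants an app picks, everything below the threshold is invisible to the system, which is exactly the trade-off the questions above point at.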

Beyond that, self-reporting can be complicated by factors outside the person's control. For example, South Korean residents have developed a habit of scouring the internet for information about people who tested positive, only to berate them online.

Many people fail to recognize the contribution of those who report their health status to healthcare authorities, and their behavior creates stigma and social alienation.

For the system to work as intended, the following factors should also be considered:

  • Everyone should have a smartphone. Even with approximately 3.5 billion smartphone users around the world, coverage falls far short of universal;
  • Everyone should download the app: even the most advanced technology fails at a low adoption rate. While the Google-Apple project would likely become very popular, the number of users would still fall short;
  • Legislative systems have to be transparent about data collection and sharing. So far acceptance has been hesitant, as the ICO (the UK's privacy regulator) has warned about potential misuse of contact tracing apps;
  • The supply of equipment and of healthcare professionals ready to evaluate people's condition would have to increase significantly, not only to cut down the number of false self-reports but also to resolve some of the logistics issues caused by the pandemic;
  • The flood of apps mimicking the mass platform would require constant, diligent monitoring, as there will be plenty of unverified apps spoofing the more popular platform. 

The chances of all these points coming together are extremely slim, and the conclusion can be drawn fairly easily.

Google and Apple, who lead the technological battle against the virus, can be viewed as problematic heroes: they try to help but ultimately create other issues. We are on the verge of one of the biggest experiments ever, and the companies leading it don't have the best track record of securing user data. 

Notable Bluetooth Vulnerabilities in Android and iOS Devices

Beyond the unlikelihood of mass adoption of these apps, Bluetooth technology as a whole is far from secure.

For example, broadcasting random number strings instead of the device's permanent Media Access Control (MAC) address does not always keep Bluetooth use anonymous, as there is a way to identify a user by extracting identifying tokens from the payload of advertising messages.

Those tokens stay static long enough to become an additional factor for user identification. 
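
The attack is easy to illustrate: if any payload field outlives the MAC rotation, a passive listener can use it to re-link the "fresh" addresses. A toy sketch (the frame structure and field names are hypothetical, not the real advertising format):

```python
import secrets

def random_mac():
    """A fresh randomized MAC address, rotated to prevent tracking."""
    return ":".join(f"{b:02x}" for b in secrets.token_bytes(6))

# Hypothetical advertising frames from one device over time: the MAC
# rotates, but an identifying token in the payload stays static.
static_token = secrets.token_hex(4)
frames = [{"mac": random_mac(), "payload_token": static_token}
          for _ in range(5)]

# A passive listener groups frames by payload token, defeating the
# MAC randomization entirely.
seen = {}
for f in frames:
    seen.setdefault(f["payload_token"], set()).add(f["mac"])

for token, macs in seen.items():
    print(f"token {token} links {len(macs)} different MACs")
```

All five rotated MACs collapse back into a single identity, which is why researchers consider these static tokens an identification vector.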

This vulnerability concerns Microsoft and Apple devices.

When it comes to Android, we should highlight the recently patched BlueFrag vulnerability (CVE-2020-0022), which allowed attackers to steal data from nearby phones and deliver malware. The details about BlueFrag can be found in a recent ERNW Insinuator blog post.

To summarize, Bluetooth is not the answer to containing the spread of the novel coronavirus, but an exploitable platform that enables not only tracking but also data theft and malware delivery. All platforms have vulnerabilities, and Bluetooth can be exploited in a multitude of ways. 

Stay safe and responsible!
