The privacy problem with health-related apps is linked to insecure coding

17 August 2021

In his latest column for Digital Health, Davey Winder explores the privacy issues surrounding health-related apps.

A study published in the British Medical Journal has served to confirm an inconvenient truth: mobile health apps may not be as private as you think. I’m not convinced that’s the biggest issue with mobile health apps, truth be told.

Only 47% of the apps analysed complied with their own privacy policy

The cross sectional study, authored by Gioacchino Tangari, Muhammad Ikram, Kiran Ijaz, Mohamed Ali Kaafar and Shlomo Berkovsky, set out to analyse what user data is collected by health and fitness-related apps on Google Play, and thus to reveal any associated risks to privacy.

The researchers performed their in-depth analysis on a total of 15,838 globally available apps from the Australian Google Play store, with 8,468 non-health apps used for the baseline comparison. Of these, the vast majority (88%) were using either cookies or some form of tracking identifier relating to user activity, and 28% of the apps didn’t have any privacy policy at all. Of those that did, only 47% actually complied with that policy.

What sort of data are we talking about here? The usual device identifiers and cookies plus contact information, mostly. The kind of thing that’s used for tracking and profiling by advertisers, in other words. The researchers concluded that, when compared with the baseline, the mobile health apps “included fewer data collection operations in their code, transmitted fewer user data, and showed a reduced penetration of third party services.” Which is good news. Digging into the data further, it became clear that health and fitness apps were more data-hungry than medical apps and more likely to share this data, with “integration of adverts and tracking services” more pronounced.

Most users are ill-equipped to make informed choices

Tom Davison, the technical director at mobile device security specialists Lookout, says that while apps do make use of the “robust permissions models” provided by both Apple and Google, “in order to use an app, users effectively have no choice but to accept permissions and agree to terms and conditions.”

This is as it’s always been, of course, and the decision is ultimately that of the user. But is that decision based on an understanding of the choices offered? Davison argues that the “awareness of users about how they are trading data for functionality remains woefully low.”

I’m inclined to agree, historically speaking, but the privacy labels introduced by Apple for iPhone and iPad users, at least, have gone some way towards clarifying which collected data is used to track you, which is linked to you, and which is not. These labels give users the opportunity to opt for a less intrusive app before downloading. Android users are still waiting for this transparency nod, with apps on the Google Play store requiring the user to click through links to see the details.

Then there are the cookie notices that appear when you start using an app or visit a site, which are a different kettle of fishy smells altogether. Most are so convoluted that, far from clarifying anything, they almost seem, and I’m shocked I tell you, designed to direct the user to click ‘accept all’ and move on.

“Most users are not equipped or prepared to sift through the legalese to fully understand the trade-offs,” Davison says, “and other than by reading these lengthy privacy policies, users have very few ways to validate how apps access, store, transmit, secure or share data.”

A Google spokesperson told The Guardian newspaper, “Google Play developer policies are designed to protect users and keep them safe. When violations are found, we take action. We are reviewing the report.”

Privacy policies are the least of your mobile health app worries

OK, I lied: I’m not shocked at all about seeming attempts to obfuscate the whole data collection and usage process when it comes to health-related apps. I’m not actually convinced this is the biggest problem faced by users of them either, and here’s why.

That same study concluded that 23% of user data transmissions took place over insecure communication protocols, HTTP rather than HTTPS. That’s the first cybersecurity red flag for me. Others come from an earlier report, published by Which? at the start of the year. This also looked at health and fitness apps and services, but from a security as well as a privacy perspective.
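On Android the platform-level control is to block cleartext traffic outright (the Network Security Configuration does this by default for apps targeting Android 9 and later), but an app can also refuse at the code level to send anything over plain HTTP. Below is a minimal Kotlin sketch of that idea; the endpoint URL and payload are placeholders I have invented for illustration, not details taken from the study or any real app.

```kotlin
import java.net.URL
import javax.net.ssl.HttpsURLConnection

// Sketch: refuse to transmit user data unless the endpoint uses HTTPS.
fun postHealthData(endpoint: String, payload: ByteArray) {
    val url = URL(endpoint)
    require(url.protocol == "https") {
        "Refusing to send user data over '${url.protocol}': HTTPS is required"
    }

    val connection = url.openConnection() as HttpsURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.setRequestProperty("Content-Type", "application/json")
    connection.outputStream.use { it.write(payload) }

    println("Server responded with HTTP ${connection.responseCode}")
    connection.disconnect()
}

fun main() {
    val body = """{"steps": 8042}""".toByteArray()
    // This call is rejected before any data leaves the device,
    // because the scheme is plain HTTP.
    runCatching { postHealthData("http://api.example.com/v1/activity", body) }
        .onFailure { println(it.message) }
}
```

A guard like this is cheap insurance: even if a configuration file or remote setting later points the app at the wrong scheme, sensitive data never travels in the clear.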

The Which? investigation found everything from apps that allowed the weakest of passwords, to passwords stored unencrypted on the device itself, “more cookies than a bakery” in many cases, and uncertainty amongst lawyers as to whether the apps were General Data Protection Regulation (GDPR) compliant, at least in spirit. If you thought the red flags were flying already, it gets worse.

Insecure APIs at the heart of the problem

Alissa Knight, a well-respected security researcher and industry analyst, has authored a report, published by mobile security specialists Approov, which exposed application programming interface (API) hacking risks in all 30 of the popular mobile health apps investigated. Thirty apps that, the report suggests, have exposed more than 20 million users to potential attacks from cybercriminals.

The biggest problem appears to be the use of ‘hardcoded’ API keys, which were present in 77% of the apps. Look, any hardcoded credentials are a bad thing: it doesn’t take a security genius to realise that embedding such things into app source code could end badly. Properly securing API access is essential when you are talking about apps that handle sensitive data, such as patient records and medical imaging.
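To make that concrete, here is a hedged Kotlin sketch of the anti-pattern next to one common mitigation: instead of shipping a long-lived secret inside the app, the client asks its own backend for a short-lived, user-scoped token after sign-in. Every name here (LeakyConfig, TokenService, RecordsClient) is illustrative and not drawn from the Approov report or any real app.

```kotlin
// Anti-pattern: a long-lived secret baked into the app source. Anyone who
// decompiles the APK can read it and call the backend API directly.
object LeakyConfig {
    const val API_KEY = "sk_live_XXXXXXXXXXXX"   // placeholder, not a real key
}

// Safer sketch: the app never holds a long-lived secret. After the user signs
// in, it exchanges the session for a short-lived token minted server side.
data class AccessToken(val value: String, val expiresAtEpochSeconds: Long)

interface TokenService {
    // Issued by the app's own backend; scoped to one user and quick to expire.
    fun fetchAccessToken(sessionId: String): AccessToken
}

class RecordsClient(private val tokens: TokenService) {
    // Builds the Authorization header for an API call using the fresh token.
    fun authHeader(sessionId: String): Pair<String, String> {
        val token = tokens.fetchAccessToken(sessionId)
        return "Authorization" to "Bearer ${token.value}"
    }
}
```

The point of the contrast is simple: a key compiled into the binary protects nothing once the app is in the wild, whereas a token minted per user and per session limits both who can call the API and for how long.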

In 100% of the apps investigated during this research, the API endpoints were vulnerable to one well-known attack type: BOLA. Broken Object Level Authorisation attacks are a favourite way to gain unauthorised access to data. The Open Web Application Security Project (OWASP) has put this at the top of its ‘Most Critical API Security Risks’ list, for example.
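For illustration only, the miniature Kotlin contrast below shows what a BOLA hole looks like and what the missing check is: the vulnerable path returns any record whose identifier a caller supplies or enumerates, while the fixed path verifies that the authenticated user actually owns the object. The types and data are hypothetical, not drawn from any of the 30 apps tested.

```kotlin
data class PatientRecord(val id: String, val ownerUserId: String, val notes: String)

class RecordStore(private val records: Map<String, PatientRecord>) {

    // Vulnerable: hands back whatever record ID the caller asks for, e.g.
    // /records/123 followed by /records/124, /records/125, and so on.
    fun getRecordVulnerable(recordId: String): PatientRecord? = records[recordId]

    // Fixed: object-level authorisation. The endpoint confirms the caller
    // owns the object before returning it.
    fun getRecord(recordId: String, authenticatedUserId: String): PatientRecord? {
        val record = records[recordId] ?: return null
        return if (record.ownerUserId == authenticatedUserId) record else null
    }
}

fun main() {
    val store = RecordStore(
        mapOf("rec-1" to PatientRecord("rec-1", "alice", "Blood pressure: normal"))
    )
    println(store.getRecordVulnerable("rec-1")) // leaks Alice's record to any caller
    println(store.getRecord("rec-1", "mallory")) // null: mallory is not the owner
}
```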

Bug-free code is all but impossible, so where do we go from here?

I’m all too aware that writing bug-free code is, frankly, impossible. However, I recently wrote an article for The Register based around a report that found 81% of developers knowingly released vulnerable apps.

Of course, not all app vulnerabilities are equal, and many will carry such a low real-world risk of exploitation that fixing them is not treated as a priority. Insecure APIs in health-related apps do not, I would argue, fall into this category.

So what’s the solution? Good question and please feel free to leave your answers in the comments below. What I do know is that security has to be better integrated into app development culture and developers given both the tools and the ‘agency’ to produce code that is as secure as it can be. The alternative is yet more health-related data breach headlines…
