Mental health data: it’s not so private after all
Third-party tracking cookies can be found on a large number of mental health-related web pages, the data from which can be taken and used to inform targeted advertising. So, how much of our personal health data really stays private – and how can we stop it from falling into the wrong hands? Chloe Kent reports.
For years, retailers have been storing data about consumers in order to provide them with targeted offers and advertisements. By tracking the demographics, likes and dislikes of an individual, manufacturers can make sure their products are seen by the audience most likely to make a purchase. If advertising algorithms can work out that you have recently bought a house, for example, you may then start to receive targeted advertising for paint and other DIY supplies.
Sometimes, however, targeted advertising can get more personal – and more sinister.
US retailer Target infamously ‘knew’ a teenage girl was pregnant before her own family did, when it sent a batch of coupons to her home address for baby clothes and cots. Her father, enraged that a major retailer might encourage a teenager to have a child of their own, complained to the family’s local store. Days later, he found himself forced to apologise, when he learned that his daughter was indeed expecting a baby.
Target assigned every customer a guest ID number, tied to their credit card, name or email address, and had 25 products that it used to assign each customer a ‘pregnancy prediction’ score. The teenager had bought enough of said products that the company automatically supplied her with vouchers for things her baby would soon need.
Cut to 2020, and companies aren’t just tracking your purchases, but some of the most personal data they can access from your web searches – data concerning mental health.
Analysis from UK information services company Rightly has found that of the 51 most popular web pages related to depression listed on Google UK, 86% contained third-party trackers for marketing purposes. These included the website of popular online counselling service BetterHelp, and the Priory Group’s webpage on depression.
Why do advertisers want this kind of information?
If a person’s web browser can be linked to online searches about depression, advertisers can target them more strategically. These wouldn’t just be adverts for private counselling providers or wellness apps such as Headspace or Calm, although these do appear with increasing regularity after a few anxiety or depression related Google searches. A person who is struggling with their mental health may feel bad about their appearance, for example, and therefore be targeted by adverts for cosmetics and fashion companies.
This happens because platforms such as Google, Facebook and Amazon allow third-party advertisers to target people who exhibit certain kinds of online behaviour on their domains. If a user visits a few depression-related web pages that track their information, this can then be passed onto third-party advertisers, who will bid to show the type of targeted ad a depressed person may be more likely to click on when they later use Facebook.
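The kind of audit Rightly describes can be sketched in a few lines of code. The example below is purely illustrative – the page markup and the first-party domain `example-health-site.org` are made up, though the Facebook pixel script it embeds is a real, widely used tracker. It uses only Python’s standard library to list the third-party hosts a page loads resources from, which is the basic signal such analyses look for:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class TrackerFinder(HTMLParser):
    """Collects the external hosts a page loads scripts, images or frames from."""

    def __init__(self, page_domain):
        super().__init__()
        self.page_domain = page_domain
        self.third_party = set()

    def handle_starttag(self, tag, attrs):
        # Only resource-loading tags can phone home to a third party.
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Anything served from a different host counts as third-party.
        if host and host != self.page_domain:
            self.third_party.add(host)

# Hypothetical markup for a depression-related page embedding a tracking pixel.
page_html = """
<html><body>
  <script src="https://example-health-site.org/app.js"></script>
  <script src="https://connect.facebook.net/en_US/fbevents.js"></script>
  <img src="https://www.facebook.com/tr?id=123" width="1" height="1">
</body></html>
"""

finder = TrackerFinder("example-health-site.org")
finder.feed(page_html)
print(sorted(finder.third_party))
# prints ['connect.facebook.net', 'www.facebook.com']
```

A real audit tool would also follow redirects, inspect cookies and watch network requests made by JavaScript, but even this crude scan makes the point: visiting one page can quietly announce your interest in depression to several other companies at once.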
Rightly content executive Klara Lee says: “If advertisers understand exactly what you feel and when you feel it then they can target you a lot more accurately, because they can understand when you’re in a vulnerable state of mind.”
Shouldn’t data protection laws stop this from happening?
The introduction of the General Data Protection Regulation (GDPR) in 2018 was supposed to tighten private companies’ access to personal information about individuals in the EU, but loopholes can allow this sensitive kind of data to be mined regardless.
Lee says: “Although it’s illegal to share mental health data for advertising purposes, Googling ‘what is depression’ or ‘am I depressed’ doesn’t necessarily mean you are ill. It can therefore be legally argued that this doesn’t count as your mental health data. It’s not as clear cut as something like records from a therapist.”
It’s true that web history doesn’t constitute a medical diagnosis, so advertising companies can legally argue that they’re not strictly trading in health data when they track this kind of information. Regardless of this fact, analysis by Privacy International has found that many websites process personal mental health data like this in a way that is neither transparent nor fair, lacking a lawful basis under GDPR on multiple grounds.
Under Articles 13 and 14 of the GDPR, an individual is entitled to know the purpose for which their personal data will be used and who the recipients have been or will be. Despite this, cookie consent pop-ups are usually opaque to the average web user and lack meaningful explanation of how information about them will be used – even though Article 12 of GDPR requires this information to be provided in a concise, transparent and easily accessible format.
Article 5(1)(a) of GDPR also requires personal data to be processed lawfully, while Article 6 sets out an exhaustive list of legal bases on which personal data can be processed.
“Given the overlap with ePrivacy laws,” reads the Privacy International report, “the only basis applicable to the sharing of this data with third parties is consent, and in the case that it is special category personal data under Article 9(1) of the GDPR, i.e. personal data revealing data concerning health, this would be explicit consent.
“However, in this research, we found that many mental health websites don’t provide users with a genuine or free choice. This is particularly concerning since visiting health-related websites can reveal very specific information about the user’s health. Such is the case when taking a depression test online.”
It is all well and good putting regulations like GDPR in place, but they can feel rather meaningless when their enforcement is so lacking.
It’s not just mental health data that’s up for grabs
As well as webpages visited, this kind of data can also be mined from mobile phone apps. Research from Privacy International, published in September last year, found that several popular menstrual tracking apps share user data with Facebook via the social network’s software development kit (SDK).
The SDK allows software developers to receive analytics on which aspects of their app are most popular with users, as well as allowing them to log in with their Facebook account instead of signing up directly with the app developer. More crucially, however, Facebook may also use SDK data to provide individual users with personalised adverts.
While Facebook is only able to do so with the user’s consent, said consent is usually buried deep in a terms and conditions page that most users are unlikely to read.
“In the US, it’s estimated that an average person’s personal data is worth $0.10,” says Lee. “But a pregnant woman is $1.50.”
It’s Target-gate all over again. Pregnant people drastically change their spending habits in the months before their child is born, so information about whether or not someone has a baby on the way is immensely valuable to advertisers. If a user doesn’t register their period, it could be a sign that there’s a moneymaking opportunity coming their way, as journalist Talia Shadwell found out.
Shadwell wasn’t pregnant – she’d just forgotten to log her most recent period. Seemingly without warning, she suddenly found herself bombarded by mother and baby ads. When she later corrected the app, the ads stopped.
“Because I had forgotten to log a cycle, the app likely concluded I was pregnant and began communicating the information to third party apps and algorithms,” Shadwell tweeted in November 2019.
Alongside missed periods and other potential pregnancy indicators, many of these apps record details on mood, general physical health, sexual activity, alcohol and drug consumption, sleep duration and weight.
Can any of this be avoided?
The Trump administration has said that as part of any post-Brexit trade agreement, it wants unrestricted access to the 55 million health records in the UK, which it estimates have a total value of £10bn a year. So, while it’s hard to put a price on this more covert health data market, it’s safe to say that it’s likely to be rather high.
But how do consumers avoid having their personal data catalogued in this way? When there are third-party cookies at seemingly every turn, it can feel almost impossible to prevent tech giants from building an intimate portrait of one’s personal wellbeing – but there are steps people can take.
“There are definitely things that you can do to protect your data from being used against you, although none of them are foolproof,” says Lee. “You could turn off ad tracking in your browser or download an ad blocker. You could use a privacy-focused browser like Mozilla Firefox or Brave.
“If you don’t really want the hassle of having to switch browsers or anything, you could open a different tab called DuckDuckGo – that’s like a different search engine and it’s really privacy-focused, so if you want to make a few more sensitive searches then you can use DuckDuckGo, and then just return to Google for regular use.”
Many consumers feel that, on principle, they shouldn’t have to put this much effort into guarding their personal health information – third parties simply shouldn’t be tracking it in the first place.
But like it or not, this is the quandary that modern internet users find themselves in. Do nothing, and risk having hugely personal information used against you to sell consumer goods – or fight a losing battle to keep this data private.