
How Mental Health Apps Are Exploiting User Data

Lauren Dwyer

While smartphones have long been touted as devices that make our mental health worse, app developers, particularly in the world of self-help and behaviour change, have done everything in their power to change that narrative. Apps such as Calm, Headspace (which even has music led by John Legend, because why not), Moodpath, and well over 10,000 others have brought the power of mental health treatment directly to the customer, with free* and paid versions of their apps that put full-cost in-person therapy to shame.

(*free as in “we’ll take all of your data, your hopes, and maybe even your dreams, and sell them to the highest bidder”)

Unlike in-person or even telehealth therapy, these "self-serve" mental health apps offer users "cheap," easily accessible therapeutic services for a range of mental health disorders. Through self-reported symptoms, location tracking, screen-activity analysis, and motion monitoring, these apps paint a picture of the user's mental state so that they may then work towards recovery. To do so, they require users to consent to data collection ranging from location tracking and the capture of audio and video to the reading of sensitive log data. Users are often encouraged to share their progress and mental health status across the social features of the apps, as well as on other social media sites, reinforcing a culture of data sharing and allowing additional parties to access the collected data.

Data collection in mental health apps is often concealed from the user, whether through privacy policies and terms-of-agreement documents written at inaccessibly high reading levels or through the intentional omission of links to those documents altogether. Even when the policies ARE included, let's be real: who actually has the time to read 48 pages of legal jargon before signing up for help with their panic attacks? The problem is that these documents state outright that the data collected is both extensive and personal, and that it is often sold to third-party and governmental organizations. The potential for consumer exploitation is set in the context of patients' willingness to consent to the collection of personal and private data out of a perceived need and a desire to ease the symptoms of mental illness. Given the therapeutic nature of mental health apps, this can leave potential users feeling as if they have no choice but to consent to invasive data collection, shell out hundreds of dollars for in-person therapy, or, in many cases, forgo getting help at all.
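(If you're wondering how researchers put a number on "too hard to read," the usual tool is a readability formula such as the Flesch–Kincaid grade level, which combines average sentence length with average syllables per word. Here's a minimal Kotlin sketch; the vowel-group syllable counter is a crude stand-in of my own, not the exact tooling the studies used:)

```kotlin
// Minimal sketch of the Flesch–Kincaid grade-level formula often used to
// score privacy-policy readability. The vowel-group syllable counter is a
// crude heuristic, not the cited studies' exact tooling.
fun fleschKincaidGrade(text: String): Double {
    val sentences = text.split(Regex("[.!?]+")).count { it.isNotBlank() }.coerceAtLeast(1)
    val words = text.split(Regex("\\s+")).filter { it.isNotBlank() }
    if (words.isEmpty()) return 0.0
    val syllables = words.sumOf { w ->
        Regex("[aeiouy]+", RegexOption.IGNORE_CASE).findAll(w).count().coerceAtLeast(1)
    }
    // Grade = 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    return 0.39 * words.size / sentences + 11.8 * syllables / words.size - 15.59
}

fun main() {
    // A made-up but representative line of policy legalese.
    val policyExcerpt = "The Service Provider may disclose aggregated, " +
        "pseudonymized user telemetry to affiliated third-party entities " +
        "pursuant to the conditions enumerated herein."
    println("Estimated grade level: %.1f".format(fleschKincaidGrade(policyExcerpt)))
}
```

Run that on a typical policy and the grade level lands well above what most adults read comfortably, which is exactly the accessibility problem the studies describe.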

What data is being collected? 

Results from separate 2019 studies found that mental health applications were collecting users' personal and personally identifiable information, including through 'dangerous' permissions. The apps did so under the rationale of contacting the user, improving app performance, and providing the services the app advertises, yet data collection occurred well beyond these base needs. The permissions in question cover everything from users' names and logged data to location services, audio and video monitoring and access, contact data, and the ability to read, modify, and delete data on the user's USB and SD card storage.
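To make the "dangerous permissions" point concrete: on Android, each of these capabilities maps to a permission the app must request at runtime. The Kotlin sketch below is purely illustrative, a hypothetical mood-tracker activity rather than any real app's code, but the permission constants are the real Android ones behind the capabilities these studies flagged:

```kotlin
import android.Manifest
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat

// Hypothetical mood-tracking screen requesting the kinds of "dangerous"
// permissions the 2019 studies found in mental health apps. Each entry
// triggers a consent dialog the user can wave through in seconds.
class MoodTrackerActivity : AppCompatActivity() {

    private val requestedPermissions = arrayOf(
        Manifest.permission.ACCESS_FINE_LOCATION,   // location tracking
        Manifest.permission.RECORD_AUDIO,           // audio monitoring
        Manifest.permission.CAMERA,                 // video access
        Manifest.permission.READ_CONTACTS,          // contact data
        Manifest.permission.WRITE_EXTERNAL_STORAGE  // modify/delete SD card data
    )

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // A user in crisis is likely to tap "Allow" on all of these
        // without reading what they grant.
        ActivityCompat.requestPermissions(this, requestedPermissions, REQUEST_CODE)
    }

    companion object {
        private const val REQUEST_CODE = 42
    }
}
```

Nothing in that dialog flow explains what the app will do with the data once granted, which is precisely the gap the privacy policies fail to fill.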

How and when data collection and sharing occur is often left out of mental health apps' privacy policies. Users may know that their personal data is being collected, but the exact details of how it is obtained, whether through active use of the app or through passive collection, are not spelled out. Where retained data is stored, how long it is held, and the parties with which it is shared are often omitted as well, with policies noting only that the data has been collected and will be shared with unnamed third parties at some point in the future.

So you have my data, so what? 

So where does all this data go? In an analysis of the top 100 mental health applications offered by the Apple App Store and Google Play store, Robillard found that over two-thirds of app privacy policies informed users that their data may be shared with third parties. This finding is echoed in other studies, which went further and showed that data was shared with third parties even in the absence of a privacy policy detailing these transactions. Sharing with a third party may at first feel like no big deal, especially if you already have a social media presence or a Google account; companies already have all our information anyway, right? The problems start when we look at the real-world implications: that data gets sold to hiring companies or government agencies, and suddenly the mood tracker that was meant to help users prevents them from getting a job or immigrating to a new country.

Third-party access is not the only data-sharing concern: the invasion of privacy can come from the community of users themselves. One particularly prominent case of user data exploitation with real-world implications is the Radar app from the Samaritans charity. This Twitter app let users monitor their friends' accounts for distressing messages, notifying them whenever someone they followed posted something the app deemed suggestive of depressive or suicidal attitudes. The last thing I need is Bob from accounting sounding the alarm because I tweeted a Taylor Swift break-up lyric. The app has since been shut down, with invasion of privacy and public outcry cited among the main reasons, and Samaritans Radar issued an apology in November 2014.
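Samaritans never published Radar's detection logic, but the Bob-from-accounting problem is easy to demonstrate with the kind of naive keyword matching such tools plausibly rely on. A hypothetical Kotlin sketch:

```kotlin
// Hypothetical keyword-based "distress" detector, in the spirit of what a
// tool like Radar might have used (its real logic was never published).
// Naive matching cannot tell a crisis from a song lyric.
val distressKeywords = listOf("hopeless", "can't breathe", "want to disappear")

fun flagsAsDistress(tweet: String): Boolean {
    val text = tweet.lowercase()
    return distressKeywords.any { text.contains(it) }
}

fun main() {
    val tweets = listOf(
        "Everything feels hopeless and I don't know who to talk to",
        "and I can't breathe without you, but I have to"  // just a break-up lyric
    )
    for (t in tweets) println("flagged=${flagsAsDistress(t)}  \"$t\"")
}
```

Both tweets get flagged, but only one of them is a cry for help; the other is someone quoting Fearless. That false-positive rate, broadcast to a person's entire follower list, is what turned Radar from a safety net into a surveillance tool.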

Silver linings 

So why use these apps at all? Despite 1 in 5 adults suffering from mental illness in their lifetime, psychological health care services often have long wait times and high costs. One benefit users perceive is increased access to mental health information. For vulnerable populations, even inquiring about available mental health services can be met with stigmatization; these populations in particular can benefit from apps that maintain levels of privacy adequate to minimize the chance of negative stigmatization.

Mental health apps are further perceived to reduce barriers to accessing common mental health services, particularly when it comes to users' wallets. A single one-hour session with a registered psychologist or social worker in North America can cost anywhere from $50 (on a limited, sliding, needs-based scale) to $240. Users from lower socio-economic brackets who cannot afford even baseline therapy costs on a sliding scale can turn to mental health apps as providers of knowledge and treatment.

Not only are mental health apps easier on the wallet, they also offer relief from egregiously long wait times: wait lists for registered mental health services in Canada can stretch past a year. This is particularly troublesome for individuals in crisis who may not be able to afford admission to a psychiatric hospital. Mental health applications offer immediate assistance that can be crucial for those who cannot wait.

Another barrier these apps can overcome is distance: some users access mental health services from remote locations where available health care is reduced or of low quality.

Despite these benefits all sounding great (who wouldn't want affordable, accessible, and instantaneous mental health care?!), the actual benefits mental health apps provide don't quite live up to expectations. A study of apps used to manage symptoms of depression found that the apps showed low adherence to basic, evidence-based principles of cognitive behavioural therapy or behavioural analysis.

This isn't to say that the apps are completely ineffective. A 2018 study on the effectiveness of internet and mobile mental health interventions (IMIs) found that, compared with individuals who received no treatment, IMIs demonstrated a high level of efficacy. Compared with in-person treatment programs, however, there is little evidence for or against IMIs as effective methods of treatment, and no sufficient long-term IMI studies have been completed.

These apps have risen to fill the gaps left by the current western model of healthcare, and though they offer solutions to some symptoms, they do not address systemic problems. In catering to the needs of the vulnerable populations who have slipped through the cracks of the healthcare system, mental health apps have found a way to commodify and commercialize mental illness, exploiting user data to turn a profit.


The Explanatory Journalism Project is supported in part by funding from the Social Sciences and Humanities Research Council.