Health and wellness apps are becoming increasingly popular. In fact, in 2020, health and fitness app downloads rose by 30%. However, it isn’t just the physical health market that app developers are cornering. The use of mental health apps is also on the rise.
Many users were reluctant to book in-person appointments during the Covid-19 pandemic and turned to apps instead. Apps like Happify, Woebot, and Moodfit provide users with on-demand advice, mental health exercises, and more.
While this development is in many ways a positive step, with people taking control of their mental as well as their physical health, there are potential pitfalls. Most significant are the ethical, compliance, and security issues involved in handling emotional data.
Understanding the Space
According to the American Psychological Association, between 10,000 and 20,000 mental health apps are available to download. This figure is staggering, considering the sensitive nature of the content and services they provide and handle.
Of course, with so many apps available, some inevitably fall short of the standards consumers would expect. The first issue is legitimacy.
The mental health app market is notoriously difficult to govern. Why? Firstly, there are no standardized regulatory procedures to manage them, and secondly, the sheer volume of apps makes it nearly impossible to police.
Some apps conceived by underqualified developers don’t even work. Others can be plain dangerous, especially when providing unregulated advice for severe mental health conditions, such as bipolar disorder.
Security Flaws
The other major concern with mental health apps is data protection. Many mental health apps don’t include sufficient security measures to prevent data breaches, and there are several troubling examples where this lack of security has had disastrous consequences.
One of the most significant data breaches occurred at the Finnish psychotherapy start-up Vastaamo. The company failed to secure its servers effectively, and hackers stole and published sensitive notes and other personal information from 40,000 patients.
Incredibly, the attackers could gain access via the open internet; there were no firewalls in place, or even password protection. This was a well-publicized case, but recent research from Australia’s Cyber Security Cooperative Research Centre suggests the next major attack could be around the corner.
The team analyzed 27 mental health apps, each with at least 100,000 users and a four-star rating on Google Play. They found that each of the apps tested should be considered high-risk.
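The most basic of these failures, plain-text credentials and unprotected records, have well-understood fixes. As a purely illustrative sketch (not a description of how any of these apps or Vastaamo’s systems actually work), the Python snippet below shows one standard-library approach: never store a password directly, but derive a salted key with scrypt and compare it in constant time. The function names are hypothetical.

```python
# Illustrative sketch only: salted password hashing with the standard library.
# A production system would typically use a vetted library (e.g. argon2/bcrypt)
# and keep the stored records encrypted at rest as well.
import hashlib
import hmac
import secrets


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted key from the password instead of storing it in plain text."""
    salt = secrets.token_bytes(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key


def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Recompute the derived key and compare it in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)


# Usage: only the salt and derived key are ever written to the database.
salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("wrong guess", salt, key)
```

None of this is exotic; the point is that breaches like Vastaamo’s stem from skipping baseline measures, not from attackers defeating sophisticated defenses.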
Is Emotional Data Analytics Ethical?
Emotional data is the bread and butter of mental health apps. Not only does this information enable the app to make the decisions required for it to function, it drives revenue too.
Emotional data is data gathered about users that enables applications to provide content suited to their needs, preferences, and even desires. There is a strong ethical argument against monetizing the emotional data captured via mental health apps, but regardless, this data is used to optimize ads and boost profits.
Sometimes, third parties even receive emotional data via mental health apps. Although this is not illegal in most cases, it is often far from transparent, and many users are unaware that apps use their data for these purposes.
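Transparency here is partly a design decision. The sketch below, assuming a hypothetical `ConsentSettings` flag and `AnalyticsClient` (not any real app’s or SDK’s API), shows a consent-first pattern in Python: emotional data only leaves the device if the user has explicitly opted in, and free-text notes are stripped even then.

```python
# Illustrative consent-first sketch; all names here are hypothetical.
from dataclasses import dataclass


@dataclass
class ConsentSettings:
    share_with_third_parties: bool = False  # sharing is off by default


@dataclass
class MoodEvent:
    mood: str   # e.g. "low", "anxious"
    note: str   # free-text journal entry, never shared


class AnalyticsClient:
    """Stand-in for a third-party analytics or ad SDK."""
    def send(self, payload: dict) -> None:
        print(f"sending to third party: {payload}")


def share_mood_event(event: MoodEvent, consent: ConsentSettings,
                     client: AnalyticsClient) -> None:
    if not consent.share_with_third_parties:
        return  # nothing leaves the device without explicit opt-in
    # Even with consent, share only the coarse mood label, not the note.
    client.send({"mood": event.mood})


# Usage: with the default settings, nothing is sent at all.
share_mood_event(MoodEvent("low", "private journal entry"),
                 ConsentSettings(), AnalyticsClient())
```

The design choice is simply that sharing is opt-in and minimized by default, rather than something users must discover and switch off.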
The ethical question here is about predictive analytics. When mental health apps use historical emotional data to predict what might encourage a user to spend money, many will perceive this as exploiting the vulnerable.
Of course, numerous platforms use emotional data for predictive analysis; take Facebook. However, there is a powerful ethical case for regulatory intervention when it comes to the particularly sensitive information shared in confidence within a mental health app.
Wrap Up
In the end, the arguments surrounding mental health apps come down to three things: transparency, competency, and the customer. There is no denying the benefits of mental health apps to users, particularly at a time when access to health services is, at best, difficult for many people.
However, only teams with the professional knowledge required to provide these services should attempt to launch a mental health app. If the development team doesn’t have this knowledge, it needs to onboard someone who does. Beyond this, security must take a front seat at every stage of development.
Finally, if an app operates on a free model funded by advertising, users must have the information they need to decide whether or not they are comfortable sharing so much data. Even paid-for apps that distribute user data must be fully transparent about doing so.