
Are You Keeping User Data Safe in Mental Health Apps?

MARLENE MAHEU

November 22, 2022 | Reading Time: 4 Minutes


Mental health apps use various forms of technology to serve people with mental health concerns or those in emergency and crisis situations. For example, bots and chat features often invite users to enter sensitive information, and users comply, trusting that the mental health app is secure and that their data will remain confidential. However, data protection is not always guaranteed on mental health apps. User confidentiality can be breached, and data is often misused, according to researchers Gooding and Kariotis (2022). Their recent Scientific American article proposes that standards of confidentiality and ethical handling of healthcare information should be applied to mental health apps.

Data Misuse and Confidentiality in Mental Health Apps and Other Online Public Resources

One example of the complications that can arise with mental health apps allegedly involves Crisis Text Line, a nonprofit organization serving people in mental health crises. One of the biggest organizations in the industry, Crisis Text Line has received funding from some of the largest players in Silicon Valley. However, Politico reported in January 2022 that Crisis Text Line was sharing data gathered from users with its for-profit spinoff Loris AI, a company that uses artificial intelligence to create chatbot-based customer service products.

The Politico article explains that the two companies were financially and otherwise entwined, but both companies deny engaging in unethical behavior.

  • Whether or not such allegations are true, they should give licensed therapists enough pause to conduct a detailed online search into the financial dealings of companies that own or operate mental health apps before encouraging the consumer public to get involved.
  • To assist clinicians in being better informed of the financial dealings behind the scenes, Telehealth.org is preparing to launch a free “Telehealth Financial Newsletter” starting January 1. Interested parties can register for the weekly email newsletter now.

Other Potentially Problematic Databases

Information can be leaked in many ways. In yet another alleged story of digital data-sharing gone wrong, U.S. border authorities purportedly denied entry to a group of Canadian citizens because they had a history of suicide attempts. This noncriminal mental health information was obtained by law enforcement through a police database. The reports led the Office of the Privacy Commissioner of Canada to clarify that such database use is unacceptable.

Cybercrime

Aside from the intentional, problematic sharing of databases created by mental health apps and other groups, cybercriminals lurk in the shadows, waiting for slip-ups that leave a database exposed. The causes of such privacy violations range from hiring a single programmer without adequate screening and access precautions to foreign cyberattacks that amount to cyberwarfare against the United States.

While these risks are difficult for practitioners to identify, due diligence on the part of clinicians working with consumers always involves searching for information not only about the platform itself but also about the companies involved in creating and funding the mental health app or platform.

Research on Privacy in Mental Health Apps

Researchers Gooding and Kariotis (2022) surveyed 132 research studies testing automation technologies (e.g., chatbots) in online mental health apps and sites. They found that 85% of these studies did not discuss how the technologies could be used in ways that violate confidentiality or otherwise harm users.

For instance,

  • Of the studies surveyed, 53 used publicly visible social media data to identify a person’s mental health diagnosis. Known in the industry as “scraping” social media, this common digital practice involves harvesting health-related information from large social media sites such as Facebook, Twitter, Instagram, or TikTok without user consent.
  • None of the studies discussed the possible discrimination that users could experience if these data were released and made public, and
  • Few studies (3%) mentioned input from the users themselves, the very people who will suffer the consequences of data-sharing and other confidentiality problems with mental health apps.

Proposed Solutions to Keep Mental Health Apps Safe for Users

Gooding and Kariotis (2022) proposed that AI developers of mental health apps focus on the long-term consequences and potential harm to people who use these technologies. Issues of concern include how user data is handled and shared and what the next steps should be if confidentiality is breached. They also recommend that institutional review board members, who approve scholarly studies before they are conducted, and journal editors require this information before research is conducted or published. Additionally, standards that promote the inclusion of lived experience in mental health research should be adopted immediately.

Practitioners may want to consider taking a few minutes of clinical time with clients and patients to advise them to exercise caution everywhere online.

  • Clients and patients sharing personal stories of mental health challenges may be encouraged to delete such references from their profiles.
  • Better yet, they may be advised to avoid making such references to their health altogether.
  • Another general safety suggestion is to avoid completing any form field that is not required. Details of one’s hobbies, favorite books, favorite pastimes, or similar personal information are best left unmentioned.
  • Use passwords everywhere, with two-factor authentication where possible, and change them regularly. This safety measure and other phishing-prevention suggestions cannot be stressed enough, even if such precautions are inconvenient.

Data misuse can have grave consequences, including opening avenues for discrimination, biased and unethical decision-making about individuals and communities, and effects on health insurance costs, among other concerns. Regulations prioritizing user rights are critical to ensuring that mental health apps exist to serve the user and improve health and quality of life.


