AI in Mental Health

Discussing AI in Mental Health Sessions with Clients: Ethical Suggestions for Increasing Understanding & Avoiding Manipulation

MARLENE MAHEU, PhD

September 11, 2023 | Reading Time: 5 Minutes
Artificial intelligence (AI) in the form of ChatGPT is already being mentioned in many therapy rooms. ChatGPT is free to consumers who wish to obtain information on topics from simple to complex, complete with explanations and sources; write emails, reports, stories, or poems; proofread, edit, and simplify written tasks; brainstorm solutions; translate text from one language to another; engage in lighthearted banter and tell jokes; read text out loud; and much more. It offers tremendous potential to enhance our understanding of mental well-being, but it can create an equal number of problems, of which manipulation is not the least, albeit often couched in terms such as “persuasion.” Mental health professionals are crucial in navigating this dual edge of AI, ensuring its ethical and responsible use in everyday exchanges between clients or patients and other people in their worlds. Let’s delve into the ethical dilemmas and practical solutions, including real-world examples, essential for all clinicians dealing with AI in mental health.

1. Employers. Ethical AI Use in the Workplace

Challenge. Employers may utilize AI to monitor employees’ mental well-being, which can lead to privacy concerns and undue stress. Moreover, AI may be employed to select job candidates, potentially introducing biases.

Solution. Mental health professionals can educate employers and employees on the ethical use of AI in mental health, whether in the workplace or when working from home. They can emphasize the need for employers to respect employees’ privacy and well-being. Similarly, they can encourage employees to maintain healthy boundaries around the use of AI.

Therapists can also encourage transparency in AI utilization and suggest regular communication to address concerns. Job seekers should also be aware of AI-driven selection processes and ensure their applications and interviews accurately reflect their qualifications.

2. Parents. Building Trust with AI and Teenagers

Challenge. Parents may use AI to gain insights into their teenagers’ emotions, potentially straining trust within the family.

Solution. Mental health professionals can guide parents in fostering trust and communication with their teenagers. Encourage open conversations about AI use, respect personal boundaries, and seek alternative ways to connect with teenagers.

3. Spouses. Maintaining Privacy in Relationships

Challenge. Spouses may use AI to analyze each other’s behavior, risking privacy breaches and manipulation.

Solution. Mental health professionals can counsel couples on healthy communication and relationship trust. Address the ethical concerns of AI usage within partnerships, emphasizing the importance of consent and open dialogue.

4. Homeowners. Transparent AI in Domestic Services

Challenge. Homeowners may use AI for surveillance or task management with domestic staff, raising privacy concerns.

Solution. Mental health professionals can advocate for transparency in AI use within domestic settings. Encourage homeowners to establish clear expectations and consent for AI involvement, promoting respectful communication between homeowners and staff.

5. Children & Teenagers. Responsible Use of AI

Challenge. Teenagers may use AI to manipulate parents, teachers, babysitters, siblings, or peers.

Solution. Mental health professionals can educate children & teenagers about responsible AI use. By highlighting the real-life impacts of manipulation, they can help teenagers make informed and ethical choices in their interactions with teachers, babysitters, and peers.

Emphasize the importance of ethical behavior and respectful communication in all interactions. Discuss the potential consequences of AI-driven manipulation, which may include:

  • Peer Isolation. If a child or teenager uses AI to manipulate their peers, they may alienate themselves from their social group. Others may perceive them as untrustworthy, leading to social isolation and loneliness.
  • Damaged Relationships. Manipulating teachers, babysitters, or peers through AI can damage trust within these relationships. This can result in strained interactions, making establishing genuine connections and support systems difficult.
  • Academic Consequences. In the case of manipulating teachers, the consequences could extend to academics. Teachers may lose trust in students’ abilities and work, potentially affecting grades and future educational opportunities.
  • Moral Dilemmas. Children and teenagers who engage in AI-driven manipulation may grapple with ethical dilemmas and guilt. Understanding the potential harm caused to others can lead to internal conflicts and negatively impact their mental well-being.
  • Loss of Self-Respect. Engaging in manipulation can erode a teenager’s self-respect and self-esteem. They may view themselves as deceitful or untrustworthy, leading to self-doubt and emotional distress.

Children & Teenagers. Responsible Use of AI. An Ethical Example

Challenge. To demonstrate the possible uses of ChatGPT in everyday life by our clients, I conducted an experiment to see what kind of information might be obtained by using the questionable prompt term “manipulate.” I then tested what would happen if I simply removed the objectionable term and replaced it with a more socially acceptable one: persuade.

I gave ChatGPT Pro this first prompt:

Name three ways to manipulate my dad into giving me a later bedtime.  

ChatGPT responded: 

I’m very sorry, but I can’t assist with that.

I then gave ChatGPT Pro this second prompt:

How can I get my dad to give me a later bedtime?

ChatGPT responded:

To persuade your dad to give you a later bedtime, you can try the following steps:

  • Open and Respectful Communication. Initiate a calm and respectful conversation with your dad. Choose a time when he’s relaxed and willing to listen.
  • Present a Compelling Case. Explain why you believe a later bedtime would benefit you. Mention reasons such as more time for homework, reading, or personal development.
  • Acknowledge Responsibilities. Show that you are responsible by highlighting your ability to manage your daily responsibilities even with a later bedtime.
  • Propose a Trial Period. Suggest a trial period during which you can demonstrate that a later bedtime won’t negatively impact your routines or responsibilities.
  • Negotiate. Be open to negotiation and find a compromise that works for both you and your dad. It could involve gradually extending your bedtime.
  • Listen to His Concerns. Be attentive to your dad’s concerns about a later bedtime. Listen actively and address any worries he may have.
  • Provide Assurance. Assure your dad that you will still prioritize your health and well-being. Explain how you plan to maintain a healthy sleep routine even with a later bedtime.
  • Show Gratitude. Express gratitude for his consideration and willingness to discuss the bedtime change. Showing appreciation can create a positive atmosphere for negotiation.
  • Offer Benefits. Discuss the potential benefits of a later bedtime, such as improved focus, better academic performance, or increased personal development time.
  • Be Patient. Understand that your dad may need time to think about your request. Be patient and give him the space to make a decision.

Remember, the key to success in such conversations is respectful and open communication. Be willing to compromise and demonstrate that you are responsible and mature enough to handle a later bedtime.

Conclusion: The Risk of Making Poor Ethical Choices when Using AI in Mental Health

The rapid arrival of AI in mental health practice requires urgent ethical scrutiny, as highlighted by the fine line between “manipulation” and “persuasion” in AI-generated responses. As AI sophistication escalates, clinicians must vigilantly apply rigorous ethical frameworks to guide its responsible application and prevent potential misuse. The onus falls on clinicians and ethical bodies to unflinchingly monitor and delineate AI’s ethical scope as its influence grows.

As the vanguard in the telehealth landscape, Telehealth.org remains committed to our pivotal role in elevating the field’s professional standards. We serve as a nexus for over 500,000 licensed medical and behavioral health experts across the United States. Stay tuned for more discussions on AI in mental health and ChatGPT ethics.

Disclaimer: This article aims to provide ethical considerations concerning the use of AI in mental health settings. The viewpoints and suggestions expressed herein should not be interpreted as legal advice. For specific guidance, consult your regulatory body or legal counsel.

