AI and Mental Health: Is It A Game-Changer for YOUR Practice?

MARLENE MAHEU, PhD

September 13, 2023 | Reading Time: 6 Minutes

Not sure what AI or ChatGPT can do for you? You have plenty of excellent company. Artificial intelligence, including platforms like ChatGPT, presents many opportunities to simplify your professional life, better serve the people who rely on you for care, amplify your contributions to the field, and, let's say it, increase profits. This article delves into a few practical applications of AI for clinicians in behavioral health care. For each category of potential services, I'll outline the ethical and legal considerations of venturing through this technological frontier. This article addresses three of the functions currently available to clinicians; those available to large enterprise systems will be reviewed in a later article. Please comment below if you are aware of anything I may have missed.

1. Information Retrieval and Research via AI in Mental Health Care

Programs like Elicit and Claude provide advanced research capabilities that exceed traditional methods. Elicit, for example, can find scientific papers on a question or topic, extract information from up to 100 papers at a time, and present the results in a structured table. It can also identify concepts that recur across papers and synthesize them into a summary of the findings.
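
For readers curious about what this kind of structured extraction looks like under the hood, here is a minimal sketch in Python. It is not Elicit's actual pipeline; it assumes OpenAI's `openai` package, an `OPENAI_API_KEY` environment variable, and illustrative abstracts, model name, and field names.

```python
# A sketch of LLM-assisted extraction, loosely modeled on the kind of
# workflow Elicit automates. Assumes the `openai` package (>=1.0) and an
# OPENAI_API_KEY environment variable; abstracts, model, and fields are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

abstracts = [
    "In a randomized trial of 120 adults with GAD, 12 weeks of CBT reduced ...",
    "This cohort study followed 300 adolescents prescribed SSRIs over ...",
]

def extract_findings(abstract: str) -> str:
    """Ask the model to pull structured fields out of a single abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model could be substituted
        messages=[
            {
                "role": "system",
                "content": (
                    "From the abstract, extract: population, intervention, "
                    "outcome, and sample size. Reply as 'field: value' lines; "
                    "write 'not reported' for anything absent."
                ),
            },
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

# Each row of the eventual summary table is one abstract's extracted fields.
for abstract in abstracts:
    print(extract_findings(abstract), "\n")
```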

Ethical Considerations: Sound research practices must still apply, ensuring the information retrieved is evidence-based, peer-reviewed, and handled in a manner sensitive to privacy regulations such as HIPAA. Questions of copyright ownership over ChatGPT output must also be considered.

2. Personalized Case Analysis, Diagnosis & Treatment Plans: AI’s Tailored Approach

Here’s a more detailed explanation of how psychotherapists can use AI:

  • Data Collection and Analysis
    • Collecting extensive patient data, including medical history, psychological assessments, previous treatment experiences, medication history, and demographics.
    • Using natural language processing (NLP) to extract relevant information from clinical notes, interviews, and questionnaires.
    • Using AI to incorporate structured data such as diagnostic codes (ICD-10) and desired treatment outcomes.
    • Creating tailored treatment strategies based on individual patient data to optimize therapy approaches and medication regimens.
  • Ethical Considerations.
• Under no circumstances should output from currently available ChatGPT models be used verbatim, that is, "as is," as the sole basis for rendering a diagnosis. AI's role in healthcare should be limited to augmenting clinical diagnostic procedures, particularly for complex diagnoses that demand a multi-faceted approach combining human expertise and machine-learning algorithms. If in doubt about the appropriateness of using ChatGPT-assisted diagnosis in your clinical care or research, seek the direction of your state or national associations and the legal advice of a qualified attorney. A Telehealth.org CME and CE course is also available for guidance.
    • Ensuring responsible use of ChatGPT diagnosis requires ongoing caution and scrutiny to mitigate its inherent biases and limitations. Ethical deployment in healthcare must remain a priority, with human judgment and robust regulatory guidelines at its core. 
    • See my demonstration article on the ethics of using ChatGPT-4 for diagnostic brainstorming and possible treatment plans.
  • Developing Early Diagnostic Possibilities Based on Clinical Guidelines and Other Consensus Documents
• These chatbots can be given established clinical guidelines or consensus documents and asked to adjust treatment plans to comply with those guidelines. Clinical guidelines are publicly available from the leading national professional associations, the scientific literature accessed through low-cost or free journal cataloging websites, and specialty organizations formed to address particular disorders.
    • You can also “brainstorm” with chatbots to explore alternative diagnoses and identify which additional information to collect to arrive at a definitive diagnosis.
  • Ethical Considerations. All protected health information (PHI) must be meticulously removed before uploading any prompts (see the de-identification sketch after this list). In addition, full transparency must be given to clients and patients regarding AI's role in their diagnosis. Attention must be paid to the strong biases inherent in AI to ensure that it does not perpetuate existing healthcare inequalities. HIPAA privacy and copyright laws must also be followed. These requirements take time and attention, so practitioners are strongly advised to attempt these activities only after due training.
  • Analyzing Clinical Data for Diagnostic Possibilities
    • Programs such as OpenAI's ChatGPT, Bard, Monica, and others can analyze behavioral health issues and suggest potential diagnoses from inputs ranging from short behavioral descriptions to longitudinal patient data sets.
    • They can be queried for signs of substance use, self-harm, mania, depression, suicidality, and more.
    • They can incorporate extensive patient data, including medical history, psychological assessments, and patient demographics.
    • They can extract relevant information from clinical notes, interviews, and questionnaires.
    • They can be instructed to incorporate structured data such as diagnostic codes (ICD-10), medication history, and desired treatment outcomes.
  • Personalized Treatment Plans
    • These chatbots can develop tailored treatment plans to meet individual patient needs, considering diagnoses, client or patient preferences, comorbidities, and responses to previous treatments.
  • Ethical Considerations: Legal and ethical standards for patient privacy, autonomy, and informed consent must be upheld. Free ChatGPT systems often publicly announce in their Terms and Conditions that they own all information entered into their systems. Some ChatGPT programs allow the user to pay a monthly fee for the right to own the information they enter. However, until more extensive case law is developed, the extent to which these systems adhere to federal or state healthcare privacy laws, or even to their own privacy promises, remains to be seen.
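
As referenced in the list above, here is a minimal, illustrative de-identification pass in Python, to be run before any text reaches an external AI service. These regex rules and placeholder tokens are my own illustration: they catch only obvious identifiers and are a starting point, not a validated de-identification workflow or a guarantee of HIPAA compliance.

```python
# A minimal, illustrative scrubbing pass to run BEFORE sending any clinical
# text to an external AI service. Regex rules like these catch only obvious
# identifiers (listed names, phone numbers, dates, record numbers, emails);
# they do NOT substitute for a validated de-identification workflow.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),      # phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),       # calendar dates
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),  # record numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
]

def scrub(note: str, patient_names: list[str]) -> str:
    """Replace known identifiers in a clinical note with placeholder tokens."""
    for name in patient_names:
        note = re.sub(re.escape(name), "[NAME]", note, flags=re.IGNORECASE)
    for pattern, token in REDACTION_RULES:
        note = pattern.sub(token, note)
    return note

note = "Jane Doe (MRN# 48213, 555-123-4567) reported low mood since 3/2/2024."
print(scrub(note, patient_names=["Jane Doe"]))
# -> "[NAME] ([MRN], [PHONE]) reported low mood since [DATE]."
```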

Clients and patients also have a right to fully understand how AI was utilized in crafting their treatment plans. AI systems may display disclaimers and notifications reminding users (psychotherapists) that AI-generated treatment plans are supplementary and not a replacement for clinical judgment, but the therapist retains the duty to inform the client or patient. Such disclosures are rightfully made as part of the informed consent process and duly noted in the informed consent agreement. Online, dynamic informed consent would also be appropriate, with any ongoing changes to the initially disclosed processes communicated as they occur.

On a clinical note, if you feel anxious about making full disclosures about using AI, including at intake, that anxiety may signal that it is premature to use AI in that person's care. Psychotherapists have long enjoyed the benefit of public trust. We must protect that trust at every turn.

Therapists who work for digital employers that use AI and natural language processing in any form, including ChatGPT, should fully inform clients and patients that their data is being collected. Such disclosures should be visible to the consumer, not simply buried in a website footer link or other out-of-the-way location.

The issue of informed consent has been sidestepped by many employers for years, but since the crackdown on BetterHelp and several other companies earlier this year, we can expect more scrutiny of disclosures made to consumers buying services from these websites. Clinicians will continue to be responsible by licensure for informing clients of these privacy issues, and for knowing about them in the first place. Claiming ignorance of the inner workings of a digital employer is a poor defense in the eyes of the law.

It is wise to ask questions and obtain written assurances of appropriate privacy safeguards in an employment contract. It is also good practice to routinely review the materials consumers see on digital therapy websites to ensure that your services match all public descriptions. See Accepting Telehealth Jobs: 5 Big Legal & Ethical Mistakes to Avoid for more information.

Regardless of what happens behind the scenes, the client or patient has the right to know that their data is being processed by any form of AI and the privacy or other risks involved. I will discuss this issue more fully in my next article about the use of AI by large digital employers.

3. Client & Patient Education

ChatGPT can, for example, generate easy-to-understand explanations of relationship dynamics and self-care techniques, and suggest various courses of action for dealing with low self-esteem. Depending on the platform used, it can offer advice and consolation in English and many other languages, further aiding patient education.
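
As an illustration, here is a short Python sketch of how a clinician might draft a patient handout for later review. The model name, reading-level target, and prompt wording are assumptions made for this example, not a recommended clinical configuration.

```python
# A sketch of drafting a patient-education handout for clinician review.
# The model name, reading-level target, and prompts are assumptions made
# for this example. Requires the `openai` package and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()

def draft_handout(topic: str, language: str = "English") -> str:
    """Return a DRAFT handout; a clinician must review it before any use."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Write a one-page patient handout in {language} at roughly "
                    "a 6th-grade reading level. Use a plain, supportive tone. "
                    "Do not give medical advice; encourage readers to discuss "
                    "questions with their clinician."
                ),
            },
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(draft_handout("building self-esteem through small daily practices"))
print(draft_handout("healthy communication in relationships", language="Spanish"))
```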

Ethical Considerations. As powerful as ChatGPT might be at summarizing and reorganizing existing information, it is not human and can produce misinformation. Clinicians must review and approve all patient materials before releasing them to the public, as the professional, not the AI, will be held responsible for misinformation offered to the recipient. Legal cases have already appeared, and AI is not the party being held responsible for its misinformation. Clinicians would do well to carefully examine and heed the many disclaimers offered within the information generated by ChatGPT; AI models seem well-versed in disclaiming responsibility for what the user does with the information offered.

Conclusion: The Ethical and Legal Dimensions of AI and Mental Health Care

In an era where digital innovation is revolutionizing healthcare practices, embracing advancements should not come at the cost of ethical integrity or legal compliance. To take a deeper dive into these critical areas and have your questions answered by a leading medical attorney specializing in AI, I encourage you to attend our introductory class, Therapist AI & ChatGPT: How to Use Legally & Ethically. Enroll today to earn valuable CME or CE credits toward your licensure renewal and arm yourself with the knowledge to practice responsibly at the intersection of AI and mental health care.

And stay tuned to Telehealth.org, where we are dedicated to keeping you informed. One of our next articles will focus on the dangers of AI in psychotherapy, along with a review of a book by one of the most prominent authors addressing the pros and cons of AI, who has issued a worldwide alert about the dangers of sitting passively by. I hope to see you in our ChatGPT course!

