
Psychotherapy Notes in the Age of AI: Considering ChatGPT Healthcare Ethical Applications

MARLENE MAHEU, PhD

August 17, 2023 | Reading Time: 7 Minutes

AI technologies such as ChatGPT are increasingly becoming central to many healthcare back-office processes, such as organizing and filing patient notes, expediting insurance claims, and delivering records promptly. As AI & ChatGPT behavioral healthcare applications surface, pockets of misinformation and vulnerability are exposed. For clinicians, these vulnerabilities may include a limited understanding of the legal and ethical complications involved in telehealth and software use. Too many practitioners and their organizations have operated as if they can trust technology vendors. Many have erroneously assumed that online employers and software companies provide technology that allows practitioners to comply with legal and ethical mandates. As witnessed by the recent Federal Trade Commission (FTC) filings against multiple behavioral startups, including BetterHelp, and FTC guidance related to illegal data use in apps, clinicians need more information. The article below outlines the quickly mushrooming ethical problems in ChatGPT healthcare applications.

ChatGPT is here to stay; it has received notable attention from groups such as the World Health Organization and is already reshaping many industries. It will, therefore, increasingly be used across healthcare systems to save time and energy, streamline service delivery, reduce disparities, and improve care. The article below can help shed light on navigating the resulting legal and ethical challenges to avoid needless complications.

AI & ChatGPT Healthcare Considerations in Record-Keeping

Recent concerns voiced on websites such as ScienceBlog about deploying AI in sensitive areas such as patient record-keeping raise questions about how closely organizations and clinicians are scrutinizing these tools. As was seen in the recent layoffs of therapists by Headspace, using technology to serve behavioral populations can present serious challenges when a company’s bottom line is at risk. If patient care is to shape and drive the adoption of AI, clinicians and their organizations must be able to ask the right questions to side-step the dangers involved.

Using Audio Recordings and ChatGPT for Psychotherapy Notes to Enhance Engagement?

Consolidating clinical notes and summarizing patient visits with ChatGPT has been gaining popularity in medical care. This approach allows doctors to audio- or video-record patient encounters, then use ChatGPT to transcribe and reorganize the resulting notes. The outcome is a more efficient, standardized, consistently formatted documentation process that can reduce errors and significantly enhance patient care.

However, warnings are surfacing, such as in the ScienceBlog article cited above, about how protected health information (PHI) must be handled to comply with HIPAA regulations. Legal and ethical compliance in ChatGPT healthcare applications requires licensed professionals, employers, and vendors alike to manage sensitive data responsibly.
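To make the workflow described above concrete, here is a minimal sketch, assuming the openai Python client (v1.x) and illustrative model names; it is an illustration of the process being discussed, not an endorsement. As the warnings above suggest, both the audio and the text leave the clinician’s environment, so anything like this should only be attempted under a Business Associate Agreement (BAA) or with fully de-identified data.

```python
# Minimal sketch of the record -> transcribe -> reorganize workflow described above.
# Assumes the openai Python client (v1.x); model names are illustrative.
# Caution: audio and text are sent to a third-party server, so this should only
# be used under a Business Associate Agreement or with fully de-identified data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe a recorded (and consented) dictation or session summary.
with open("deidentified_dictation.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Ask the model to reorganize the transcript into a structured note.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Reformat the following dictation into a structured, "
                    "grammatically correct progress note. Do not add facts."},
        {"role": "user", "content": transcript.text},
    ],
)

print(response.choices[0].message.content)
```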

Guidance for Behavioral Healthcare?

It is safe to assume that some behavioral clinicians may also enjoy the benefits of recording sessions, or at least typing or pasting their session notes into ChatGPT, to have them organized, formatted, and grammatically corrected. They may or may not be aware of HIPAA issues and, depending on circumstances, may or may not choose to pay attention to them. The oversight may seem innocent, particularly if the clinician works for a digital employer.

But let’s stop for a moment. Are employers obligated to inform their licensed professionals which software is being used on their platforms? Are clinicians being told if ChatGPT is running in the background? Such issues are typically not included in patient contracts.

Digging Deeper

The following additional questions occur to me. (You may have a few to add.)

  1. Which national behavioral association or oversight organization, if any, currently offers guidance on such ChatGPT practices for clinicians?
  2. Looking specifically at record keeping, if oversight were provided, what types of clients’ private information should be audio or video recorded for later transcription by ChatGPT? 
  3. Will clients and patients tolerate such recording of their innermost thoughts?
  4. How will introducing such processes impact the therapeutic relationship?
  5. How will the “psychotherapy note” EHR exceptions be impacted by a shift to using ChatGPT? What’s being done on this front? By whom?
  6. Most importantly, if such guidance does not yet exist, how long will clinicians be expected to wait for national associations or licensing boards to issue guidance for those seeking to practice legally and ethically? (One may note that many licensing boards have yet to provide clear guidance for telehealth, which started as telepsychiatry in 1959.)

Ringing the Alarm

Forgive me for ringing the alarm bell once again. Based on the past 30 years of experience, I regretfully predict that it will take too long for us to get too little guidance. I will add that the risk of not acting immediately is that too many people will be harmed by well-intentioned but uninformed clinicians and their organizations who have relied on the “authorities” for guidance. The issues are upon us, and the time to act is now.

Clinicians need to have the opportunity to participate in fact-based, forward-thinking, collegial discussions before they buy software that secretly shares PHI with third parties and before prominent behavioral health employers begin using ChatGPT behind the scenes to increase profits.

Digging Yet Deeper: Understanding the Privacy Risks of ChatGPT and HIPAA Compliance

When protected health information (PHI) is entered into ChatGPT, it becomes housed on OpenAI’s servers, which may or may not meet HIPAA compliance standards. Any data movement outside a HIPAA-compliant server may technically constitute a data breach, presenting legal and financial risks. Consequences could include investigations and fines by the Department of Health and Human Services (HHS), along with the inherent risks of data residing on third-party servers.

While OpenAI allows clinicians to opt out of having their information used to train ChatGPT, the mere act of transferring data off a HIPAA-compliant server may result in a HIPAA violation. The implications are clear: AI technologies like ChatGPT must be approached cautiously in healthcare, ensuring full compliance with all relevant legal and ethical standards, including HIPAA.

HIPAA’s Requirements for De-Identification of Protected Health Information

The Health Insurance Portability and Accountability Act (HIPAA) specifies 18 identifiers considered Protected Health Information (PHI) if they can be used to identify an individual. Here’s the concise list of these identifiers:

  1. Names
  2. All geographical subdivisions smaller than a state
  3. Dates (except year) directly related to the individual
  4. Phone numbers
  5. Fax numbers
  6. Email addresses
  7. Social Security numbers
  8. Medical record numbers
  9. Health insurance beneficiary numbers
  10. Account numbers
  11. Certificate/license numbers
  12. Vehicle identifiers and serial numbers, including license plate numbers
  13. Device identifiers and serial numbers
  14. Web URLs
  15. Internet Protocol (IP) addresses
  16. Biometric identifiers, such as fingerprints and voiceprints
  17. Full-face photographs and any comparable images
  18. Any other unique identifying number, characteristic, or code.

While most healthcare providers commonly understand the prohibitions on names and addresses, some clinicians may not know where to look or whom to ask to ensure that web URLs, IP addresses, or photographs are expunged from patient records. They also may not know what to make of the last identifier (#18 above), yet they remain responsible for removing “any” such elements: “Any other unique identifying number, characteristic, or code.”

Removing these identifiers is part of de-identifying information to comply with HIPAA’s Privacy Rule. For more in-depth information, refer to the US Department of Health & Human Services guidelines on de-identification of protected health information available at HHS.gov.
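To make the de-identification requirement concrete, below is a minimal, illustrative sketch of scrubbing a few of the 18 identifiers from text before it leaves a clinician’s environment. The regular expressions are simplified examples only, not a validated de-identification tool; note that the sketch deliberately shows a name slipping through, which is exactly why pattern matching alone cannot satisfy the Privacy Rule.

```python
import re

# Simplified example patterns for a few of HIPAA's 18 Safe Harbor identifiers.
# Real de-identification requires far broader coverage (names, geography,
# dates, device IDs, etc.) and should follow the HHS guidance cited above.
PHI_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "URL":   re.compile(r"https?://\S+"),
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders such as [EMAIL]."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client J. Doe (DOB 04/12/1985, jdoe@example.com, 555-867-5309) reports..."
print(redact(note))
# -> Client J. Doe (DOB [DATE], [EMAIL], [PHONE]) reports...
# The name slips through: regex rules alone cannot satisfy Safe Harbor.
```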

It is, therefore, the shared responsibility of clinicians, organizations, and vendor communities to ensure that all software use protects rather than compromises the integrity of patient care.

Identifying ChatGPT Healthcare Uses for Psychotherapy Notes

So as not to appear to be against ChatGPT (I am a strong advocate for using it ethically), I tested the educational benefits of using ChatGPT to answer this question: “Can I Use ChatGPT to Write Psychotherapy Case Notes?”

The following ChatGPT response was produced in less than 60 seconds:

It is important to carefully consider the ethical, legal, and practical implications of using AI models like ChatGPT for writing psychotherapy patient case notes.

  • Confidentiality and Security Concerns. Patient notes in psychotherapy must remain confidential and secure, in line with the Health Insurance Portability and Accountability Act (HIPAA) in the United States. Any transmission or storage of this data through external systems may expose it to potential risks. The use of AI models might not comply with HIPAA or other relevant regulations (U.S. Department of Health & Human Services, 2021).
  • Ethical Considerations. Utilizing AI to generate patient notes may lead to a loss of nuance and individual understanding that a trained mental health professional would provide. This might impact the quality of care and ongoing assessment (American Psychological Association, 2016).
  • Accuracy and Reliability. While AI models can be highly efficient, they may not always accurately capture the subtleties and complexities of psychotherapy interactions. Misinterpretations or omissions may affect the quality of care and decision-making processes (Luxton, 2014).
  • Legal Liability. If there were an error or omission in the patient’s records, depending on how and why it occurred, there could potentially be legal ramifications. Healthcare providers are held to high standards of care, and the use of AI might not meet these standards (Blease et al., 2018).
  • Provider-Patient Relationship. The use of AI might affect the therapeutic relationship. Patients may have concerns about the impersonal nature of note-taking and potential privacy risks, which could affect trust and engagement in therapy (Rudin et al., 2020).

Conclusion: While AI models can offer efficiency and convenience, their use in clinical settings for psychotherapy case notes should be approached with extreme caution. Consultation with legal counsel, adherence to regulatory requirements, careful risk assessment, and consideration of the potential impact on the therapeutic relationship are essential steps before deciding to employ such technology.

Disclaimer: This response does not constitute legal or professional advice. Consultation with a legal or professional expert in your jurisdiction is strongly recommended to ensure compliance with all applicable laws and regulations.

References:

  1. U.S. Department of Health & Human Services. (2021). Health Information Privacy. Retrieved from HHS.gov
  2. American Psychological Association. (2016). Ethical Principles of Psychologists and Code of Conduct. Retrieved from APA.org
  3. Luxton, D. D. (2014). Recommendations for the Ethical Use and Design of Artificial Intelligent Care Providers. Artificial Intelligence in Medicine, 62(1), 1-10.
  4. Blease, C., Kaptchuk, T. J., & Bernstein, M. H. (2018). Artificial Intelligence and the Future of Primary Care: Exploratory Qualitative Study of UK General Practitioners’ Views. Journal of Medical Internet Research, 20(3), e128.
  5. Rudin, C. et al. (2020). There’s No Such Thing as a General-purpose AI Model. Harvard Business Review. Retrieved from HBR.org

What Do You Think of the ChatGPT Response?

From where you sit, is the above ChatGPT response accurate? Is anything missing?

If you have yet to think about AI and its current form, ChatGPT, now may be the time to get your feet wet and see what it can legally and ethically do to make your workdays more manageable and productive, short of having it transcribe your notes if your software is using a non-HIPAA-compliant server.
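One low-risk way to get your feet wet is to ask general, PHI-free questions, much as the test described earlier in this article did. A minimal sketch follows, again assuming the openai Python client and an illustrative model name.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A general, PHI-free question -- no client information appears in the prompt.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "What legal and ethical issues should a psychotherapist "
                    "consider before using AI tools for case documentation?"},
    ],
)
print(response.choices[0].message.content)
```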

HIPAA requires ongoing workforce training, and emerging technologies such as AI should be integrated into annual HIPAA and privacy instruction. As with many emerging technologies, this is a dynamic space in which health systems must continually work to align innovation with regulation and ethical practice.

Training Activity for CME and CE Credit Hours

To discuss these and other relevant issues, join me in a collegial discussion offering CME and CE credit hours with one of Telehealth.org’s early founders, Joseph McMenamin, MD, JD.

He is returning to us for a broad-based, 2.5-hour webinar for psychotherapists who want to ask their questions about legal and ethical ChatGPT healthcare applications. The focus of the learning experience is harnessing the power of this latest AI tool to save valuable back-office time and effort – and much more. See details here.

Therapist AI & ChatGPT: How to Use Legally & Ethically

Immerse yourself in our highly-engaging eLearning program and delve into the uncharted territory of Artificial Intelligence (AI) in Behavioral Healthcare!

Essential Telehealth Law & Ethical Issues

Bring your telehealth practice into legal compliance. Get up to date on inter-jurisdictional practice, privacy, HIPAA, referrals, risk management, duty to warn, the duty to report, termination, and much more!

Advanced Telehealth Regulations & Ethical Issues: Best Practices & Informed Consent

Essentials of practice guidelines published by the leading professional associations, explained with a focus on what-to-do rather than theory that leaves you empty-handed.

Telehealth Law & Ethical Course Bundle

This Telehealth Legal & Ethical Course Bundle provides the most important risk management and telehealth compliance training available anywhere to help meet telehealth requirements, regardless of the size of your telehealth service.

Disclaimer: Telehealth.org offers information as educational material designed to inform you of issues, products, or services potentially of interest. We cannot and do not accept liability for your decisions regarding any information offered. Please conduct your due diligence before taking action. Also, the views and opinions expressed are not intended to malign any organization, company, or individual. Product names, logos, brands, and other trademarks or images are the property of their respective trademark holders. There is no affiliation, sponsorship, or partnership suggested by using these brands unless contained in an ad. Some of Telehealth.org’s blog content is generated with the assistance of ChatGPT. We do not and cannot offer legal, ethical, billing, technical, medical, or therapeutic advice. Use of this site constitutes your agreement to Telehealth.org Privacy Policy and Terms and Conditions.
