ChatGPT is an artificial intelligence (AI) chatbot that uses a model of human language to hold text conversations with its users. A discussion with ChatGPT can feel as if another person were answering your questions. The rapid advancement of AI has ushered in a new era of collaboration across industries, and healthcare is no exception. AI-driven technologies are transforming how healthcare professionals diagnose, treat, and manage diseases, improving patient outcomes and increasing efficiency. Related technologies, such as Bing Copilot and large language models like BioGPT and PaLM, are also shaping healthcare's future, and acknowledging their limitations is necessary when utilizing these tools. US legislators have recently been vocal about the dangers of AI, and of ChatGPT in particular. The article below serves as an introduction for thoughtful healthcare professionals and their organizations. Widespread repercussions for mental health are also outlined.
What is ChatGPT?
ChatGPT is a form of artificial intelligence that learns communication patterns from a wide range of Internet resources. Models like it are typically trained by processing billions of digital files, learning languages and industry-specific terminology along the way, including healthcare clinician-speak, DSM diagnostic codes, and CPT codes for reimbursement. It has many other uses as well, such as solving problems in various programming languages and in mathematics.
Developed by OpenAI and launched in November 2022, ChatGPT was trained on sources such as books (including, for healthcare applications, the DSM), news, scientific journals, and Wikipedia. In its most basic form, ChatGPT is already used for routine research and writing tasks. Even so, ChatGPT goes far beyond Google's search algorithm, which retrieves an existing set of facts to answer questions; ChatGPT's responses are generated from patterns identified in available data.
ChatGPT in Education & Business
ChatGPT is already generating buzz in education for writing essays, in marketing for drafting blog posts, in business for composing emails, and in programming for writing code. Ethical issues are surfacing daily: schools are imposing sanctions on students who use ChatGPT to complete homework rather than developing their own integrative thinking and analysis skills, and professors are rightfully concerned about students using ChatGPT to write papers and essays.
Current drawbacks include an inability to cite sources accurately, which can lead to incorrect assumptions. Still, some of the newer, free ChatGPT-style systems can be used to find and compile quick summaries of research topics. For example, see Elicit.org, a free service that scans 175 million research papers to help with writing essays. If you ask it to summarize my work with a prompt such as "summarize the telehealth or telemedicine research into legal issues conducted by Marlene Maheu," it will give you a place to start. Still, it cannot be trusted to be entirely accurate.
Depending on the version of ChatGPT used, references can be collected, organized, and submitted in many formats, including APA style. Systems such as OpenAI's can offer remarkable summaries with references, but users must fact-check those sources: some cannot be located at all, even by experienced researchers. They may be valid, or they may be fabricated; it is hard to tell.
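For readers curious about what such a request looks like in practice, here is a minimal sketch using OpenAI's Python client. The model name and prompt are illustrative assumptions, not recommendations, and any references the model returns must be verified by hand, since these systems can fabricate plausible-looking citations.

```python
# Minimal sketch: asking a ChatGPT-style model for a research summary
# with APA-style references. Assumes the `openai` package is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use whatever you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the telehealth and telemedicine research into "
                "legal issues conducted by Marlene Maheu. "
                "List any references in APA style."
            ),
        }
    ],
)

print(response.choices[0].message.content)
# Caution: independently verify every citation the model produces;
# some may not correspond to real publications.
```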
Then again, publicly available ChatGPT systems have only been online for six months. It is likely that they, and the legal and ethical issues they spark, are here to stay.
What is ChatGPT in Healthcare?
ChatGPT is a hot topic at many health conferences this season. Many health IT professionals no longer ask, “What is ChatGPT?” but, “How can we prevent as yet unimagined errors?” Discussions start excitedly but quickly devolve into tales of various legal and ethical problems.
A February 13 article from New York Medical College explains that ChatGPT is an advanced AI tool that can serve many roles, especially that of a virtual assistant in its current state of evolution. It can provide real-time answers to patient questions, offer guidance on health topics, send reminders about medications or appointments, and increase patient engagement. These AI-powered chatbots can augment telehealth by reaching people who otherwise would not receive care, improving outcomes and increasing access to quality care.
Artificial intelligence (AI) applications in behavioral health have been in discussion for almost a decade. A September 2022 Telehealth.org article entitled AI in Mental Health: Benefits of Technologies for AI Mental Health outlined several expected applications of AI. In addition to clinical uses, ChatGPT can also automate many routine office tasks. It can improve decision-making by helping to identify patterns and relationships in large datasets, providing insights that would otherwise be difficult to detect.
Lethal problems at the junction between humans and machines are still surfacing from technology introduced decades ago. For instance, as reported in the March 15 Spokesman-Review, a Department of Veterans Affairs investigation confirmed four deaths linked to its recently purchased $10B Oracle Cerner EHR system, for which the agency continues to request funding. While a Senate Veterans Affairs Committee investigation found the errors unacceptable, one has to worry about the potential harm from even newer healthcare software.
Despite these worries, AI technology has been developing for decades. Now that it is publicly available for free, the debate mixes excitement that ChatGPT will revolutionize healthcare with fear that it poses new, as yet unaddressed dangers. Relying on AI-generated suggestions rather than human judgment is fraught with concern; people are notorious for making mistakes too, but on a much smaller scale. Escalate such risk to AI systems that write patient records or psychotherapy notes, and the dangers can increase exponentially.
Beyond the concerns about healthcare systems already facing claims of HIPAA violations for using protected health information to market their hospitals, professionals worry about patients and clients being harmed by turning to ChatGPT for quick answers to their health concerns. In the best circumstances, it can provide instant information and support to patients within professionally managed systems. In other cases, consumers requesting free, generic ChatGPT healthcare recommendations can all too easily receive misinformation.
ChatGPT Concerns & Questions
While it is easy to get excited about innovation, the cautious professional may wish to pause to consider possible negatives. Several are offered below, and you are invited to add more in the comment section.
Accuracy and Reliability. The greater the consequences of error, the greater the need for ChatGPT to be trained on clean datasets. Who will decide which database will be accessed for behavioral healthcare? Nine professional groups contribute to the existing datasets (i.e., behavior analysis, addictions, counseling, marriage and family therapy, nursing, psychiatry, psychiatric nursing, psychology, and social work).1
Liability & Accountability. Who will be responsible if dataset corruption injures a vulnerable patient? As seen in recent Federal Trade Commission investigations of BetterHelp and GoodRx, many employers rely on such companies to act appropriately, only to learn that they have been selling sensitive patient information online. Recent articles have seldom mentioned the impact on practitioners who accepted employment from such companies, or on the people they served while employed. Readers must also wonder whether the employers who used these services face liability. Most recently, US hospital systems have come under scrutiny for betraying patient trust in the same way by sharing sensitive health information with marketing companies such as Google, Facebook, and Twitter.
Workforce Impact. As a previous Telehealth.org article, What is Artificial Intelligence in Healthcare? Will It Replace Your Job?, suggested, AI technology's impact on the workforce is directly proportional to how rapidly today's healthcare jobs can be automated. The issue is not what it can do today, five months after its public release.
Need for Oversight. AI leaders are calling for more oversight of AI systems. At last week's HIMSS conference, Thomas F. O'Neil was quoted as saying that healthcare boards may need to fill at least one board seat with an experienced AI professional to oversee AI-related programs and processes.
The real issue is what it will do in healthcare in five, ten, and fifteen years. If clinicians working for digital text-messaging employers are starting to wonder whether their daily texts to clients are being used as fodder to feed subsequent iterations of ChatGPT, the question is worth considering. Another concern is whether existing biases in laws, regulations, and guidelines will perpetuate health inequity and quality issues. Many professionals in power can be intolerant of pioneers, mistaking them for heretics; yet without pioneers' vision to shed light on areas of benefit rather than avoid them, professional groups will continue to diminish in power and presence.
Forward Trajectory of the Information Age. Technology marches on, with or without us. The question for every reader is how to best prepare for a future where healthcare, as we know it today, will be eclipsed by technologies that offer different types of care.
Struggling Economy. Macroeconomic headwinds are reducing profits for healthcare startups and healthcare systems alike. Startups face a sharp downturn in operating income, forcing them to be more prudent about digital investments. Digital health startups are dealing with the worst funding environment in five years. An innovation like ChatGPT is particularly interesting because of its multi-faceted range of applications; companies today are less likely to spend their limited dollars on software that can only deliver one service or solve a single problem in a health system.
Exploitation for Financial Gain. As many unsuspecting professionals will attest, learning that innovation can be a double-edged sword can be disconcerting. Whether it is Big Brother, faceless cybercriminals, or the neighbor down the street hacking into one's Wi-Fi, the idea of machines managing day-to-day patient interactions can be anxiety-provoking, and rightfully so.
Privacy & Security. Users of free ChatGPT websites will find that the terms, conditions, and privacy files at the bottom of such homepages contain provisions that argue against using ChatGPT in everyday clinical practice. When the Federal Trade Commission has had to fine digital health companies GoodRx and BetterHelp for allegedly sharing millions of consumers' personal health information (PHI) with advertisers like Facebook and Google, what can we expect from yet more innovation? And it doesn't stop there. Cerebral was investigated by the Office for Civil Rights for alleged HIPAA violations. Other reports have shown that many healthcare apps routinely change security features without providers' awareness, and others legally sell clients' sensitive data to data brokers. Most recently, 98% of US hospital systems have reportedly shared patient PHI for marketing purposes. For the everyday clinician considering ChatGPT to clarify patient records or session notes, caution is clearly in order.
Using ChatGPT to manage email can also be problematic. When installing a ChatGPT-based tool to improve email writing, the user will note that the company offering the service will likely mention during setup that you are granting it access to your email inbox. Suppose that inbox contains the names of your clients or patients, their email addresses, and potentially other protected health information (PHI). In that case, you can easily violate privacy laws, including HIPAA in the US and PIPEDA in Canada. The correct way to proceed is to use a paid ChatGPT service that will sign a Business Associate Agreement. Otherwise, even the bank account numbers sent to you by an impulsive family member may be up for grabs.
Similarly, if you use it for web searching or other private activity, it can read, store, and often claim ownership of the content you insert into its pages. Furthermore, some such systems also stipulate in their Terms and Conditions that you implicitly indemnify them against any legal consequences they face from "licensing" the material you insert into their systems.
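As a purely illustrative precaution, and emphatically not a compliance solution, the sketch below shows how obvious identifiers such as email addresses and phone numbers might be scrubbed from a draft before it is pasted into any third-party AI service. The patterns are naive assumptions: regular expressions cannot catch names, addresses, or other free-text identifiers, so this is no substitute for a Business Associate Agreement with a paid service.

```python
# Illustrative only: replace obvious identifiers (emails, SSN-like
# numbers, phone numbers) with placeholders before text leaves your
# machine. Regex scrubbing is NOT a HIPAA compliance solution.
import re

REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),        # SSN-like numbers
    (re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}"), "[PHONE]"),  # US phone numbers
]

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Please reply to jane.doe@example.com or call (555) 123-4567."
print(scrub(draft))  # -> "Please reply to [EMAIL] or call [PHONE]."
```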
Summary & Invitation
Many technology naysayers gained a newfound appreciation for technological innovation during COVID's forced adoption of telehealth. As clinicians and their organizations are beginning to see, technology continues to develop faster than most clinicians realize. The challenge will be for clinicians and their organizations to learn how to use AI and ChatGPT responsibly before uninformed clinicians discover they have been unwitting yet active participants in harming their clients and patients. When clinicians embrace the need to be informed about their technology, we can continue our mission of helping people in need, but with more reach, power, and sensitivity.
In Ground Truths, Eric Topol recently published an article titled, “When patient questions are answered with higher quality and empathy by ChatGPT than physicians.”
If you take a moment to reflect on Dr. Topol’s article title, you might start to wonder if ChatGPT could replace psychotherapists. I recently posed this question to a highly techno-savvy group of my colleagues. Everyone went silent. Then someone switched the topic.
It is up to us to decide how clinicians will deal with technology. If this article has inspired you to do more, join a group of us at the Coalition for Technology in Behavioral Science, a non-profit that I founded years ago. We have a place for you at our table.
1 The Journal for Technology in Behavioral Science is one of the few scientific journals founded on principles of interprofessionalism. It was founded by the Coalition for Technology in Behavioral Science and is published by Springer Health.
The highly important issues that are identified in your blog are helpful in alerting clinicians to be very cautious in using these new tools.
Thank you for your note of caution, Ken. From the widget at the top of the page, we can see that thousands of professionals have read this article in the last two days. That alone tells us that many professionals are taking this new technology seriously. We at Telehealth.org will keep carrying news about ChatGPT as well as other technological developments in healthcare.