
What is Artificial Intelligence in Healthcare? Will It Replace Your Job?


October 26, 2021 | Reading Time: 5 Minutes


Artificial intelligence (AI) performs tasks historically accomplished by human beings, in less time and at a fraction of the cost. This article is the first in a series highlighting the importance of AI to all clinicians and organizations planning for long-term growth and revenue cycles.

What is Artificial Intelligence in Healthcare?

AI involves the computerization of intelligent functions and processes. The significance of AI in the evolution of humanity was described by Sundar Pichai, Google’s CEO, in a July 2021 BBC podcast. Pichai stated, “The progress in artificial intelligence, we are still in very early stages, but I viewed it as the most profound technology that humanity will ever develop … and we have to make sure we do it in a way that we can harness it to society’s benefit.” He continued, “You know, if you think about [fire or electricity or the internet], it’s like that, [but I think even more profound].” [Emphasis added.]

The general public, as well as much of the professional community and its organizations, may be inadequately aware of the massive ethical issues involved, or may be remaining relatively silent about them. Mr. Pichai’s comments in the BBC podcast came as Google was being criticized for its AI ethics; after several key staff departures, Google announced that it was doubling its AI ethics team to address the heated controversy.

Alarmed by the international use of AI in healthcare, the World Health Organization has regularly updated its guiding principles and ethics over the past several years. The American Medical Association has also taken a firm stand on artificial intelligence in healthcare, writing a letter to Congress on June 23, 2021, that outlines how AI algorithms can and do exacerbate social determinants of health.

The current article is written as an invitation to you, reader, to engage with us and each other in an ongoing conversation about AI in behavioral healthcare, where the people we serve are among the most vulnerable. Let’s share perspectives, resources, and suggestions as we all enter this next phase of human development.

The Efficiencies of Artificial Intelligence in Healthcare

From tracking disease as it spreads across the country, to insurance enrollment, to disease modeling, to direct care, AI is revolutionizing the use of large datasets and treatment options, saving precious time by delivering services earlier in the trajectory of a person’s health challenges. AI is being used to follow disease states across the country and to report outbreaks of flu, COVID, and other contagious diseases. It can predict trends and patterns, proving itself helpful with resource allocation before supply and workforce shortages become catastrophic.

At the Centers for Medicare and Medicaid Services (CMS), AI is used in the enrollment process by streamlining the steps for monitoring membership, managing member processes, and reinstating health plans. Blue Shield of California and Google Cloud are collaborating to launch an AI approach to paying providers, aimed at processing members’ claims in real time. AI is also being used to screen multiple factors to qualify people for Social Determinants of Health programs, providing help with transportation, food aid, internet access, and other important resources that can significantly improve health status and quality of life.

Artificial intelligence is also being used by insurance companies to provide information and recommendations at the point of care, as advanced datasets identify clients with special statuses, such as those who need preventive care or people with chronic illnesses. Insurers also use AI to identify people who are less likely to pay for their healthcare. AI has dramatically advanced the individualized-care movement by improving patient data analysis, diagnosis, and treatment recommendations. Organizations are now able to access data that had previously been restricted to paper-and-pencil record-keeping or locked behind proprietary data formats and interfaces. See the U.S. government’s recent Fast Healthcare Interoperability Resources (FHIR) regulations, which require the simplification of data flow between payers, providers, and patients to improve access at the point of care, where they need it the most. AI can also flag patients who are “high fliers,” those who cost the insurer more than their share of payouts, and those who are less likely to pay for services used. How such people are “helped” by AI is a question that may be worth asking.

Artificial Intelligence in Behavioral & Mental Health

When used to improve diagnosis in behavioral health, AI can sort patterns in verbal output (spoken or written words), tone of voice, facial expressions, body language, and more. While some professionals may consider the tracking of these behaviors to be a real-world manifestation of being watched by “Big Brother,” the fact is that the digital world is increasingly scanning for such behaviors through several different forms of AI. Exactly how and why companies use AI in the delivery of their healthcare services is not easily determined, as evidenced by the earlier reference to Google doubling its AI ethics team.

Artificial Intelligence in Healthcare Ethics?

Many professionals have been deeply concerned about the ethics of AI, including David Luxton, PhD, who published a book as early as 2016 to address AI ethics.1 In a medical decision support example published in the AMA Journal of Ethics, Luxton described a hypothetical case in which a physician unsuccessfully treated a woman. The ethical issue examined was whether the physician would be remiss in not consulting an AI system such as Watson, given the complexities of data that can span an individual’s lifetime:

For example, IBM has suggested that a person will generate 1 million gigabytes of health-related data in a lifetime—which is equivalent to more than 300 million books. Given the amount and complexity of patient data, physicians would be remiss not to consult intelligent systems such as Watson. In the future, it may very well be considered unethical (and create liability) not to consult Watson or intelligent systems like it for a second opinion, assuming that such systems prove effective in what they purport to do. 2

Psychotherapy Practice Valuation vs. AI Valuation

The financial reality of AI is not to be ignored, no matter how distasteful behavioral healthcare providers may find it. While these same clinicians often chafe at the thought of placing a monetary value on their practices, many are surprised by the actual numbers when they sell their practices to retire. The sale price of a behavioral health practice is determined by a mix of factors but is largely influenced by the practice’s income over the previous year. Such a “practice valuation” is considered to be selling at a “multiple” of 1, meaning there is a 1:1 ratio between last year’s income and the current sales price. The same is true for the new kids on the block: large and small telebehavioral health service companies that hire clinicians to deliver clinical care. In effect, a $10M revenue stream can roughly fetch a price of $10M at sale or acquisition.

On the other hand, a successful software-as-a-service (SaaS) company using AI without clinicians can be valued at a multiple of 6 to 10. That equates to a 1:6 to 1:10 ratio of income to selling price. In effect, a $10M revenue stream can fetch a sales price of $60M to $100M at sale or acquisition. It is also worth noting that healthcare applications of artificial intelligence are big business: AI in healthcare alone was valued at approximately $600 million in 2014 but is now projected to reach $150 billion by 2026. How long will some of the larger companies building AI capabilities continue to employ clinical staff? Given expensive inefficiencies and management complications, how easily clinical staff will be replaced within a few decades is the multi-billion-dollar question.
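The valuation arithmetic above can be sketched in a few lines of Python. The revenue figure and multiples are the illustrative numbers from this article, not market data, and `sale_price` is a simplification of what is, in practice, a multi-factor appraisal:

```python
def sale_price(last_year_revenue: float, multiple: float) -> float:
    """Rough estimate: last year's revenue times the valuation multiple."""
    return last_year_revenue * multiple

revenue = 10_000_000  # $10M annual revenue

# Clinician-staffed practice or telebehavioral health company: multiple of ~1
practice = sale_price(revenue, 1)

# Clinician-free SaaS company using AI: multiple of 6 to 10
saas_low = sale_price(revenue, 6)
saas_high = sale_price(revenue, 10)

print(f"Practice: ${practice:,.0f}; SaaS: ${saas_low:,.0f}-${saas_high:,.0f}")
```

Running this prints a $10,000,000 estimate for the clinician-staffed business against a $60,000,000 to $100,000,000 range for the SaaS business, the 1:6 to 1:10 gap the article describes.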


  1. Luxton, D. D. (2016). An introduction to artificial intelligence in behavioral and mental health care. In Artificial intelligence in behavioral and mental health care (pp. 1–26). Academic Press.
  2. Luxton, D. D. (2019). Should Watson be consulted for a second opinion? AMA Journal of Ethics, 21(2), 131–137.

1 Comment

Patricia Christopher
1 year ago

It is difficult to determine where AI will be of advantage in mental health care, as many of the issues of care deal with the behaviors of individuals. From the initial point of care, when information is gathered between the practitioner and the client/patient, knowing how to glean the important aspects of the client/patient’s issue is an essential part of diagnostics.
AI, if used, could be a “secondary” attachment to transcribe the information gleaned, thereby saving time and expense and increasing productivity for the practitioner, agency, and client/patient.
As with any change in the workplace, there has to be significant trust between the developer (change agent) and those who are impacted by the change.
