
AI’s Future in Behavioral Health: Relevant Perspectives from Suleyman’s Sobering New Book, “The Coming Wave”

MARLENE MAHEU, PhD

September 20, 2023 | Reading Time: 7 Minutes


The Confluence of Technology and Human Existence

I am convinced we’re on the cusp of the most important transformation of our lifetimes. The coming wave is going to change the world…. Ultimately, human beings may no longer be the primary planetary drivers, as we have become accustomed to being. We are going to live in an epoch when the majority of our daily interactions are not with other people but with AIs. This might sound intriguing or horrifying or absurd, but it is happening.

Mustafa Suleyman

“Artificial Intelligence is not merely a field of computer science; it is a factor shaping the terrain on which we must solve humanity’s grand challenges,” writes Mustafa Suleyman in his alarming work, The Coming Wave (Suleyman & Bhaskar, 2023). To better convey the depth of Suleyman’s urgent call for an immediate, unprecedented halt to business as usual, I will give you glimpses of Suleyman’s thoughts, provide an ethical context for discussing AI in behavioral health through David Luxton’s groundbreaking book, Artificial Intelligence in Behavioral and Mental Health Care (Luxton, 2015), and round out the discussion with comments from Eric Topol and Scott Shapiro, two well-respected reviewers. They mirror Suleyman’s view of how ubiquitous AI is and how easily it will eclipse any technology created before now. They raise different flags, however, about AI’s future and whether it will bring promise or peril for behavioral health professionals.

AI’s Impact on Civilization is Compared to Fire, Agriculture & the Printing Press

Suleyman’s comparison of AI to the discovery of fire and the development of agriculture is thought-provoking, highlighting artificial intelligence’s profound impact on human civilization. In this context, he draws parallels between AI and significant milestones in human history. To extend this analogy, we can liken AI to another transformative invention: the printing press.

Here’s a detailed explanation:

  1. Fire: Fire was a pivotal discovery in human history. It provided warmth, protection, and the ability to cook food, improving nutrition and energy efficiency. Similarly, AI has the potential to provide significant benefits by automating tasks, optimizing processes, and enhancing decision-making in various fields.
  2. Agriculture: The development of agriculture marked the shift from nomadic hunting and gathering societies to settled communities engaged in farming. It ensured a stable food supply, enabling population growth and the emergence of advanced civilizations. Similarly, AI is revolutionizing industries by increasing productivity, enabling precision agriculture, and fostering economic growth.
  3. The Printing Press: The invention of the printing press by Johannes Gutenberg in the 15th century revolutionized communication and knowledge dissemination. It made books more accessible, fostering the spread of ideas, science, and culture. AI, through its ability to analyze vast amounts of data and generate insights, is reshaping information sharing and decision support in a manner comparable to the impact of the printing press.

Just as fire, agriculture, and the printing press transformed human society, AI has the potential to reshape our world in unprecedented ways. It can optimize healthcare, transportation, finance, and many other sectors, leading to greater efficiency, innovation, and quality of life. However, Suleyman cautions that, like any powerful tool, AI also comes with ethical and societal considerations that must be carefully addressed to harness its benefits responsibly.

Pessimism Aversion Trap

In discussing the psychology of predictions, Suleyman warns about AI’s future by introducing the concept of the Pessimism Aversion Trap, which he describes as “the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way.” He further explains, “Confronting this feeling is one of the purposes of this book, to take a cold, hard look at the facts, however uncomfortable. Properly addressing this wave, containing technology, and ensuring it always serves humanity means overcoming pessimism aversion. It means facing head-on the reality of what’s coming.”

Suleyman’s remaining introductory comments include the following:

  • Psychological Hardwiring. Suleyman suggests that human psychology is not equipped to grapple with a transformation of this magnitude and speed.
  • Over-Optimism Among Technologists. Suleyman calls on technologists to temper their excitement and take responsibility for addressing potential risks.
  • Need for Societal and Political Response. The author emphasizes that the solution is not individual but societal. How long it will take to build that societal awareness remains a serious open question.
  • Inaction Despite Acknowledgment. Suleyman points to a frustrating paradox in which leaders across areas of expertise acknowledge the risks but refrain from taking substantive action, while everyone else waits for those leaders to act.
  • The Urgency of Containment. The book underlines the immediate need for fail-safe mechanisms for emerging technologies.

Containment

On May 16, 2023, the New York Times ran an article and accompanying video of Sam Altman, Chief Executive Officer of OpenAI, testifying before members of a Senate subcommittee. Mr. Altman agreed with senators on the need to regulate AI as it is being developed at his company and others, including Microsoft and Google. He offered several examples of how it could be misused on a large scale and threaten Americans and people worldwide.

Suleyman shares this view and calls for “containment” to be implemented immediately. He states that without containment of the unfettered expansion of technology, especially AI, discussing every other aspect of technology, including its ethical shortcomings or benefits, “is inconsequential.”

The urgency of Suleyman’s message about containment was underscored last Wednesday, when leaders of the following technology companies met with members of Congress to discuss possible regulations, according to CNBC:

  • OpenAI CEO Sam Altman
  • Former Microsoft CEO Bill Gates
  • Nvidia CEO Jensen Huang
  • Palantir CEO Alex Karp
  • IBM CEO Arvind Krishna
  • Tesla and SpaceX CEO Elon Musk
  • Microsoft CEO Satya Nadella
  • Alphabet and Google CEO Sundar Pichai
  • Former Google CEO Eric Schmidt
  • Meta CEO Mark Zuckerberg

The concern of doomsayers is that, given the sheer number of privacy violations attributable to more than one of these tech leaders, seeking their guidance on how to thoughtfully contain the dangers of AI is akin to asking a pack of wolves how to guard the henhouse.

What Is Containment in the Context of AI?

In the context of AI, containment refers to the governance, ethical considerations, and risk management associated with the deployment and scaling of large language models such as GPT-4. Such containment could be essential for several reasons (a simplified illustrative sketch follows the list below):

  1. Ethical Responsibility: Containing the technology would mean implementing measures that ensure it operates within ethical boundaries. Concerns may include bias in algorithms, perpetuation of misinformation, or other ethical dilemmas that come with machine-generated content.
  2. Security Risks: Large language models can generate sensitive or harmful information if not properly contained. Measures must be implemented to prevent any unintended disclosure or malicious use of the technology.
  3. Quality Control: Containment mechanisms could also refer to quality assurance protocols. This ensures the information disseminated by the model adheres to a particular standard, avoiding any legal ramifications for incorrect or misleading advice.
  4. Social and Cultural Impact: Containment could imply that these technologies’ social and cultural impact must be understood and managed. Concerns include the erosion of jobs or the potential to enable surveillance states.
  5. Regulatory Compliance: As AI and machine learning technologies become ubiquitous, they are increasingly under regulatory scrutiny. Containment may be essential to ensure compliance with data protection laws, intellectual property rights, or sector-specific guidelines.
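
To make the idea of containment mechanisms more concrete, here is a minimal, hypothetical sketch in Python of a “guardrail” layer that screens AI-generated text before it reaches a client. The blocked topics, checks, and fallback message are invented placeholders for illustration only; they are not drawn from Suleyman’s book, a vendor’s safety system, or any clinically validated protocol.

# A minimal, illustrative "containment" guardrail layer (hypothetical example).
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrail")

# Hypothetical categories a behavioral health deployment might screen for.
BLOCKED_TOPICS = {"medication dosing", "crisis instructions", "definitive diagnosis"}

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

def screen_output(model_text: str) -> GuardrailResult:
    """Apply simple containment checks before showing AI output to a client."""
    if not model_text.strip():
        return GuardrailResult(False, "empty response")
    lowered = model_text.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return GuardrailResult(False, f"blocked topic: {topic}")
    return GuardrailResult(True, "passed checks")

def deliver(model_text: str) -> str:
    """Return AI text only if it passes screening; otherwise return a fallback."""
    result = screen_output(model_text)
    # Log every decision to create an audit trail for quality assurance.
    log.info("guardrail decision: allowed=%s, reason=%s", result.allowed, result.reason)
    if not result.allowed:
        return "This response was withheld pending clinician review."
    return model_text

if __name__ == "__main__":
    print(deliver("Here is some general psychoeducation about sleep hygiene."))
    print(deliver("Your definitive diagnosis is X; change your medication dosing."))

In practice, real containment would layer many such mechanisms: model-level safety training, automated moderation, human review, audit logging, and regulatory oversight.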

Suleyman’s Ethical Caveats: The Technological Utopia Reconsidered

In The Coming Wave, Mustafa Suleyman delves into the moral responsibility that accompanies technological optimism, as evidenced in his quote, “What’s required is a societal and political response” (Suleyman & Bhaskar, 2023). His sentiments underscore that technological progress, devoid of ethical foresight, might yield “seriously destabilizing outcomes” for society and that professionals must pay close attention to the growth of these technologies rather than relying on existing leadership to make a notable difference.

Diverse Lenses: From Optimism to Concern

David D. Luxton’s work in “Artificial Intelligence in Behavioral and Mental Health Care” is foundational for professionals in behavioral and mental health settings. He emphasizes AI’s positive impact, such as machine learning in diagnostic procedures and treatment planning, while cautioning about ethical concerns, including data security and consent. He and his chapter authors share viewpoints that set the stage for Suleyman’s more recent narrative, highlighting the need for nuanced conversations around AI’s role in behavioral health, mental health, and substance use treatment.

Eric Topol & Scott Shapiro’s Reviews of The Coming Wave

Two recent reviewers of Suleyman’s 2023 book, Eric Topol and Scott Shapiro, offer contrasting perspectives (Topol, 2023; Shapiro, 2023). They present a dichotomy between caution and promise, providing a deeper lens through which to view the ever-expanding role of AI.

Eric Topol is a renowned American cardiologist, geneticist, and digital medicine researcher. He is a leading advocate for integrating technology into medicine, particularly using genomics and artificial intelligence for personalized healthcare. Topol has authored seminal books such as Deep Medicine, which explores how AI can humanize healthcare. His review of Suleyman’s The Coming Wave praises the book for its balanced approach, appreciating its historical grounding and realistic applications of AI in healthcare (Topol, 2023). It emphasizes that the actual utility of AI lies in its capabilities rather than in mirroring human-like intelligence. While acknowledging AI’s double-edged nature, the review offers an optimism rooted in practical applications and societal advancement.

Scott Shapiro writes on technology and its societal impact, among other subjects, and his review of Suleyman’s book appeared in The Guardian. It takes a more critical stance, casting the book as an urgent warning about technological expansionism (Shapiro, 2023). Here, technological advancements are not solely lauded but are regarded as potentially catastrophic forces. Unlike Topol, Shapiro questions Suleyman’s optimistic viewpoints, especially concerning ethical and legal constraints, pointing out gaps in the book’s proposed solutions for AI containment.

Both reviews acknowledge that the AI train has left the station but differ on the eventual destinations—promise or peril.

Takeaway for Behavioral Health Professionals

For behavioral health professionals, the main takeaway could be the need for a nuanced understanding of AI’s burgeoning role in healthcare. While AI has the potential to revolutionize diagnostics and treatment plans, the ethical implications, especially in a behavioral health setting, are equally noteworthy. Consequently, the need for vigilant monitoring and ethical governance in deploying AI technologies within behavioral healthcare becomes paramount. Gaining a firm grasp of how AI works in behavioral healthcare, legally and ethically, is strongly advised.

In Closing: Riding the Wave with Ethical Vigilance

In summary, Suleyman’s The Coming Wave is among the most widely discussed recent books about AI’s future written by an authority in the field. AI has the potential to revolutionize behavioral health by making it more accessible, personalized, and effective. While it cannot replace human expertise and empathy, it can complement the work of behavioral health professionals, leading to better outcomes for individuals and society as a whole.

Note: For this discussion, it is important to acknowledge that these perspectives are interpretations of the book and may not represent the full depth of AI’s impact on healthcare. Therefore, professionals should exercise caution and consult a range of sources when considering the integration of AI into their practices.

Stay tuned to Telehealth.org for other discussions of AI, ChatGPT, and other health IT platforms now developing AI technologies for behavioral health professionals. As always, your thoughts and resources are invited below.

Bibliography

  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., … & Kurakin, A. (2019). On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705.
  • Davenport, T. H., & Harris, J. G. (2005). Automated decision making comes of age. MIT Sloan Management Review.
  • Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
  • Luxton, D. D. (Ed.). (2015). Artificial Intelligence in Behavioral and Mental Health Care. Academic Press.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).
  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36(4), 105-114.
  • Schwartz, P. M. (2010). Data protection law and the ethical use of analytics. The Center for Information Policy Leadership, Hunton & Williams LLP. Retrieved from privacyassociation.org/media/pdf/knowledge_center/Ethical_Underpinnings_of_Analytics.pdf
  • Shapiro, S. (2023). “The Coming Wave by Mustafa Suleyman review – a tech tsunami,” The Guardian.
  • Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis: An International Journal, 24(2), 311-322.
  • Suleyman, M., & Bhaskar, M. (2023). The Coming Wave. Crown.
  • Topol, E. (2023). Review of “The Coming Wave.”
  • Zuboff, S. (2023). The age of surveillance capitalism. In Social Theory Re-Wired (pp. 203-213). Routledge.

