
How Artificial Intelligence Will First Find Its Way into Mental Health

  • Last Updated: November 3, 2023

Artificial intelligence (AI) startup Woebot recently made the news for some of its badly flawed automated responses to text messages that mimicked a crisis. Woebot, which raised $90 million in a Series B round, responded that its product is not intended for use during crises. Company leadership thus expects patients, who may not be thinking entirely rationally, to have the presence of mind to stop using their usual form of communication and reach out to an alternative system. While physicians are held responsible for harms inflicted on patients during treatment, startup companies entering this space are not held to the same standard. To make matters worse for vulnerable patients, these systems are also not held to the same privacy standards. Interacting directly with patients through AI is especially complicated because many patients routinely experience attenuated crises that fall below the threshold of calling 911; a bot that is not equipped to handle crises is most likely not well equipped to handle the throes patients experience on a daily basis.
 
Despite the risk of artificially, and unintentionally, bungling a patient crisis, mental health startups drawn to this space raised a total of $1.3 billion in the first half of 2022. Unfortunately, communicating directly with patients presents many difficulties, and AI is not yet ready for the task. Words can be used as slang or with alternative meanings. The meaning of a sentence may change depending on the patient's history, cultural values, gestures, prosody, and tone of voice. Further, it is important to consider a patient's subconscious motives in a therapeutic session, which are not easily elucidated by AI. As much as artificial intelligence may be able to detect the literal meanings of words, it will not understand the meaning behind what is left unsaid to the extent that a human therapist can. Given the difficulty of replacing human therapists, artificial intelligence is more likely to have an impact behind the scenes in other ways.
 
Although relying on an artificial bot to interact with patients poses many challenges, there are still areas where artificial intelligence can augment decision making. Health insurance companies already see its value in reducing costs by identifying patients who are high utilizers of healthcare services. Prescribing providers routinely receive notifications from health insurance companies about irregular prescription refills, encouraging discontinuation of prescriptions that are not being used optimally. Indeed, large insurers possess large data sets that are currently being analyzed to predict the onset of Alzheimer's disease, diabetes, heart failure, and COPD. AI has already received FDA approval for specific uses, and it currently shines when applied to a very narrow clinical issue. These systems are initially being sought to enhance clinical judgment rather than replace it. Ideally, AI will improve clinician productivity by handling mundane tasks and flagging equivocal findings that require further investigation by a human. According to the insurance company Optum, the top three AI applications are monitoring data from wearables, accelerating clinical trials, and improving the accuracy of healthcare coding. The goal, for now, is not to increase the amount of data but to present data in a way that is meaningful and actionable for the clinician.
 
Artificial intelligence will begin to impact providers through informative tips and alerts, augmenting decision making and reducing human error. The practice of medicine is full of rote tasks that are ripe for offloading to a computer. One common application of AI, for example, is the evaluation of retinal images, which frees ophthalmologists to focus on areas of medicine they find more rewarding. As AI makes its way into healthcare, clinicians should not worry about whether they will be replaced, but rather about how their practice will continue to evolve over time, hopefully for the better.
 
One difficulty in applying AI to the provider space is that medical records are not uniformly structured, and documentation styles vary widely from provider to provider. Medical records may also contain inherent bias, depending on the patient population most typical of a given practice. Bias fed into an AI system will yield a biased result. Thus, what an AI system does is not the only factor that matters; how it is applied, and what is done with its results, are equally important to the impact it has. Tips and alerts that appear at moments when the clinician is distracted, or accustomed to viewing another screen, may be overlooked. The user experience of AI will influence alert fatigue, a well-known phenomenon that has recently led to some landmark cases. Thus, AI is only as impactful as the medium through which it is delivered and the state of the user at the time it is presented.
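The point about biased records can be sketched with a toy example (the clinics, labels, and counts here are entirely hypothetical, invented for illustration): a naive model fit to records dominated by one clinic's patient population simply reproduces that population's pattern and mislabels everyone else.

```python
# Toy illustration of "bias in, bias out" (hypothetical data, not from any
# real record set). Training records over-represent clinic A, where patients
# were labeled "low" risk; clinic B's patients were all labeled "high" risk.
from collections import Counter

# Each record: (clinic, risk_label). Clinic A dominates 90 to 10.
training = [("A", "low")] * 90 + [("B", "high")] * 10

# A naive model: always predict the majority label seen in training.
majority_label = Counter(label for _, label in training).most_common(1)[0][0]

def predict(_record):
    return majority_label

# Every clinic-B patient is mislabeled "low" risk, even though every
# clinic-B example in the training data was "high" risk.
clinic_b_patients = [("B", "high")] * 10
errors_b = sum(predict(r) != "high" for r in clinic_b_patients)
print(majority_label, errors_b)  # "low", 10
```

A real clinical model is far more complex than a majority-label rule, but the failure mode is the same: whatever skew exists in the records is learned and then applied uniformly to patients the data did not represent.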
 
If we have learned anything from the newsworthy AI blunders, it is that while we may not hold AI to the same privacy standards, we do hold it to a higher standard than typical human performance. It would be entirely unacceptable for an AI system to harm a single patient; we expect AI not only to perform better than humans, but to harm no patients at all. So for now, AI will continue to work its alchemic magic in the background, quietly taking responsibility, or not, for how it affects healthcare.

 


  • Bruce Bassi

    Dr. Bruce Bassi is a physician, double board-certified in General (adult) and Addiction Psychiatry and is the founder and medical director of TelePsychHealth, which provides virtual mental health treatment across the United States and is based in Jacksonville, FL. He earned a master's degree in biomedical engineering from Columbia University and subsequently graduated from medical school at the University of Michigan. He completed psychiatry residency at the University of Florida, and his addiction psychiatry fellowship at Northwestern University. He enjoys writing and lecturing on the use of technology in medicine to increase clinician efficiency and enhance patient care. His clinical interests are treating addiction and sleep disorders.

