AI Aids Efforts to Cut Nuisance Alerts for Health Care Teams: Study

Feb 26, 2024 at 10:16 am by WGNS News

L to R: Lead author Siru Liu, PhD, and senior author Adam Wright, PhD

A new study from Vanderbilt University Medical Center demonstrates the promise of artificial intelligence to help refine and target the myriad computerized alerts intended to assist doctors and other team members in day-to-day clinical decision-making.

These pop-up notifications advise users on everything from drug contraindications to gaps in patient care documentation. But the exclusion criteria and targeting of these alerts are often wanting, and up to 90% are ignored, contributing to "alert fatigue." From an information technology perspective, relying on human experts to refine alert targeting is slow, expensive and somewhat hit-and-miss.


"Across health care, most of these well-intentioned automated alerts are overridden by busy users. The alerts serve an essential purpose, but the need to improve them is clear to everyone," said lead author Siru Liu, PhD, assistant professor of Biomedical Informatics at VUMC.

Liu, senior author Adam Wright, PhD, professor of Biomedical Informatics and director of the Vanderbilt Clinical Informatics Center, and a research team reported the study in the Journal of the American Medical Informatics Association.        

Liu developed a machine learning approach to analyze two years of data on user interactions with alerts at VUMC. The resulting model used patient characteristics to accurately predict when users would dismiss specific alerts.
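The study's exact pipeline isn't detailed here, but the general shape of such a model is straightforward. Below is a minimal sketch in Python using scikit-learn; the file name, feature columns and gradient-boosting model choice are illustrative assumptions, not the study's actual implementation.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical alert-interaction log: one row per alert firing, with a
# label indicating whether the user dismissed it. File and column names
# are assumptions for illustration.
df = pd.read_csv("alert_interactions.csv")
features = ["patient_age", "on_hospice", "alert_type_id", "department_id"]
X, y = df[features], df["dismissed"]  # dismissed: 1 = user overrode the alert

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Train a classifier to predict dismissal and check discrimination on
# held-out data.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```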

She then used various processes and methods to peer inside the predictive model, understand its reasoning, and generate suggested improvements to alert logic. This step, a form of explainable artificial intelligence, or XAI, involved transforming the model's predictions into rules explaining when users are less likely to accept alerts. For example: "if the patient is a hospice patient, then the user is less likely to accept the breast cancer screening alert."
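The hospice rule above is the kind of statement such methods can surface. One generic XAI technique, sketched below as a continuation of the previous example (reusing model, X_train and features), is to fit a shallow "surrogate" decision tree to the black-box model's predictions and read candidate rules off its branches. This is an illustration of the general idea, not necessarily the study's method.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small, human-readable tree that mimics the black-box model's
# predictions (a surrogate model).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, model.predict(X_train))

# Branches ending in the "dismissed" class describe contexts where users
# are unlikely to accept the alert, i.e., candidate exclusion criteria
# for the alert logic.
print(export_text(surrogate, feature_names=features))
```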

Of the 1,727 suggestions analyzed, 76 matched later manual updates to VUMC alerts and another 20 aligned with best practices identified through interviews with clinicians. The authors calculated that these 96 recommendations would have eliminated 9.3% of the nearly 3 million alerts analyzed in the study, cutting disruptive pop-ups while maintaining patient safety.

"The alignment of the model's suggestions with manual adjustments made by clinicians to alert logic underscores the robust potential of this technology to enhance health care quality and efficiency," Liu said. "Our approach can identify areas overlooked in manual reviews and transform alert improvement into a continuous learning process."

Beyond refining alerts, she added, the methodology uncovered situations indicating problems in workflow, education or staffing. In this way the approach might more broadly improve quality: "The transparency of our model unveiled scenarios where alerts are dismissed due to downstream issues beyond the alerts themselves."

Liu and colleagues have several related projects under consideration, including a multisite prospective study of the effects on patient care of machine learning for clinical decision support (CDS) improvement; designing an interface for CDS experts to visualize the XAI process and evaluate model-generated suggestions; and exploring capabilities of large language models like ChatGPT for optimizing CDS alerts based on user comments and current research literature.

Others on the study from VUMC include Allison McCoy, PhD, Josh Peterson, MD, MPH, Thomas Lasko, MD, PhD, Scott Nelson, PharmD, MS, Jennifer Andrews, MD, Lorraine Patterson, MSN, Cheryl Cobb, MD, David Mulherin, PharmD, and Colleen Morton, MD.

The study was supported by the National Institutes of Health (R00LM014097, R01AG062499, R01LM013995).
