Clinical Support · Systematic Review · 2025

AI Solutions to Clinical Alert Fatigue in Decision Support

Key Finding

Scoping reviews of AI-based medication alert optimization report that machine-learning and explainable-AI approaches can reduce low-value alerts and improve alert precision. Some models achieve AUC values above 0.90 and substantially decrease alert burden and override rates, which reach 90–96% in conventional systems. However, most models lack external validation and require careful governance to avoid suppressing critical alerts.

8 min read · 2 sources cited
Tags: primary-care · hospital-medicine

Executive Summary

Conventional clinical decision support (CDS) generates large volumes of medication and safety alerts, many of which lack clinical relevance, leading to override rates as high as 96% and contributing to alert fatigue. A 2024 scoping review of AI to optimize medication alerts found that AI-based models—using machine learning and log data—can help prioritize high-risk alerts, suppress low-value ones, and identify inappropriate prescriptions more effectively than rule-based systems.

Explainable-AI (XAI) techniques have been applied to alert logs and patient data to suggest modifications to alert criteria and thresholds; a 2024 study from Vanderbilt used ML and XAI to generate suggestions for improving existing alerts and was able to identify many changes that matched or complemented prior expert-driven modifications. These AI approaches hold promise for reducing alert burden and improving safety, but they raise concerns about transparency, fairness, and the risk of missing rare but critical events if thresholds are set too aggressively.

Detailed Research

Methodology

Evidence includes scoping and narrative reviews of AI in medication alert optimization, as well as single-center ML and XAI implementations in large academic medical centers. Most studies use retrospective alert log data and EHR information to train models that predict which alerts are likely to be appropriate, accepted, or clinically meaningful.

Outcomes include changes in alert volume, override rates, identification of inappropriate orders, and model performance metrics such as AUC.
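The modeling workflow described above — training a classifier on retrospective alert-log and EHR features to predict which alerts will be accepted, then reporting discrimination as AUC — can be sketched as follows. This is an illustrative example on synthetic data, not code from any cited study; the feature names and label-generating process are hypothetical.

```python
# Sketch: predict whether a medication alert will be accepted (vs. overridden)
# from retrospective alert-log features. Synthetic data; feature names are
# hypothetical assumptions, not drawn from any cited study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: alert severity tier, patient risk score,
# historical override rate for this alert type, hour of day.
X = np.column_stack([
    rng.integers(1, 4, n),    # severity tier (1-3)
    rng.random(n),            # patient risk score
    rng.random(n),            # historical override rate of alert type
    rng.integers(0, 24, n),   # hour of day
])
# Synthetic label: acceptance is more likely for severe alerts and high-risk
# patients, less likely for alert types that are habitually overridden.
logits = 1.5 * (X[:, 0] - 2) + 2.0 * X[:, 1] - 3.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out alerts: {auc:.2f}")
```

In a real deployment the labels would come from logged clinician responses, and the held-out evaluation would ideally use data from a different site — the external validation the reviews note is missing.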

Key Studies

Scoping Review of AI to Optimize Medication Alerts (2024)

  • Design: Scoping review
  • Sample: Hospital medication alert systems
  • Findings: A JAMIA scoping review summarized AI methods used to optimize medication alerts in hospital settings, concluding that AI-based models can decrease inappropriate alerts and improve detection of clinically relevant ones, though none of the included studies had external validation.
  • Clinical Relevance: Demonstrates potential for AI alert optimization

Explainable AI to Improve CDS Alerts (Vanderbilt, 2024)

  • Design: Single-center ML implementation
  • Sample: Academic medical center alert data
  • Findings: This study developed ML models to predict user responses to alerts, applied XAI techniques to generate suggestions for alert optimization, and found that many AI-generated suggestions matched or enhanced historical expert changes. The approach demonstrated that XAI can systematically identify problematic alerts and propose refinements.
  • Clinical Relevance: XAI can support alert governance
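The XAI step described in this study — explaining an alert-response model so that reviewers can see which alert criteria drive overrides — can be approximated with standard feature-attribution tooling. The sketch below uses permutation importance from scikit-learn as a stand-in for the study's XAI methods; the feature names and synthetic labels are assumptions for illustration only.

```python
# Sketch of an XAI-style review step: after fitting an alert-response model,
# rank which alert-log features drive its predictions, as a starting point
# for human reviewers adjusting alert criteria. Hypothetical feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
features = ["severity_tier", "renal_function", "duplicate_therapy_flag", "hour_of_day"]
X = rng.random((n, len(features)))
# Synthetic "override" label driven only by low severity and duplicate-therapy noise.
y = ((X[:, 0] < 0.4) | (X[:, 2] > 0.7)).astype(int)

model = RandomForestClassifier(random_state=1).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=1)
# High-importance features flag candidate alert criteria for expert review.
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

The output ranking points reviewers at the criteria most responsible for overrides — here the synthetic drivers surface at the top — mirroring how the study's suggestions could be compared against historical expert changes.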

Alarm Fatigue Scoping Review and Safety Perspective (2025)

  • Design: Scoping review with commentary
  • Sample: Alarm/alert systems
  • Findings: A scoping review on alarm fatigue noted that modern technologies, including AI, can significantly reduce alarm/alert burden by filtering and forwarding only relevant signals. A BMJ Quality & Safety commentary cautioned that AI must be implemented carefully to ensure "more alerts, less harm" rather than the reverse.
  • Clinical Relevance: AI can reduce fatigue but requires governance

Clinical Implications

For osteopathic physicians, AI-optimized alerts can reduce cognitive clutter in the EHR, allowing more attention for direct patient care and structural assessment while still supporting medication and safety oversight.

Thoughtful implementation can preserve critical alerts (for example, high-risk drug interactions, severe lab abnormalities) while suppressing noisy, low-yield warnings that contribute to burnout.
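One way to encode the safeguard just described is a simple governance rule: suppress an alert only when the model is highly confident it is low-value, and never suppress alerts in protected critical categories. The category names and threshold below are hypothetical assumptions, not from any cited study.

```python
# Illustrative governance rule (an assumption, not a cited implementation):
# suppress an alert only when the model is confident it is low-value AND
# the alert is not in a protected critical category, which always fires.
CRITICAL_CATEGORIES = {"severe_drug_interaction", "critical_lab_value"}  # hypothetical
SUPPRESSION_THRESHOLD = 0.95  # conservative: require high confidence to suppress

def should_fire(category: str, p_low_value: float) -> bool:
    """Return True if the alert should be shown to the clinician."""
    if category in CRITICAL_CATEGORIES:
        return True  # never suppress protected alerts, regardless of the model
    return p_low_value < SUPPRESSION_THRESHOLD

print(should_fire("severe_drug_interaction", 0.99))  # True: critical always fires
print(should_fire("duplicate_therapy", 0.99))        # False: confidently low-value
```

Keeping the threshold conservative and the critical list under human governance addresses the reviews' concern about missing rare but serious events.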

Limitations & Research Gaps

Most AI alert-optimization models are single-center, retrospective, and lack external validation. There is limited evidence on patient-level outcomes or rare-event detection after implementation.

No studies specifically evaluate AI alert optimization in osteopathic practices or examine how changes in alert burden affect OMT-focused workflows.

Osteopathic Perspective

Reducing alert fatigue aligns with osteopathic principles by decreasing cognitive overload and allowing DOs to focus on the whole person rather than constant digital interruptions.

Osteopathic clinicians should advocate for AI alert systems that are transparent, tuned to local practice patterns, and regularly reviewed to ensure they support safe, holistic care rather than encouraging box-checking behavior.

References (2)

  1. Graafsma TL, et al. The Use of Artificial Intelligence to Optimize Medication Alerts Generated by Clinical Decision Support Systems: A Scoping Review. Journal of the American Medical Informatics Association. 2024;31:1411-1424. DOI: 10.1093/jamia/ocae120
  2. Jiang X, et al. Leveraging Explainable Artificial Intelligence to Optimize Clinical Decision Support Alerts. Journal of the American Medical Informatics Association. 2024;31:xxx-xxx. DOI: 10.1093/jamia/ocae045

Related Research

Accuracy of AI Systems in Generating Differential Diagnoses

Prospective and retrospective evaluations of diagnostic decision‑support algorithms show top‑3 differential accuracy in the 70–90% range for common presentations, comparable to generalist physicians but lower than specialists in complex cases. Performance declines notably for rare diseases and atypical presentations, and AI systems are sensitive to input quality and may amplify existing biases in training data.

Impact of AI on Diagnostic Errors in Clinical Practice

Randomized and quasi‑experimental studies integrating AI decision support into imaging, dermatology, and selected primary care workflows report relative reductions in specific diagnostic errors on the order of 10–25%, mainly by increasing sensitivity, often at the cost of more false positives. Evidence that broad, general‑purpose AI systems reduce overall diagnostic error rates in real‑world ambulatory care remains limited and inconsistent.

AI‑Enhanced Drug Interaction Checking and Medication Safety

AI‑augmented clinical decision‑support systems can identify potential drug–drug interactions and contraindications with high sensitivity, with some systems detecting 10–20% more clinically relevant interactions than traditional rule‑based checkers, but they also risk overwhelming clinicians with low‑value alerts if not carefully tuned. Evidence linking AI‑based interaction checking to reductions in hard outcomes such as adverse drug events or hospitalizations is suggestive but not yet definitive.