Ensuring safe artificial intelligence in healthcare
The challenges of monitoring artificial intelligence in clinical settings
The use of artificial intelligence (AI) is on the rise, yet its implementation in healthcare settings remains slow, with safety concerns being one possible reason.
For AI models to assist safely and effectively in important clinical decision-making, they must perform to a high standard at all times. Because the performance of AI models can decline over time, adequate quality control is crucial for monitoring these models and ensuring minimal risk to patient safety. But achieving this is far from straightforward.
An editorial in the latest issue of JBI Evidence Synthesis explores these challenges, revealing that while AI has great potential in healthcare, practical guidance on monitoring its performance in clinical settings is severely lacking.
In response, the authors conducted a scoping review, Monitoring performance of clinical artificial intelligence in health care, which analysed over 13,000 sources to identify 39 key studies on monitoring artificial intelligence in clinical settings. Their findings shed light on available strategies, the challenges of implementing AI monitoring in real-world settings, and the lack of focus on this important topic.
Read more in the full editorial, Bringing artificial intelligence safely to the clinics: hope is not a strategy.
Additional Resources
Bringing artificial intelligence safely to the clinics: hope is not a strategy. Andersen, Eline Sandvig. JBI Evidence Synthesis 22(12):2421-2422, December 2024. DOI: 10.11124/JBIES-24-00501
Monitoring performance of clinical artificial intelligence in health care: a scoping review. Andersen, Eline Sandvig; Birk-Korch, Johan Baden; Hansen, Rasmus Søgaard; Fly, Line Haugaard; Röttger, Richard; Arcani, Diana Maria Cespedes; Brasen, Claus Lohman; Brandslund, Ivan; Madsen, Jonna Skov. JBI Evidence Synthesis 22(12):2423-2446, December 2024. DOI: 10.11124/JBIES-24-00042