Advanced technology does not automatically equate to real-world clinical utility.

I have been pitched the following 15 times (I counted):

"We will use AI to analyse multi-omics data (pathology, biochemistry, EHR data, radiology) to determine the risk of developing / optimal treatment for / adverse events of [insert disease here]."

This sounds pretty good, right?

Except that in real clinical settings, it raises 3 major questions:

1. 𝐖𝐡𝐨 𝐢𝐬 𝐠𝐨𝐢𝐧𝐠 𝐭𝐨 𝐛𝐞 𝐢𝐧𝐩𝐮𝐭𝐭𝐢𝐧𝐠 𝐭𝐡𝐞 𝐝𝐚𝐭𝐚?

Most algorithms like this are built on datasets that have already been cleaned and structured. In the real world, these data sit in 6-7 different systems.

Unless the algorithm integrates with all of them automatically, it will be the clinicians who have to input the data themselves. That means a significant increase in workload and liability.

Now, this is fine if there is real clinical benefit, which brings us to the second question.

2. 𝐖𝐡𝐚𝐭 𝐚𝐫𝐞 𝐰𝐞 𝐬𝐮𝐩𝐩𝐨𝐬𝐞𝐝 𝐭𝐨 𝐝𝐨 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐝𝐚𝐭𝐚 𝐨𝐮𝐭𝐩𝐮𝐭?

Take an algorithm that predicts the risk of developing lung cancer in 10 years' time, and imagine a patient is flagged as high risk by the AI. 🤔

What is the GP supposed to do with this result?

- Should they refer to their respiratory colleagues based on a proprietary result from an external vendor?
- Are we going to investigate the patient with CT scans every single year?
- If the CT scans come back negative (which will likely be the case), do we then do random lung biopsies to find the cancer?

OR do we do nothing, because there is nothing we can do, and leave the patient in worry and suspense for 10 years?

3. 𝐖𝐡𝐲 𝐬𝐡𝐨𝐮𝐥𝐝 𝐜𝐥𝐢𝐧𝐢𝐜𝐢𝐚𝐧𝐬 𝐭𝐫𝐮𝐬𝐭 𝐭𝐡𝐞 𝐨𝐮𝐭𝐩𝐮𝐭 𝐨𝐟 𝐲𝐨𝐮𝐫 𝐀𝐈?

Practice-changing innovation requires game-changing evidence.

To convince the entire medical fraternity to adopt any solution, particularly a consequential one, you will need evidence. Not just a paper or two: prospective studies, run in real clinical settings, with a sufficiently large sample size.

Without demonstrating that your algorithm's output is credible in real-life settings, there is slim to zero chance anyone will take it seriously.

_____________________________________

So, why should clinical utility matter to start-up founders anyway?

Because no clinical utility = no commercial viability.

If you cannot demonstrate that your tool improves patient clinical outcomes, how exactly are you going to sell it to health systems?

This is not an anti-technology post. My hope is that founders building tools like the ones above think more clearly about how their final product will look, feel and work in the clinical setting.

It is easy to be enamoured by the technical challenges of AI, but miss the whole point completely.

Hope this helps.

Simone Korsgaard Jensen | Dominic Mehr | Nina Sesto, PhD | Dr. Elsa Zekeng