Comments on the RSNA article about the future of AI in radiology
Curtis Langlotz, a Stanford professor of radiology and biomedical data science, published an article titled "The Future of AI and Informatics in Radiology" at the end of October. It includes ten forecasts regarding AI's role in our sector going forward. We asked Evgeny Nikitin, the head of Celsus AI, to comment on each item in the article during our interview.
Radiology Will Continue to Lead the Way for AI in Medicine
When we talk about "AI in medicine," we often mean "AI in radiology." This is accurate in many respects: medical imaging continues to produce a staggering number of papers, talks, debates, and datasets. At the same time, this year more than half of the Russian regions invested not in radiological AI systems but in predictive analytics based on electronic medical records (predicting the likelihood of cardiovascular diseases). Medical LLMs have also attracted a lot of attention this year; there are already enough of them for a whole separate review.
Still, in terms of product maturity, the number of problems solved, the expected economic impact, and comprehensible application scenarios, imaging systems arguably remain at the forefront. After all, machine learning is a well-suited tool for the problem of identifying specific patterns in 2D and 3D images. Furthermore, AI effectively addresses the common reasons doctors make mistakes: cognitive biases, fatigue and heavy workloads, and "blind spots" in their reading. However, AI is still not as good as a doctor at pulling together all of the patient's data and providing the attending physician with insightful recommendations.
In general, AI in medical imaging is no longer riding the hype wave; it is definitely here to stay.
Virtual Assistants Will Draft Radiology Reports and Address Radiologist Burnout
We have put a lot of effort this year into standardizing our text reports and making them more convenient for doctors. This required numerous improvements, including new classes and functions, as well as moving text generation to the machine learning side to gain development speed, flexibility, and thorough testing.
The article predicts that text reports will be generated by LLMs from the predictions of imaging neural networks. I still find it hard to agree with this. I am not sure what advantage an LLM has over a deterministic generation algorithm, while the obvious problems (hallucinations and the like) lie right on the surface. In theory, one can imagine the LLM also taking in the medical history, producing a summary of the text report, and recommending further studies to the therapist, but even then I am not quite ready to bet everything on LLMs.
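To make the comparison concrete, here is a minimal sketch of what I mean by a deterministic generation algorithm: every phrase traces back to a specific model prediction, so the output is fully auditable. All class names, thresholds, and wording below are hypothetical illustrations, not our production templates.

```python
# A minimal deterministic report generator: the template logic is fully
# auditable, so every phrase can be traced back to a model prediction.
from dataclasses import dataclass

@dataclass
class Finding:
    label: str          # e.g. "focal opacity" (hypothetical class name)
    probability: float  # model confidence in [0, 1]
    location: str       # e.g. "upper lobe of the right lung"

def render_report(findings: list[Finding], threshold: float = 0.5) -> str:
    """Turn raw model predictions into a standardized text report."""
    confirmed = [f for f in findings if f.probability >= threshold]
    if not confirmed:
        return "No radiographic evidence of pathology."
    lines = [
        f"- {f.label} in the {f.location} (confidence {f.probability:.0%})"
        for f in confirmed
    ]
    return "Findings:\n" + "\n".join(lines)

print(render_report([Finding("focal opacity", 0.87, "upper lobe of the right lung")]))
```

Such a generator never invents a finding the model did not produce, which is exactly the property I am reluctant to trade away.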
An Intelligent Image Interpretation Cockpit Will Become as Pervasive as Email
This prediction describes a beautiful pipeline of the radiologist's work.
By the time the doctor opens the study, everything has already been segmented, described, and measured, and a preliminary report has been generated, because everything is deployed in the cloud. Any of these measurements can be instantly adjusted with a few clicks or a voice command.
In theory, this workflow already exists in some capacity, but not without challenges:
There is still a lot of confusion around standardizing the format for storing ML service results. Right now, 100% of integrations use the combination Secondary Capture (SC) + Structured Report (SR). The issue with SC is that it is a standalone RGB image with the markup burned in, an exact replica of the original study. This makes it possible to produce beautiful pictures, but it complicates communication between a medical facility and an AI service and requires expensive storage. There are other, more straightforward ways to store various kinds of markup (classification, segmentation, detection) in DICOM format, and Python libraries make working with them easier. Unfortunately, nobody really wants to bother with this yet, but I believe the best is yet to come.

An alternative is to use platforms like deepc, which handle the integration between hospitals and AI vendors themselves. In that case the platform generates all reports in the required format, and the developer only needs to supply the results in JSON.
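For illustration, here is a sketch of the kind of JSON payload a vendor might hand to such a platform instead of producing DICOM SC/SR itself. The schema below is a hypothetical example, not deepc's actual format.

```python
# A hypothetical detection-result payload for a hospital/AI integration
# platform; field names are illustrative only.
import json

result = {
    "study_uid": "1.2.840.113619.2.55.3",  # DICOM StudyInstanceUID
    "model_version": "2.4.1",
    "findings": [
        {
            "type": "detection",
            "label": "nodule",
            "probability": 0.91,
            # Bounding box in pixel coordinates of the referenced image
            "bbox": {"x": 412, "y": 198, "width": 64, "height": 58},
            "sop_instance_uid": "1.2.840.113619.2.55.3.604688119",
        }
    ],
}

print(json.dumps(result, indent=2))
```

A payload like this is trivial to produce and validate, which is exactly why shifting the DICOM work onto the platform is attractive for vendors.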
For some clients, local deployment on a physical machine inside the client's network perimeter is still a popular scheme. While it has some benefits (information security, data exchange speed), it also makes monitoring the AI system's quality more difficult and rules out a variety of trendy cloud interaction scenarios.
Highly Sensitive AI Will Reduce the Need for Human Image Interpretation
The idea of this scenario is simple: some studies can be processed automatically, without any involvement of a doctor. This is especially relevant for mass screening, where the overwhelming majority of studies contain no pathology and present little difficulty in interpretation.
Based on third-quarter results this year, our fluorography service processed over 60k studies in the "highly sensitive AI" scenario. With the conclusion "without pathology" generated automatically for 67% of the studies, the sensitivity of the service was 99.93%. Moreover, expert review confirmed an actual inconsistency in only 40% of those 0.07% of discrepancies. In other words, the service automatically and correctly routed over 40k studies into the "normal" category, while making about 15-20 mistakes.
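A back-of-the-envelope check of these numbers (assuming, as I read them, that the 0.07% of discrepancies is counted against all processed studies):

```python
# Rough arithmetic behind the fluorography figures above; the counts in the
# text are rounded, so these are approximations.
total_studies = 60_000
auto_normal_share = 0.67    # studies auto-reported "without pathology"
discrepancy_rate = 0.0007   # 100% - 99.93% sensitivity
confirmed_share = 0.40      # discrepancies confirmed on expert review

auto_normal = total_studies * auto_normal_share
flagged = total_studies * discrepancy_rate
confirmed_errors = flagged * confirmed_share

print(f"auto-classified as normal: ~{auto_normal:,.0f}")   # ~40,200
print(f"discrepancies flagged:     ~{flagged:.0f}")        # ~42
print(f"confirmed mistakes:        ~{confirmed_errors:.0f}")  # ~17, i.e. 15-20
```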
Although many challenges stand in the way of a full-fledged implementation, the practical prospects of autonomous scenarios have clearly moved beyond science fiction. We still need to push sensitivity even higher (primarily by handling a wide range of complex cases and rare pathologies) and to work out what to do in the event of a developer error.
LLMs Will Transform Patients' Understanding of Radiology
This is perhaps the only application of LLMs in radiology that I currently consider clearly legitimate. Every time I had an MRI of my knee, ankle, or lower back, I had to google what I had and what to do about it. Ideally, of course, a doctor should explain this, but that is not always feasible, and besides, inquisitive patients often want to double-check everything for themselves. "Translation" from medicalese into plain language is, in general, a promising direction.
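As a sketch of how simple such a "translator" could be, here is one possible backend using the OpenAI Python client; the model name and prompt are illustrative, and any capable instruction-following model would do.

```python
# A minimal "medical -> plain language" translation sketch. Requires an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def explain_report(report_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whatever model is available
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the radiology report below in plain language for "
                    "a patient. Do not add diagnoses or treatment advice that "
                    "is not in the report."
                ),
            },
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

print(explain_report("MRI of the knee: grade II signal in the medial meniscus."))
```

The guardrail in the system prompt matters: the model should rephrase what the report says, not speculate beyond it.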
Multimodal AI Will Discover New Uses for Diagnostic Images
I truly believe in this one, though perhaps it is just developer bias speaking: I have wanted to work with multimodal data for a long time (a straightforward example was given above: aggregating the text report on the study, the patient's medical history, and the notes from the patient's appointments). The "image plus doctor's report" pair is the closest thing we actually have in sufficient quantities, but that alone will not get you very far. There is also a viewpoint that an AI that describes only the visual data, without leaning on a priori information, is not such a bad thing. Nevertheless, the expectation is that multimodality will let X-ray AI advance to the next level.
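To make the multimodal idea concrete, here is a minimal late-fusion sketch: embed the image and the accompanying text separately, concatenate, and classify. The dimensions and encoder stubs are hypothetical placeholders rather than a production design.

```python
# A toy late-fusion model: image and text embeddings are projected into a
# shared space, concatenated, and fed to a classification head.
import torch
import torch.nn as nn

class LateFusionModel(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, n_classes=2):
        super().__init__()
        # Stand-ins for outputs of a real image backbone and text encoder
        self.img_proj = nn.Linear(img_dim, 256)
        self.txt_proj = nn.Linear(txt_dim, 256)
        self.head = nn.Linear(512, n_classes)

    def forward(self, img_emb, txt_emb):
        fused = torch.cat([self.img_proj(img_emb), self.txt_proj(txt_emb)], dim=-1)
        return self.head(torch.relu(fused))

model = LateFusionModel()
logits = model(torch.randn(4, 512), torch.randn(4, 768))  # batch of 4 studies
print(logits.shape)  # torch.Size([4, 2])
```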
Online Image Exchange Will Reduce Health Care Costs by over $200 Million Annually
Sounds plausible. I am not an expert here, so I will not comment.
Reformed Regulations Will Accelerate AI-based Improvements in Care Delivery
Currently, updating a medical device's registration certificate takes at least six months. In other words, every neural network update, from retraining to changing the report format, must be accompanied by a protracted and laborious certification procedure. Everyone in the business understands that something has to change. Besides the Russian set of state standards on AI in medicine, the FDA has proposed a document specifically addressing this issue. What remains is the small matter of learning to update versions quickly in practice. The Moscow experiment has also been a positive experience here.
I also want to emphasize how crucial transparency is to this process. The likelihood of critical errors during algorithm version updates drops dramatically when there is comprehensive documentation describing the AI systems themselves, the testing procedure and outcomes, the retraining process, and the datasets used. As part of our efforts toward ISO 13485 compliance, we have updated our procedure for recording system testing of new versions. Releases fall into two categories: minor and major. Minor versions (which do not alter quality metrics) ship with a release checklist, while major versions require a technical specification and a mandatory testing report.
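The minor/major rule can be expressed almost literally in code; the tolerance and metric names below are illustrative, not our actual ISO 13485 procedure.

```python
# A sketch of release gating: if any quality metric moves beyond tolerance,
# the release is major and requires the full documentation package.
def classify_release(old_metrics: dict, new_metrics: dict, tol: float = 1e-3) -> str:
    """Return 'minor' if no quality metric changed beyond tolerance, else 'major'."""
    for name, old_value in old_metrics.items():
        if abs(new_metrics.get(name, 0.0) - old_value) > tol:
            return "major"  # technical specification + testing report required
    return "minor"          # release checklist is sufficient

print(classify_release({"sensitivity": 0.9993, "specificity": 0.94},
                       {"sensitivity": 0.9993, "specificity": 0.94}))  # minor
print(classify_release({"sensitivity": 0.9993, "specificity": 0.94},
                       {"sensitivity": 0.9995, "specificity": 0.95}))  # major
```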
A Widely Available Petabyte-scale Imaging Database Will Unleash Unbiased AI
Over the company's lifetime we have already collected a quarter of a petabyte of data. We are now considering what to do with all of it, whether we really need all of the data, and what storage policies to put in place so as not to go broke every month. Conversations about some crazy pretrain keep coming up, though in reality there is usually not enough time or compute (right now I am trying to train DINO on our full volume of unlabeled mammography data).
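For the curious, here is a heavily simplified DINO-style training step on unlabeled images; the backbone, projection size, and hyperparameters are placeholders, and the real recipe (multi-crop, schedules, and so on) is in the DINO paper.

```python
# A toy DINO-style self-supervised step: student and teacher see two augmented
# views of the same unlabeled image; the teacher is an EMA of the student.
import copy
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

student = resnet18(num_classes=256)   # projection dim is arbitrary here
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad = False
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
center = torch.zeros(256)             # running center for teacher outputs

def dino_step(view1, view2, t_student=0.1, t_teacher=0.04, m=0.996):
    global center
    with torch.no_grad():
        t_out = teacher(view1)
        targets = F.softmax((t_out - center) / t_teacher, dim=-1)
        center = 0.9 * center + 0.1 * t_out.mean(dim=0)  # update center
    s_out = student(view2)
    loss = -(targets * F.log_softmax(s_out / t_student, dim=-1)).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():             # EMA update of the teacher
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(m).add_(ps, alpha=1 - m)
    return loss.item()

# Random tensors stand in for two augmented grayscale-to-RGB mammography crops
print(dino_step(torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)))
```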
Flexible and Collaborative Academic Organizations Will Lead AI Innovation
I wholeheartedly agree with the point about the importance of interdisciplinary teams. There is now a doctor on every machine learning team who takes an active part in the work: answering questions, monitoring the workflow, proposing hypotheses, and shaping the markup process.
However, I find it harder to believe in ethicists, economists, and philosophers joining these teams, or in teams led by doctors. Development itself is currently driven by ML competencies with advisory support from medicine; those specialists are needed instead to support implementation, to change the regulatory framework, and for other related tasks.
As for the role of universities in the field's development, I am not sure. My home university, NYU, is of course one example: it has a wealth of data, experts, computing power, and opportunities for collaboration with medical organizations. However, I am confident that commercial companies have played and will continue to play a major role in the actual adoption of AI in radiology.