What’s Next For ChatGPT In Healthcare?
You have probably heard that ChatGPT has recently passed business, law and medical exams, qualified for a Level 3 coding engineer position at Google (with a $180K starting salary!), outperformed most students in microbiology and earned a passing grade on a 12th-grade AP Literature test.
Of course, it is better to take these results with a pinch of salt. Although the algorithm did reasonably well on the USMLE, it was not a complete assessment: all questions requiring visual interpretation were removed.
Pinch of salt or not, it is obvious how far the capabilities of these large language models have come and how quickly they are expanding. So what's next? What will the new reality with smart AI at our fingertips look like?
1. AI tools will not be credited as authors in scientific publications
For now, large language models will not become co-authors of scientific publications, as they cannot take responsibility for their work. That is the ruling by Springer, the world's largest academic publisher.
That doesn't mean such tools won't be used, so we should move toward a practice in which authors declare which AI tools were used, and exactly how, in the methods or acknowledgements sections of their publications.
2. Medical alternatives to ChatGPT will arrive
As large language models develop, we will soon see versions designed specifically for medical use. Google's Med-PaLM is an early example, and I expect that not only will Google make Bard available for public use, but it will also develop a medical version of it. Similarly, a medical ChatGPT is on the horizon.
These will become game-changers. Trained on verified, accurate medical data, such algorithms will offer enormous assistance for healthcare workers in a number of ways, from checking suspicious symptoms to crafting easy-to-understand info material on the most common problems in any given practice.
3. Doctors will see mayhem as patients start arriving with info from ChatGPT
Physicians have had their share of trouble with patients using Google in recent years; after all, not every headache is caused by a brain tumour, even if the first few pages of search results suggest otherwise.
Now they will face a new difficulty: patients arriving with information provided by large language models, which may or may not be correct or relevant. This will require a new kind of empathy, and a new skill set for helping patients make proper sense of such chatbots and understand their limitations.
4. Where are your sources and references, ChatGPT?
In medicine, information is worth little unless there is a source we can verify. Large language models change this forever: building their answers on billions and trillions of pages of diverse information, they simply cannot list their sources. In some cases, each word in a sentence comes from somewhere else.
Thus, medical professionals will increasingly become the gatekeepers of reliable information.
5. Using chatbots will become part of the (medical) curriculum
This is an immediate necessity: we don't have years to decide whether to include these algorithms in the study material, as most students already use them daily. At Hungary's Semmelweis University, it is already part of the training, and I'm sure this is, or will very soon be, the case at most universities around the world. It is extremely important to react to this phenomenon and teach students to use AI in a smart, critical and responsible manner.
A connected issue: written tests and assignments will also need to change so that they cannot be easily completed by these algorithms alone. Essays are probably a thing of the past.
6. We will see the first healthcare company implement it in practice this year
I bet the first to break ground will be a reasonably fancy healthcare practice aiming to exploit all the marketing benefits of using such an advanced tool. I would not be at all surprised to hear about it this year, and the most likely use cases will be something like community outreach or crafting patient-facing messages with ChatGPT.
Everyone is looking for an angle now
Such AI tools present brand-new challenges for everyone, from medical associations to regulatory bodies, from universities to companies. No wonder there is quite a bit of confusion around these solutions, their potential use cases and the legal, ethical and technical implications of implementing them. We will keep a close eye on how this field matures and continue to cover it.