Bad data tools are dangerous, so how do you spot them?
Navigating the potential of data analytics in healthcare requires careful consideration. While analytics holds transformative promise, cautionary tales, such as IBM’s Watson, underscore the need for mindful development and user confidence.
At AphA, we know that data is one of the most powerful resources the NHS has. And we hope we’ve been able to persuade you of the same. That in the hands of professional analytics teams, it can improve outcomes, help you make better spending decisions and ultimately help prevent disease among the populations you serve.
But as important as it is to recognise the potential of these innovations, we must also be realistic about their limitations and their dangers, particularly as terms like artificial intelligence, machine learning and robotic process automation become commonplace.
This tech may be grabbing headlines, but it’s still very much in its infancy. As it becomes more and more embedded in our industry, we need to make sure it’s developed and used with care.
Navigating healthcare analytics
For better or worse, the data analytics industry already has an abundance of cautionary tales with which to guide us.
You might remember one of the earliest AI players in the healthcare field: IBM’s Watson, a natural language processor launched to great excitement back in 2010. Touted as a tool to democratise cutting-edge clinical knowledge, the firm poured millions into developing what were effectively virtual cancer and genomics experts.
By 2016, “Watson for Oncology” was being trialled in India ahead of launches in the U.S. and a host of other countries. But after just four years, IBM discontinued the tech. Two years later, it sold off its Watson Health division altogether.
So, what went wrong?
The idea was a noble one — an expert digital consultant that’s always up to speed on cutting-edge clinical practice, available to doctors anywhere in the world. Watson would recommend treatment options for real cancer patients, based on a database of hypothetical scenarios provided by experts at the Memorial Sloan Kettering Cancer Center in the U.S.
IBM hoped the tech would close the gap between doctors’ clinical knowledge and rapidly evolving treatment options for both rare and common cancers.
This clinical knowledge largely came from the “synthetic” patient data that MSK clinicians fed Watson for Oncology from 2012 onwards. In theory, this information could keep pace with guidelines that change faster than real-world treatment practice.
But in practice, many clinicians were less than impressed with Watson’s recommendations.
Back in 2017, STATnews likened it to a chess-playing “Mechanical Turk” that MSK doctors “laboriously” pumped with their own clinical preferences in lieu of actual patient data.
Instead of producing insights from real outcomes, the publication argued, Watson for Oncology was biased towards the clinical strategies preferred by the doctors training it: doctors at a single, cutting-edge institution that offers techniques, medicines and pathways that aren’t always available elsewhere.
For doctors who don’t follow U.S. treatment guidelines — including those at some Dutch and Danish hospitals who rejected the technology — or for those whose insurance systems didn’t fund the recommended treatments, Watson for Oncology was not a particularly useful decision-making tool.
And although its “synthetic” data didn’t include real patient populations, it was still biased by virtue of the patients MSK doctors were used to treating: a generally affluent U.S. population, whose social and economic experiences did not align with those of most patients around the world.
In 2018, a follow-up STATnews article claimed it often recommended “unsafe and incorrect” treatments.
IBM responded to some of these criticisms in 2019, when Watson Oncology and Genomics Chief Medical Officer Nathan Levitan stated that the technology “localises” its recommendations according to “drug availability, dosing units, and language translation.”
That same year, the authors of a Chinese study of Watson for Oncology called for IBM to “accelerate” this localisation. But by 2021, a meta-analysis showed it still offered treatment suggestions that were inappropriate either for the patient populations it was serving or for the health systems it was working in.
The study authors stated Watson still had the potential to be a useful complement to existing care, if it accounted for more local factors. But by this point, it was too late. IBM had already discontinued the tech.
So what lessons can we glean from this cautionary tale?
For one, headlines aren’t everything. Technology might sound promising, but completely fail to live up to its hype. Right now, the hype around AI couldn’t be bigger — it was even named “word of the year” by Collins Dictionary. But it’s no magic bullet for speeding up diagnosis or improving resource allocation.
Luckily, you don’t need to be Sherlock Holmes to reveal the secrets of the multitude of “Mechanical Turks” coming to the NHS. Your analysts will do that job! They’re your very own Baker Street Irregulars — meticulously examining clues, using analysis and their knowledge to help you and your organisation better understand the intricate details of this new-fangled tech.
The role of expertise in analytics
At AphA, we’re committed to ensuring high professional standards and maintaining an up-to-date body of knowledge for our members. This means they will have the best understanding of how to use the latest analytics tools — and the limits of what they can achieve. With their insight, you won’t be tricked by any Mechanical Turks — unlike Napoleon Bonaparte, Benjamin Franklin and, funnily enough, Charles Babbage.
“Good analytics requires serious expertise,” says AphA chief executive Rony Arafin. “Analytics professionals know better than anyone the risks of bad data and bad algorithms — as well as how to mitigate them. It’s a professional discipline that’s evolving at a rapid pace to keep on top of the latest developments in AI and other innovations.”
IBM’s Watson saga also underlines the importance of instilling confidence in analytics tools. Perhaps the researchers were right and the product really could have been a useful adjunct to standard practice in many countries. But overzealous promises and underwhelming results led to a raft of bad press and, in the end, executives lost faith in the product and it never really got off the ground.
These kinds of concerns aren’t limited to those selling analytics tools. Many patients will already be sceptical about the use of things like “big data” and AI in healthcare.
When data-driven recommendations are inaccurate or untrustworthy, they may question the reliability of the wider healthcare they receive, eroding confidence in medical institutions and hindering patient engagement and compliance.
Patients may be scared or suspicious about the safety and security of these tools — as well as the motivations organisations have for using them. Is this tech just a way of saving money? What if I don’t understand it? Will it mean I have less access to real-life healthcare professionals?
Data-driven health tools need to be designed and used with these concerns in mind to make sure they really improve access to care.
At St George’s, University of London, for example, researchers are currently investigating how different groups of patients feel about AI diabetic retinopathy screening.
The team are primarily creating a database of retinal images of patients of different ages and ethnic backgrounds to improve the accuracy of the technology. But they also want to make sure it’s as accessible as possible.
To do so, they’ve asked patients if they are concerned about its safety, about the privacy of their data or the fact the program is intended to reduce costs. Addressing these concerns will be vital to ensure patients still attend their appointments.
Making sure patients are comfortable with data analytics and tools like AI in healthcare will be crucial to ensuring equitable access as it becomes more common.