Deepfakes could supercharge health care's misinformation problem.
“Deepfakes,” a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI), are becoming more mainstream. Recently, a startup called Metaphysic advanced to the final round of this season's America's Got Talent, competing for the $1 million prize, by producing remarkable real-time deepfakes of Simon Cowell and the other judges. The judges have been blown away watching performers who bear only the vaguest resemblance to them, a roughly similar face and body shape, suddenly transform into their digital doppelgangers right before their eyes. For all the promise that artificial intelligence holds for health care, one of the industry's big fears is its potential to churn out more convincing misinformation.
Since COVID, more and more health care consumers have grown comfortable engaging with doctors and nurses through remote interactions, both video and audio. Video and audio consults are becoming an important complement to in-person visits.
Critical to that engagement is ensuring the interaction is authentic. Video is already used as a tool for remote patient monitoring, and as deepfake models proliferate and become mainstream, every one of these interactions will be open to question: was it with the real patient, or a fake?
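One way platforms could harden these interactions is to verify the device behind the stream rather than the face on it. The sketch below is a minimal, hypothetical illustration in Python: the clinic's server issues a per-session nonce, and the patient's enrolled app signs each media frame with a key provisioned at onboarding. The key handling and function names are assumptions for illustration, not any telehealth vendor's actual API.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: device attestation for a telehealth session.
# Assumes the patient's app and the clinic server share a per-device
# key established at enrollment (e.g., during in-person onboarding).

DEVICE_KEY = secrets.token_bytes(32)  # provisioned once, stored securely

def issue_challenge() -> bytes:
    """Server side: fresh nonce per session so signatures can't be replayed."""
    return secrets.token_bytes(16)

def sign_frame(device_key: bytes, nonce: bytes, frame: bytes) -> str:
    """Patient device: bind the session nonce to a hash of the media frame."""
    frame_digest = hashlib.sha256(frame).digest()
    return hmac.new(device_key, nonce + frame_digest, hashlib.sha256).hexdigest()

def verify_frame(device_key: bytes, nonce: bytes, frame: bytes, tag: str) -> bool:
    """Server side: constant-time comparison against the expected tag."""
    expected = sign_frame(device_key, nonce, frame)
    return hmac.compare_digest(expected, tag)

# Usage: the server issues a nonce, the enrolled device signs each
# outgoing frame (or keyframe), and the server rejects unsigned media.
nonce = issue_challenge()
frame = b"stand-in for encoded video frame bytes"
tag = sign_frame(DEVICE_KEY, nonce, frame)
assert verify_frame(DEVICE_KEY, nonce, frame, tag)
```

Note that this attests only to which device sent the stream, not to who is on camera, so it would complement liveness and identity checks rather than replace them.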
Why it matters: AI experts are warning that the technology used to create sophisticated false images, audio, and video, known as deepfakes, is getting so good it could soon become almost impossible to distinguish fact from fiction.
The big picture: This technology is getting better and more ubiquitous faster than experts expected, at a time when health information is being politicized and social media's already weak guardrails have been whittled down.
State of play: The threat to health care appears to be theoretical for now, but the industry doesn't want to get caught flat-footed.
Among health care's major concerns with deepfakes:
Harder to stop misinformation: False images and audio that appear to come from a trusted source will make it harder to spread accurate health messages and will erode the public's confidence in legitimate sources.
More convincing phishing: Phone calls and messages to patients appearing to come from their health insurer or doctor could be a tool for scammers to steal their financial or health information.
More effective cyberattacks: Similarly, a hacker could gain entry into a hospital's information systems by using synthetically generated audio of a known individual, such as the hospital's CEO, to call the organization's help desk for a new password (one minimal mitigation is sketched below).
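A standard defense takes voice out of the trust loop entirely: the help desk never resets a password on the strength of a call alone, and instead requires a short-lived one-time code delivered to the caller's registered device. Below is a minimal sketch, assuming a simple in-memory store; in practice this lives in the identity provider's MFA flow, and the names here are hypothetical.

```python
import secrets
import time

# Minimal sketch of out-of-band verification for help-desk password resets.
# Assumption: each employee has a registered device reachable on a separate
# channel (authenticator app, SMS, etc.); names here are hypothetical.

CODE_TTL_SECONDS = 300
_pending: dict[str, tuple[str, float]] = {}  # employee_id -> (code, expiry)

def start_reset(employee_id: str) -> str:
    """Generate a one-time code and (in a real system) push it to the
    employee's registered device -- never read it back over the phone."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[employee_id] = (code, time.time() + CODE_TTL_SECONDS)
    return code  # returned here only so the example is self-contained

def confirm_reset(employee_id: str, supplied_code: str) -> bool:
    """Help desk proceeds only if the caller can echo the code that was
    sent out of band; a cloned voice alone is not enough."""
    entry = _pending.pop(employee_id, None)
    if entry is None:
        return False
    code, expiry = entry
    return time.time() <= expiry and secrets.compare_digest(code, supplied_code)

# Usage: even a perfect audio deepfake of the CEO fails this check
# unless the attacker also controls the CEO's registered device.
sent = start_reset("ceo@example-hospital.org")
assert confirm_reset("ceo@example-hospital.org", sent)
assert not confirm_reset("ceo@example-hospital.org", "000000")
```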
The other side: Of course, health care is still very bullish on the upsides of generative AI — even including deepfakes.
The intrigue: A RAND study of how well individuals can identify deepfakes in scientific communication does little to allay fears about the technology.