Deep fakes could supercharge health care's misinformation problem.

"Deepfakes" (a term that first emerged in 2017 to describe realistic photo, audio, video, and other forgeries generated with artificial intelligence) are becoming more mainstream. Recently, a startup called Metaphysic advanced to the final round of this season's America's Got Talent, competing for the $1 million prize by producing remarkable deepfakes of Simon Cowell and the other judges in real time. The judges were blown away watching performers who bear only the vaguest resemblance to them, a somewhat similar face and body shape, suddenly transform into their digital doppelgangers right before their eyes. For all the promise that artificial intelligence holds for health care, one of the industry's big fears is its potential to churn out more convincing misinformation.

Since the COVID-19 pandemic, more and more health care consumers have grown comfortable engaging with doctors and nurses through remote interactions, both video and audio. These consults are becoming an important complement to in-person visits.

Critical to that engagement is ensuring the interaction is authentic. As deepfakes proliferate, every such interaction will come into question: was it with the real patient or a fake? Video is already used as a tool for remote patient monitoring, and the authenticity of each session will be challenged as deepfake models become mainstream.

Why it matters: AI experts are warning that the technology used to create sophisticated false images, audio, and video, known as deepfakes, is getting so good it could soon become almost impossible to distinguish fact from fiction.

  • The COVID-19 pandemic laid bare the deadly stakes of health care misinformation, as false information on vaccines, treatments and masks flooded social media sites.
  • Deepfakes could make it even more challenging to react to emerging public health threats, secure patients' sensitive data or combat increasing cyberattacks on hospitals, experts told Axios.

The big picture: This technology is becoming better and more ubiquitous sooner than experts expected at a time when health information is being politicized and social media's already weak guardrails have been whittled down.

  • "Really this year, it has come to the forefront based on the explosive, explosive development of generative AI," said John Riggi, national adviser for cybersecurity and risk for the American Hospital Association.

State of play: The threat to health care appears to be theoretical for now, but the industry doesn't want to get caught flat-footed.

  • "We really need to be vigilant about it and try to get a hold of it now when it's still a bit nascent," said Chris Doss of RAND Corporation, who led a recent study on deepfakes in scientific communication published in Scientific Reports.
  • In September, AHA urged health systems to be vigilant about the emerging risk deepfakes pose to patient information and hospitals' cyber defenses.
  • "We do not want to play catch-up as we have, unfortunately, in the past with, for instance, ransomware attacks," Riggi said.

Among health care's major concerns with deepfakes:

Harder to stop misinformation: False images and audio that appear to come from a trusted source will make it harder to spread accurate health messages and will erode the public's confidence in legitimate sources.

  • Imagine the impact of a deepfake Anthony Fauci video telling people not to get vaccinated, for instance.
  • AI could enable disinformation to be automated and disseminated at scale. "That's the super-threat here," said Heather Lane, senior architect of the data science team for Athenahealth.

More convincing phishing: Phone calls and messages to patients appearing to come from their health insurer or doctor could be a tool for scammers to steal their financial or health information.

More effective cyberattacks: Similarly, a hacker could gain entry into a hospital's information systems by using synthetically generated audio of a known individual, such as the hospital's CEO, to call the organization's help desk for a new password.

The other side: Of course, health care is still very bullish on the upsides of generative AI — even including deepfakes.

  • Early work with ChatGPT has found it can offer patients more empathetic answers than doctors can.
  • Researchers have suggested that deepfakes could improve facial emotion recognition by AI and also create artificial patients to help in designing new molecules for treating disease.

The intrigue: The RAND study of how well individuals can identify deepfakes in scientific communication does little to allay fears about the technology.

  • Even those working in science were fooled by messaging in deepfake videos relaying climate information. And the more individuals were exposed to deepfakes, the worse they were at identifying them.
  • You might think "as deepfakes proliferate, people are going to get good at it just by being able to pick it out better with experience," Doss said. "Our study says that might not be true."
  • "In fact, the opposite might be true."
