Identity fraud attacks using AI are fooling biometric security systems

  • Deepfake selfies can now bypass traditional verification systems
  • Fraudsters are exploiting AI for synthetic identity creation
  • Organizations must adopt advanced behavior-based detection methods

The latest Global Identity Fraud Report by AU10TIX reveals a new wave of identity fraud, largely driven by the industrialization of AI-based attacks.

Drawing on millions of transactions analyzed from July through September 2024, the report shows how digital platforms across sectors, particularly social media, payments, and crypto, are facing unprecedented challenges.

Fraud tactics have evolved from simple document forgeries to sophisticated synthetic identities, deepfake images, and automated bots that can bypass conventional verification systems.

Election-driven surge in social media bot attacks

Social media platforms experienced a dramatic escalation in automated bot attacks in the lead-up to the 2024 US presidential election. The report reveals that social media attacks accounted for 28% of all fraud attempts in Q3 2024, a notable jump from only 3% in Q1.

These attacks focus on disinformation and the manipulation of public opinion on a large scale. AU10TIX says these bot-driven disinformation campaigns employ advanced Generative AI (GenAI) elements to avoid detection, an innovation that has enabled attackers to scale their operations while evading traditional verification systems.

The GenAI-powered attacks began escalating in March 2024, peaked in September, and are believed to be designed to sway public perception by spreading false narratives and inflammatory content.

One of the most striking discoveries in the report involves the emergence of 100% deepfake synthetic selfies: hyper-realistic images created to mimic authentic facial features with the intention of bypassing verification systems.

Traditionally, selfies were considered a reliable method for biometric authentication, as the technology needed to convincingly fake a facial image was beyond the reach of most fraudsters.

AU10TIX highlights that these synthetic selfies pose a unique challenge to traditional KYC (Know Your Customer) procedures. The shift suggests that, moving forward, organizations relying solely on facial matching technology may need to re-evaluate and bolster their detection methods.

Furthermore, fraudsters are increasingly using AI to generate variations of synthetic identities with the help of “image template” attacks. These involve manipulating a single ID template to create multiple unique identities, each with randomized photo elements, document numbers, and other personal identifiers, allowing attackers to spin up fraudulent accounts across platforms at scale.
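As a rough illustration of how a verification pipeline might surface this kind of template reuse, the Python sketch below hashes only the layout-level features that stay constant when photos and document numbers are randomized, then flags any fingerprint shared by an implausible number of applicants. The feature names, threshold, and overall approach are illustrative assumptions, not AU10TIX's actual methodology.

```python
import hashlib
from collections import defaultdict

def template_fingerprint(features: dict) -> str:
    """Hash only the layout attributes that stay constant when a fraudster
    randomizes photos, names, and document numbers (hypothetical fields)."""
    stable_keys = ("field_positions", "font_signature", "background_pattern")
    stable = "|".join(str(features.get(key)) for key in stable_keys)
    return hashlib.sha256(stable.encode()).hexdigest()

def flag_template_reuse(submissions: list, max_reuse: int = 3) -> set:
    """Return fingerprints shared by more applicants than is plausible."""
    applicants_by_fp = defaultdict(set)
    for sub in submissions:
        applicants_by_fp[template_fingerprint(sub["features"])].add(sub["applicant_id"])
    return {fp for fp, ids in applicants_by_fp.items() if len(ids) > max_reuse}

# Five "different" applicants whose documents were built from one template
submissions = [
    {"applicant_id": f"user_{i}",
     "features": {"field_positions": [(10, 20), (10, 60), (10, 100)],
                  "font_signature": "helvetica-9pt",
                  "background_pattern": "guilloche_v2"}}
    for i in range(5)
]
print(flag_template_reuse(submissions))  # prints one suspicious fingerprint
```

In practice, vendors would combine signals like this with image forensics and liveness checks rather than relying on a single layout fingerprint.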

In the payments sector, the fraud rate declined in Q3, falling from 52% in Q2 to 39%, progress that AU10TIX credits to increased regulatory oversight and law enforcement interventions. Despite the reduction in direct attacks, however, the payments industry remains the most frequently targeted sector, and many fraudsters, deterred by the heightened security, have redirected their efforts toward the crypto market, which accounted for 31% of all attacks in Q3.

AU10TIX recommends that organizations move beyond traditional document-based verification methods. One critical recommendation is adopting behavior-based detection systems that go deeper than standard identity checks. By analyzing patterns in user behavior, such as login routines, traffic sources, and other unique behavioral cues, companies can identify anomalies that indicate potentially fraudulent activity.
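To make the idea concrete, here is a minimal sketch of behavior-based anomaly scoring, assuming a stored profile of each user's typical login hours, countries, and device fingerprints. The fields, weights, and threshold are invented for illustration and are not drawn from the report.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    usual_hours: set = field(default_factory=set)      # hours (0-23) the user normally logs in
    usual_countries: set = field(default_factory=set)  # countries seen in past sessions
    known_devices: set = field(default_factory=set)    # device fingerprints seen before

def anomaly_score(profile: UserProfile, login: dict) -> float:
    """Score a login event from 0.0 (matches the profile) to 1.0 (highly anomalous)."""
    score = 0.0
    if login["hour"] not in profile.usual_hours:
        score += 0.3   # unusual time of day for this user
    if login["country"] not in profile.usual_countries:
        score += 0.4   # unfamiliar traffic source
    if login["device"] not in profile.known_devices:
        score += 0.3   # device fingerprint never seen before
    return round(score, 2)

# Hypothetical profile built from past sessions, then a suspicious login event
profile = UserProfile(usual_hours={8, 9, 18}, usual_countries={"US"},
                      known_devices={"dev-a1b2"})
event = {"hour": 3, "country": "XX", "device": "dev-zz99"}
if anomaly_score(profile, event) >= 0.7:
    print("Anomalous login: trigger step-up verification")
```

A real deployment would learn these profiles continuously and feed the score into risk engines alongside document and biometric checks, but the principle is the same: flag behavior that deviates from the user's established pattern.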

“Fraudsters are evolving faster than ever, leveraging AI to scale and execute their attacks, especially in the social media and payments sectors,” said Dan Yerushalmi, CEO of AU10TIX.

“While companies are using AI to bolster security, criminals are weaponizing the same technology to create synthetic selfies and fake documents, making detection almost impossible.”

Efosa Udinmwen
Freelance Journalist

