The Role of AI in AML and KYC

As financial regulations become more stringent and financial crime grows more complex, institutions are increasingly turning to Artificial Intelligence (AI) to enhance their Anti-Money Laundering (AML) and Know Your Customer (KYC) processes. AI models can improve the speed and accuracy of compliance efforts, but as the recent Bunq court case demonstrated, caution is necessary when relying on AI without proper oversight.

AI has the potential to transform compliance, but oversight, auditability, and explainability remain crucial. Below I’ll explore several AI models used in AML and KYC, along with the benefits of combining these models.

AI Models: The Building Blocks of Modern Compliance

AI offers a wide array of models that can each serve specific roles in compliance processes. Below are the most commonly used models that I’ve come across in my review:

1. Machine Learning (ML) Models

Machine learning models are the most commonly used for transaction monitoring and risk assessment. They learn from historical data to detect suspicious behaviour in real time, which makes them well suited to identifying money laundering activities such as structured deposits and withdrawals.
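As a concrete illustration, here is a minimal sketch of what such a model might look like in Python using scikit-learn. The file name, feature names, and alert threshold are purely illustrative assumptions, not a description of any particular institution’s setup.

```python
# Minimal sketch: a supervised model trained on historical, analyst-labelled transactions.
# Feature names, file name, and threshold are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per transaction, 'is_suspicious' labelled by analysts.
history = pd.read_csv("labelled_transactions.csv")
features = ["amount", "tx_per_day", "cross_border", "new_beneficiary", "cash_ratio"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["is_suspicious"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

# Score new activity; anything above the threshold goes to a human review queue.
alerts = X_test[model.predict_proba(X_test)[:, 1] > 0.8]
```

In practice the features and threshold would be tuned and documented so that reviewers can see why a transaction was flagged.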

Challenges:

  • Data dependency: ML models require high-quality, consistent data for accurate results.
  • False positives: Without human intervention and model maintenance, ML models can flag too many legitimate transactions, which wastes time and resources on unnecessary reviews.

2. Graph Neural Networks (GNNs)

Graph Neural Networks (GNNs) are used for mapping out complex relationships between entities, such as customers, accounts, and companies. They can reveal hidden links between accounts that indicate broader money laundering or fraud networks.
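For illustration, a minimal GNN sketch using PyTorch Geometric might look something like the following. It assumes node features and an edge list have already been extracted from the institution’s entity data; the layer sizes and class count are arbitrary assumptions.

```python
# Minimal sketch of a two-layer graph convolutional network over an entity graph,
# using PyTorch Geometric. Nodes are customers/accounts/companies; edges are relationships.
import torch
from torch_geometric.nn import GCNConv

class EntityRiskGNN(torch.nn.Module):
    def __init__(self, num_features: int, hidden: int = 32, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        # Each layer aggregates information from a node's neighbours,
        # so risk signals can propagate across linked accounts.
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

# x: [num_nodes, num_features] feature matrix, edge_index: [2, num_edges] relationship links.
# model = EntityRiskGNN(num_features=16)
# scores = model(x, edge_index)
```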

Challenges:

  • Implementation complexity: GNNs require significant technical expertise and sometimes substantial computational power, making them resource-intensive.
  • Data integration: Successfully implementing GNNs requires data from various systems, which can be difficult to consolidate, especially transactional data.

3. Retrieval-Augmented Generation (RAG) Models

RAG models are useful for navigating local regulations and conducting negative media screening. They retrieve up-to-date information, such as regulatory requirements or media reports, to help ensure compliance with local rules wherever you operate.
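A stripped-down sketch of the retrieval step might look like this. The embed_text and generate functions are hypothetical stand-ins for whichever embedding model and language model an institution actually uses; nothing here reflects a specific product.

```python
# Minimal RAG sketch: retrieve the most relevant regulatory passages for a query,
# then pass them to a language model as context. embed_text() and generate() are
# hypothetical placeholders for the institution's own embedding model and LLM.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, passages: list[str], embed_text, top_k: int = 3) -> list[str]:
    q_vec = embed_text(query)
    ranked = sorted(passages, key=lambda p: cosine_similarity(q_vec, embed_text(p)), reverse=True)
    return ranked[:top_k]

def answer_with_context(query: str, passages: list[str], embed_text, generate) -> str:
    context = "\n".join(retrieve(query, passages, embed_text))
    prompt = f"Using only the regulatory text below, answer the question.\n\n{context}\n\nQ: {query}"
    return generate(prompt)
```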

Challenges:

  • Data freshness: RAG models rely on up-to-date data sources; if those sources are outdated or inaccurate, for example because they have to be maintained manually, the system could produce non-compliant decisions.
  • Customisation: Tailoring the model for different regions or jurisdictions requires additional effort and expertise, as these models can sometimes struggle with a regulator’s interpretation of the rules.

4. Vector-Based Models

These models represent data in multiple dimensions, allowing for more nuanced comparisons between data points. Vector-based models are particularly useful for detecting subtle changes in behaviour like a customer suddenly starting to make high-risk transactions after a history of low activity.
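One simple way to picture this is to compare a customer’s long-term behaviour vector with their recent activity. The features below are illustrative assumptions; any embedding of transaction behaviour could be used instead.

```python
# Minimal sketch: compare a customer's historical behaviour vector against their
# recent activity vector. Feature construction here is an illustrative assumption.
import numpy as np

def behaviour_vector(transactions: np.ndarray) -> np.ndarray:
    # transactions: rows of [amount, is_cross_border, is_cash, hour_of_day]
    return transactions.mean(axis=0)

def drift_score(historical: np.ndarray, recent: np.ndarray) -> float:
    h, r = behaviour_vector(historical), behaviour_vector(recent)
    cosine = np.dot(h, r) / (np.linalg.norm(h) * np.linalg.norm(r) + 1e-9)
    return 1.0 - float(cosine)  # 0 = unchanged behaviour, closer to 1 = sharp change

# A customer with a long history of small domestic payments who suddenly makes
# large cross-border transfers would produce a noticeably higher drift score.
```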

Challenges:

  • Interpretability: Vector-based models can be difficult to interpret, making it hard to explain why certain transactions or behaviours are flagged. This means additional expertise is generally needed.
  • Resource intensity: Scaling up these models to handle larger datasets is generally resource-intensive in terms of computational power and expertise.

5. Anomaly Detection Models

Anomaly detection models focus on finding deviations from established norms. These can help institutions detect new or emerging types of financial crime.
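A minimal sketch using scikit-learn’s Isolation Forest shows the idea; the feature names, file name, and contamination rate are illustrative assumptions.

```python
# Minimal sketch: an unsupervised Isolation Forest flags transactions that deviate
# from the bulk of observed behaviour, with no labels required.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("recent_transactions.csv")
features = ["amount", "tx_per_day", "cross_border", "new_beneficiary"]

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(transactions[features])

# -1 marks an outlier; these go to a human analyst rather than being auto-actioned.
transactions["is_outlier"] = detector.predict(transactions[features]) == -1
outliers = transactions[transactions["is_outlier"]]
```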

Challenges:

  • Defining ‘normal’ behaviour: Customer behaviours can vary widely, making it difficult to establish baselines without multiple models. This can result in too many false positives, which in turn requires additional technical resources to fine-tune the model.


Combining AI Models: A Comprehensive Approach

After reviewing the strengths and weaknesses of a number of models, it’s clear that combining AI models offers the most effective compliance solution. By leveraging the strengths of each model, institutions can create a multi-layered approach that supports thorough, efficient, and accurate monitoring.

A Combined Approach in Action

Consider a scenario where an institution monitors a high volume of transactions across multiple countries (a minimal orchestration sketch follows the list):

  • A machine learning model can flag potentially suspicious transactions based on historical patterns.
  • A Graph Neural Network (GNN) can then be used to analyse the flagged transactions in detail, potentially uncovering hidden connections between accounts or individuals that may suggest money laundering.
  • A RAG model can then retrieve real-time local regulatory data to help ensure compliance in each jurisdiction.
  • Finally, we could use an anomaly detection model to provide an additional layer of security by identifying any outlier behaviours that weren’t captured by the previous models.
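The orchestration sketch below shows how these stages could hand off to one another. Every function and method name is a hypothetical placeholder rather than a real library, and each resulting case is still routed to a human reviewer.

```python
# Minimal orchestration sketch for the layered approach described above.
# ml_model, gnn_model, rag_lookup, and anomaly_model are hypothetical placeholders.
def review_pipeline(transactions, ml_model, gnn_model, rag_lookup, anomaly_model):
    cases = []
    flagged = [t for t in transactions if ml_model.score(t) > 0.8]       # 1. ML screen
    for tx in flagged:
        network = gnn_model.expand_relationships(tx)                     # 2. GNN network context
        rules = rag_lookup(jurisdiction=tx["country"])                   # 3. local regulatory data
        cases.append({"tx": tx, "network": network, "rules": rules})
    residual = anomaly_model.outliers(transactions)                      # 4. anomaly sweep
    cases.extend({"tx": tx, "network": None, "rules": None} for tx in residual)
    return cases  # every case still goes to a human reviewer; de-duplication omitted for brevity
```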

Using this kind of approach, each model works to complement the others, making the process more holistic. A layered system like this can offer more comprehensive coverage, but it also comes with challenges, mainly the need for the institution to be able to explain comprehensively how each element works in order to demonstrate transparency, auditability, and human oversight.


The Bunq Case

The Bunq case provides a powerful example of how AI can enhance AML compliance when implemented responsibly. Bunq, a Dutch neobank, challenged the Dutch Central Bank’s (DNB) decision to prohibit the use of their AI-based AML system. Bunq argued that its AI system was more effective than the traditional rule-based approach. The court ruled in Bunq’s favour, allowing them to use AI, provided that the system adheres to regulatory standards.

However, the case also highlighted that AI systems must be auditable and explainable. While Bunq’s AI-driven system was permitted, it had to meet the same compliance standards as traditional methods. This means that financial institutions can’t let AI make fully autonomous decisions without oversight—they need to be able to demonstrate how and why the AI flagged certain transactions. The need for transparency is critical when it comes to proving compliance during regulatory audits.


AI is a Support Tool, Not a Replacement

While AI could offer significant benefits for improving AML and KYC processes, human oversight remains essential to ensure these systems are used responsibly and in accordance with the law and regulations. AI models - no matter how advanced they appear - cannot replace human judgement, particularly in high-risk scenarios involving PEPs, sanctions, or negative media.

Key reasons for maintaining human oversight:

  1. AI systems can make mistakes: Even the best models can and do misinterpret data or miss critical context.
  2. Regulatory standards require explainability: Compliance teams need to understand and be able to explain why AI made certain decisions, especially when transactions are flagged as suspicious. Auditable models ensure that all decisions are traceable and transparent (a minimal audit-record sketch follows this list).
  3. False positives and negatives: While AI can reduce false positives, it won’t eliminate them. Human review is still needed to distinguish between legitimate and suspicious transactions.
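As a simple illustration of point 2, an auditable decision record could look something like the sketch below; the field names are illustrative assumptions rather than a regulatory schema.

```python
# Minimal sketch of an auditable decision record: every automated flag is stored with
# the inputs, model version, and reason codes a reviewer or auditor would need later.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FlagDecision:
    transaction_id: str
    model_name: str
    model_version: str
    score: float
    reason_codes: list[str]              # e.g. ["HIGH_VALUE", "NEW_BENEFICIARY"]
    reviewed_by: str | None = None       # filled in once a human analyst signs off
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = FlagDecision(
    transaction_id="TX-000123",          # hypothetical identifiers and values
    model_name="transaction_ml",
    model_version="2025.03",
    score=0.91,
    reason_codes=["HIGH_VALUE", "CROSS_BORDER"],
)
```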


Responsible AI Use for AML and KYC

AI has incredible potential to improve how financial institutions manage AML, KYC, PEP, sanctions, and negative media screening. Models like Machine Learning, Graph Neural Networks, RAG, Vector-Based, and Anomaly Detection can work together to create a robust and multi-layered compliance system.

However, the Bunq case reminds us all that while AI has the potential to improve efficiency and accuracy, human oversight remains essential. Institutions must ensure their AI systems are auditable and explainable, and that the final decision-making process involves human intervention when necessary.

In today’s ever more complex regulatory landscape, the key to success is balancing AI's capabilities with human expertise, ensuring that AI supports compliance teams rather than replaces them in their mission to combat financial crime.

It's fascinating to see how AI is being leveraged in the fight against financial crime. The intersection of technology and compliance is indeed a complex landscape to navigate, and your insights shed light on the nuances involved.

Utkarsh Srivastava

Payment Professional | Fintech | Payment Processing | Card Solutions | Business Transformation | Low & High Risk Industries | Public Speaker | Ex-Naukri | Ex-CCS

3mo

Financial regulations are indeed evolving rapidly, Daniel. Your insights on the intricate role of AI in combating financial crime are enlightening. It's crucial to navigate this landscape with a blend of AI and human oversight for effective risk management.

Dale Atkinson

Busy fighting stage IV cancer

3mo

Great article Daniel! And, I completely agree that human oversight is essential. FinCrime is becoming as much about understanding the data and tech as it is understanding the criminal mechanisms and regulations we’re all bound by.

Like
Reply
Rudhra kumar Thota

Data Scientist | Modelling | Compliance | Automation | FinTech | Finance Division

4mo

Daniel Smith, FICA How can financial institutions effectively balance the complexity and power of advanced AI models (like Graph Neural Networks or Vector-Based Models) with the regulatory requirement for transparency and auditability?

Like
Reply
Rob Cutler

Ex MLRO, Financial crime professional

4mo

Interesting article but I still wonder whether any of these technologies are infact true "AI". For example, can any of these systems learn from experiences and adapt their behaviours, can they make decisions based on the data they recieve and can they display creative capabilities. ie can they mimic human intelligence and perform cognitive functions?

Like
Reply

To view or add a comment, sign in

More articles by Daniel Smith, FICA

Insights from the community

Others also viewed

Explore topics