Machine learning & AI are coming to banks. Are you ready for the Risk?

By Frank Cummings

Machine learning and AI: Mitigate that Risk

We at AML Partners see machine-learning and AI systems (ML/AI) as a double-edged sword. And because of the Risk involved for every financial institution using these technologies, we emphasize Risk analysis of ML/AI systems in our Legitimacy Lifecycle, which applies a lifecycle lens to Risk Management and mitigation. 

ML/AI Risk in the Legitimacy Lifecycle

Unlike other lifecycle-management systems, AML Partners’ Legitimacy Lifecycle monitors and can mitigate Risk related to all human and human-caused activity within an institution. ML/AI systems are a prime example of what we describe as human-caused activity. And because the technologies are new, rapidly evolving, and largely opaque to end users, these ML/AI systems can pose substantial Risk to financial institutions that deploy them without informed Risk Management.

In AML Partners’ Legitimacy Lifecycle, ML/AI warrants its own Risk category because these systems can cause funds to be moved with--and without--the institution’s knowledge. Decision-makers at financial institutions can find themselves astride the razor’s edge with these new technologies.

Institutions want and need the benefits of ML/AI-driven processes. But they find themselves having to take vendors at their word regarding what exactly they are onboarding into their systems. Institutions can, however, mitigate the Risk of rogue ML/AI. Risk analysts can evaluate whether and how these ML/AI systems might learn to circumvent internal controls or create other problems that are not easily tracked.

Some institutions will probably choose to bar ML/AI systems from their networks in order to stamp out the Risk entirely. But that will not solve the problem: most newer finance-related systems include some ML/AI aspects even when that is not their primary function. And ML/AI systems show promise for elevating accuracy and efficiency in the work of financial institutions.

For every good ML/AI system available, you can expect five bad ones. For strong Risk Management, users of these systems need to get prepared, be both wary and skeptical, and Know the Risk. 

An Initial Approach to Due Diligence for ML/AI Tools

ML/AI tools clearly have allure, but risks loom. Institutions can, however, make due-diligence choices that lower the Risk related to ML/AI systems. Following are some ideas about how to tackle due diligence in this early stage of adoption:

Sensible Isolation

Institutions should consider placing the ML/AI system on an isolated network segment and forcing the app to communicate via API. Most ML/AI systems want direct access to the user’s production systems. Resist at all costs connecting the ML/AI system to your production system. If your ML/AI tools cannot communicate via an API interface with flat files--and most cannot yet--you have serious risks to mitigate. I would even recommend that you tell the vendor to come back when their systems pose fewer risks.
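As a rough sketch of what that separation can look like in practice, consider the Python snippet below. The gateway address, endpoint, and field names are hypothetical; the point is that only a narrow flat-file extract ever leaves production, and the vendor app is reached over an API on the isolated segment rather than being wired directly into production systems.

    import csv
    import requests  # assumes the vendor app exposes an HTTP API on the isolated segment

    GATEWAY_URL = "https://10.20.0.5/api/v1/jobs"  # hypothetical address on the isolated segment

    def export_flat_file(records, path):
        # Write only the fields the ML/AI tool needs; nothing else leaves production.
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["record_id", "address"])
            writer.writeheader()
            for r in records:
                writer.writerow({"record_id": r["record_id"], "address": r["address"]})

    def submit_flat_file(path):
        # Hand the flat file to the tool via its API; no direct database access is granted.
        with open(path, "rb") as f:
            resp = requests.post(GATEWAY_URL, files={"input": f}, timeout=30)
        resp.raise_for_status()
        return resp.json()  # e.g., a job reference; results come back as another flat file for review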

Screen Tools Like You Screen People

Each of these tech tools has a name, and we know what to do with names: We screen them. Start by screening the name just as you would screen a customer: check for adverse media and run an advanced Google search. And let’s not forget to identify the Ultimate Beneficial Owners and apply the full screening treatment to them and to any other related parties identified.
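A minimal sketch of that idea appears below, again in Python. The party names are made up, and screen_name() is a stand-in for whichever adverse-media or sanctions screening service your institution already uses; the structure simply shows the vendor, its UBOs, and related parties all passing through the same screening loop.

    PARTIES = [
        "ExampleML Analytics Ltd",  # the tool/vendor name (hypothetical)
        "Jane Doe",                 # ultimate beneficial owner (hypothetical)
        "Acme Holdings LLC",        # related party (hypothetical)
    ]

    def screen_name(name: str) -> list[str]:
        # Placeholder: call your adverse-media / sanctions screening provider here,
        # e.g., hits = provider.search(name, categories=["adverse_media", "sanctions"])
        hits = []
        return hits

    for party in PARTIES:
        hits = screen_name(party)
        status = "REVIEW" if hits else "clear"
        print(f"{party}: {status} ({len(hits)} hit(s))")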

Conduct Due Diligence for Key Elements of ML/AI Applications

The following list is likely incomplete, but it’s a solid starting point. An institution that applies Risk Management analysis to these elements will have a good start on due diligence of its ML/AI systems. (A sketch of capturing these answers as a structured record follows the list.)

  1. Source model: Identifying and understanding who built the source model or algorithm is key. Users need to know both who built the model and who trained it. Everything and everyone involved need to be screened for adverse media.
  2. Number of nodes: The more nodes (i.e., the more functions) an ML/AI application has, the greater the risk. Think of a node as performing one action or a small set of actions to complete one function. If the purpose of the ML/AI tool is to fix address information, why would it have 20 nodes? Strive to understand the relationship between the number of nodes in a tool and the work you expect it to perform.
  3. Purpose of each node: Fully describe all the functions of each node in the system. Do they make sense in the context of the work required?
  4. Training method for each node: Which of the main training methods was used--supervised, unsupervised, or responsive? How the ML/AI was trained is just as important as the data with which it was trained. The greatest risk comes from an ML/AI app trained with the unsupervised method, which is also the easiest option. The supervised training method (which is difficult to accomplish) and the responsive training method (which verifies learning via human input) require more work but substantially decrease the risk of unintended learning.
  5. Training data source: What was the source of the training data for each node? What implications for learning might the training data source have relative to your expectation of the learning needed to accomplish your tasks?
  6. Purpose of each rule for each node: Your ML/AI tool will come with rules embedded by the vendor that direct the algorithm how to process the data it is given. You can mitigate risk by knowing the rules and verifying that they achieve the expected outcomes. 
  7. Input method: What method will feed data to the ML/AI app? Direct access poses by far the most risk.  Flat-file input poses the least risk.
  8. Output method: What is the output method of the ML/AI system? Similar to #7 above, if the ML/AI app requires direct access to update your production systems, this poses the most risk. A flat-file output poses the least risk.
  9. Integration points: How is the ML/AI app integrated into your systems? A file-drop-and-process approach poses the least risk. Direct integration via programming tools like “R” or “Python” adds more risk.
  10. Comprehensive scope of functions: Determine all functions possible with the ML/AI you purchase. Be clear that the ML/AI you purchase for a particular need might have additional functions that you do not want to activate. Plan exactly how to contain the ML/AI to perform only the function(s) you actively choose.
  11. To what systems does the ML/AI need access? Determine precisely which internal systems the ML/AI system will want to access. Understand why that access should or shouldn’t be granted.
  12. Can the ML/AI app operate behind an API Gateway? API Gateways are crucial to Risk Management in many of the systems used by financial institutions. Can your ML/AI systems function behind an API Gateway? Can all input and output operations be conducted via API from an isolated network segment? Input and output using flat files can greatly help mitigate risk. 
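One way to operationalize this list, as referenced above, is to capture the vendor’s answers to each element as a structured record, so reviews are repeatable and auditable. Below is a minimal sketch in Python; the field names and the example entry are illustrative assumptions, not a standard.

    from dataclasses import dataclass, field

    @dataclass
    class MLAIDueDiligence:
        # One record per ML/AI tool, mirroring the twelve elements above.
        tool_name: str
        source_model_builder: str          # 1. who built and who trained the model
        node_count: int                    # 2. node count vs. the work expected
        node_purposes: list[str]           # 3. what each node does
        training_methods: list[str]        # 4. supervised / unsupervised / responsive, per node
        training_data_sources: list[str]   # 5. provenance of the training data
        embedded_rules_reviewed: bool      # 6. vendor rules verified against expected outcomes
        input_method: str                  # 7. "flat_file" (least risk) vs. "direct_access"
        output_method: str                 # 8. same scale as input
        integration: str                   # 9. "file_drop" vs. direct R/Python integration
        all_functions_enumerated: bool     # 10. unused functions identified and disabled
        systems_access_requested: list[str] = field(default_factory=list)  # 11. internal systems requested
        operates_behind_api_gateway: bool = False                          # 12. flat-file I/O via API Gateway

    # Hypothetical example entry for a review file:
    record = MLAIDueDiligence(
        tool_name="ExampleML Address Fixer",
        source_model_builder="ExampleML Analytics Ltd",
        node_count=3,
        node_purposes=["parse address", "standardize format", "flag anomalies"],
        training_methods=["supervised"] * 3,
        training_data_sources=["licensed postal reference data"],
        embedded_rules_reviewed=True,
        input_method="flat_file",
        output_method="flat_file",
        integration="file_drop",
        all_functions_enumerated=True,
        systems_access_requested=[],
        operates_behind_api_gateway=True,
    )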

Leverage Powerful ML/AI--But Mitigate the Risk

Clearly, machine learning and artificial intelligence will enable major leaps in our understanding of financial crimes, and they will help us do more good work faster than ever before. But these advancements will usher in completely different Risk Management challenges than we have seen before. The arrival of rogue ML/AI systems could be devastating--unless you develop internal controls to mitigate their risk.  

For every good ML/AI system available, you can expect five bad ones. For strong Risk Management, users of these systems need to get prepared, be both wary and skeptical, and Know the Risk.


eKYC Golden Record and Perpetual KYC--Just the start of what a RegTech platform can achieve

AML Partners has been working with financial institutions on how platform technologies can power AML/KYC solutions that meet needs specific to each institution.

We are excited about capabilities for end-to-end AML/CTF, KYC-CDD, and various GRC needs. And our customers are using the RegTechONE platform in powerful new ways.

Perpetual KYC and eKYC Golden Record are two prime examples. If you'd like to learn more about how RegTechONE might transform accuracy and efficiency at your institution, please reach out to Jonathan C. Almeida.
