
How Banks Can Mitigate Bias in their A.I. Applications for Anti-Money Laundering

By John Edison, Global Head of Financial Crime and Compliance Management Products, Oracle Financial Services

In recent years, artificial intelligence (A.I.) has gone mainstream. In fact, one survey from the World Economic Forum found that 85 percent of financial institutions had implemented A.I. in some form and 77 percent expected A.I. to be of high or very high overall importance to their businesses within two years.

One area where financial institutions see great potential to apply A.I. is their anti-money laundering (AML) programs. Banks are facing a fast-approaching reality: legacy rules-based AML systems have become antiquated. They lack the sophistication needed to spot the nuances of rapidly evolving criminal patterns and to keep up with changing consumer behaviors and products. As a result, financial institutions are experiencing high false-positive rates and low detection rates in their AML departments. This drains resources because institutions need more experienced, and more costly, compliance staff members to sort through these false positives.

Fortunately, intelligent technologies such as A.I. can analyze data more effectively across AML programs. There’s clear consensus that A.I. and machine learning have tremendous potential to deliver higher efficiencies for compliance programs and improve the effectiveness of AML and other financial crime programs, although concerns about their use still exist.

Specifically, there have been instances where machine learning (ML) models have made biased predictions, raising questions and concerns about fairness in banking. In fact, there’s a growing body of research on the fairness, privacy, interpretability, and trustworthiness of A.I. and ML models. This subject area, termed responsible A.I., is now broadly discussed and focuses on themes related to privacy, transparency, inclusiveness, accountability, and security.

Compliance officers at financial institutions have an ethical responsibility to consider how to reduce A.I. bias in their AML programs. Proactively doing so will help prevent customer mistrust, lost business opportunities, and reputational damage.

Where Bias Can Creep Into AML Programs

To understand how to avoid bias, institutions must first understand how bias can sneak into A.I. model workflows and at which stages. During the data sourcing and preparation stage, ML models rely on accurate and complete training data. However, many financial institutions’ business operations were set up before broad digitalization occurred, which means ML models sometimes use information that is recorded incorrectly, incompletely, or not at all. In fact, only a small stream of application data – about five to ten percent of the total – makes it through the pipeline and to the data lake for analysis, according to one study.
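
To make this concrete, here is a minimal Python sketch of the kind of completeness check a data science team might run before training. The field names, sample values, and 80 percent threshold are purely illustrative assumptions, not drawn from any particular institution's pipeline.

```python
import pandas as pd

# Hypothetical extract of KYC/transaction fields intended as AML model inputs.
applications = pd.DataFrame({
    "customer_id":       [1, 2, 3, 4, 5],
    "employment_status": ["employed", None, "self-employed", None, None],
    "country":           ["US", "US", None, "CA", "US"],
    "monthly_turnover":  [4200.0, None, 8100.0, None, 950.0],
})

# Illustrative cut-off: fields populated below this share are flagged
# for remediation before they are used to train a model.
COMPLETENESS_THRESHOLD = 0.80

completeness = applications.notna().mean()  # share of records populated per field
for field, share in completeness.items():
    flag = "OK" if share >= COMPLETENESS_THRESHOLD else "INCOMPLETE"
    print(f"{field:<18} {share:.0%}  {flag}")
```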

Bias also occurs when the ML training data is not representative of diversity. This occurred when Duke University researchers created a model that could take pixelated photos and create realistic high-resolution images of people. However, since white people were overrepresented in the data used to train the model, it failed to work for people of other races and ethnicities. It even turned a low-resolution photo of former President Barack Obama into a Caucasian man.
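
A simple way to surface this kind of representation gap is to compare group proportions in the training sample against the population the model will actually score. The sketch below assumes hypothetical group labels, counts, and reference shares; a real analysis would use the institution's own customer segmentation.

```python
import pandas as pd

# Hypothetical breakdown of the training sample versus the scored population.
training_counts = pd.Series({"group_a": 8200, "group_b": 1100, "group_c": 700})
population_share = pd.Series({"group_a": 0.55, "group_b": 0.25, "group_c": 0.20})

training_share = training_counts / training_counts.sum()

# A group that is badly under-represented relative to the scored population
# is a warning sign that the model may not generalize fairly to it.
report = pd.DataFrame({
    "training_share": training_share.round(3),
    "population_share": population_share,
    "ratio": (training_share / population_share).round(2),
})
print(report)
print(report[report["ratio"] < 0.5].index.tolist(), "appear under-represented")
```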

Bias can also creep into the feature selection stage. In an AML program, the Know Your Customer (KYC) stage is the most susceptible to biased model features, since it assesses individual people. Therefore, data science teams must be vigilant and make sure the attributes they use, such as employment status and net worth, do not inadvertently encode systemic bias into their models. Keep in mind that even seemingly innocuous location data like a postal code or country could serve as a proxy for data that is impermissible to consider, such as race or ethnicity.
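
One way to test whether a feature such as a postal code is acting as a proxy is to measure its statistical association with a protected attribute before training. The sketch below uses Cramér's V for two categorical variables; the column names and sample data are hypothetical, and the appropriate association test would depend on the feature types involved.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Strength of association between two categorical variables (0 to 1)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.values.sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Hypothetical KYC records: postal_code is a candidate model feature,
# ethnicity is a protected attribute that must not be encoded, even indirectly.
records = pd.DataFrame({
    "postal_code": ["10001", "10001", "10002", "10002", "10003", "10003"] * 50,
    "ethnicity":   ["a", "a", "b", "b", "c", "a"] * 50,
})

score = cramers_v(records["postal_code"], records["ethnicity"])
# A high association suggests the feature acts as a proxy and should be
# reviewed, transformed, or dropped before training.
print(f"postal_code vs ethnicity association: {score:.2f}")
```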

Lastly, financial institutions must recognize the human biases that can influence how AML professionals respond to A.I. model outputs. For example, it is the AML analyst or investigator who takes the information provided by the A.I. model and decides which alerts to investigate, which to combine into cases, and which to report to authorities. Like all humans, these professionals are susceptible to cognitive biases such as wishful thinking, societal influence, and fatigue, which can unintentionally sway decision-making as it relates to model predictions and outputs.

Steps for Reducing Bias in A.I.

Here are a few ways compliance and data science teams at financial institutions can work together to support responsible A.I. in their organization.

  • Ensure collaboration with data science teams: Clear and direct communication between compliance teams and data science teams is needed because compliance teams can provide guidance on the company values, principles, and regulatory guidelines that ML models should align with. Bias evaluation should also be built into the overall model success criteria, just as teams do with other metrics such as false positives/negatives and detection rates.
  • Call for auditability: The A.I. development and deployment process must prioritize full transparency and auditability, carefully tracking who made which modification to what model. This ensures there is always an accurate, complete log of model creation.
  • Prioritize interpretable models: Building interpretable A.I. models instead of black-box models also helps ensure full transparency. Interpretable models are preferable to explainable black-box models because the explanations generated for black-box models can be inconsistent across vendors, which leads to confusion among analysts. What’s more, these explanations can be difficult to decipher depending on the background and knowledge level of the analyst (a brief sketch of what an interpretable model looks like follows the next paragraph).

However, black-box models should still be used in cases where they will perform better than interpretable models. In these instances, the team should prioritize explanations that provide relevant context, such as the program’s strengths and weaknesses or the data used to arrive at a decision, to understand why an alternative decision was not chosen. Explanations are easier to understand and use when they are in a graphic form or in a pre-built natural language narrative that can be used in regulatory reports.
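
As a rough illustration of the distinction, the sketch below fits a small logistic regression, a model whose weights can be read and reported directly to an analyst or a regulator, on synthetic data. The feature names are hypothetical and the example is not a production transaction-monitoring model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, already-prepared features for a transaction-monitoring model.
feature_names = ["txn_amount_zscore", "cross_border_ratio", "cash_intensity"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

# An interpretable model: every coefficient can be reported as
# "this attribute pushed the risk score up or down by this much."
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:<22} weight = {coef:+.2f}")
```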

  • Continuously monitor and re-train model performance: Once a model is trained and deployed, it needs continuous monitoring, evaluation, and re-training to ensure bias-free performance. As customer behavior shifts, new products are released, or other systemic changes occur, a model can “drift” as relationships among the data change over time, causing its performance to degrade. If the models are not retrained periodically, biased or incorrect decisions can follow. Automating evaluation and re-training is key to continuous monitoring (a simple drift check is included in the sketch below, after the fairness example).
  • Assess fairness: Financial institutions must evaluate model performance on various population segments to be sure there isn’t a disparate impact on specific groups. For instance, institutions can build a risk score model to classify customers as high or medium risk and can cross-check the risk scores against sensitive attributes such as race, religion, zip code, or income to investigate the model for bias.

Let’s say risk scores for lower-income individuals are consistently higher than those of higher-income individuals. In this case, the financial institution should determine which features are driving the risk scores and whether those features truly represent risk. If a feature turns out to reflect different financial circumstances rather than inherent risk, it should be modified or removed to reduce model bias. In this scenario, for example, the institution might find that the higher scores for low-income customers are driven by the fact that they tend to spend their entire paycheck more quickly, rather than by genuinely risky money-movement patterns.
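
The sketch below illustrates both ideas in miniature: a population stability index (PSI) comparison between the score distribution at validation time and the current one as a simple drift signal, and a cross-check of risk scores by income band as a basic fairness probe. The data, the income bands, the synthetic score gap, and the 0.2 PSI threshold are illustrative assumptions, not prescribed values.

```python
import numpy as np
import pandas as pd

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Simple PSI between the score distribution at validation time and today."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e = np.histogram(expected, bins=cuts)[0] / len(expected)
    a = np.histogram(actual, bins=cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)

# Drift check: scores produced at validation time vs. scores produced this month.
baseline_scores = rng.beta(2, 5, size=5000)
current_scores = rng.beta(2.6, 4.4, size=5000)   # simulated distribution shift
psi = population_stability_index(baseline_scores, current_scores)
# 0.2 is a common rule-of-thumb review threshold, not a regulatory figure.
print(f"PSI = {psi:.3f}  ({'re-training review' if psi > 0.2 else 'stable'})")

# Fairness cross-check: compare risk scores across income bands.
bands = rng.choice(["low", "middle", "high"], size=5000)
bump = np.where(bands == "low", 0.08, 0.0)       # synthetic gap for illustration
scored = pd.DataFrame({
    "risk_score": np.clip(rng.beta(2, 5, size=5000) + bump, 0, 1),
    "income_band": bands,
})
print(scored.groupby("income_band")["risk_score"].describe()[["mean", "50%", "count"]])
# A persistent gap between bands is a prompt to inspect which features drive
# the scores and whether they reflect genuine risk or financial circumstance.
```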

Getting Ahead

To encourage innovation and growth in A.I., U.S. regulators have sent strong signals that they are holding off on regulatory or non-regulatory actions around the technology. However, as A.I. starts to permeate the financial industry, ethical and responsible behavior and governance issues will continue to arise. As a result, regulators are likely to develop more A.I.-specific regulation in the years ahead. Until then, compliance and data science teams must work closely together to ensure that A.I. usage within AML programs is effective, responsible, and free of bias.


