Thriving in Financial Services with AI built on Trust

By Theodora Lau and Bradley Leimer of Unconventional Ventures

Many of the issues surrounding artificial intelligence (AI) come down to unpredictability and trust.

Can we build trust in something we cannot see, feel, or touch? How do we best mitigate unintended consequences? Can we teach machines to be unbiased when humans — who program and create the algorithms behind this designed intelligence — are biased themselves?

How do we build trustworthy AI? 

Let’s consider a high-profile case from a few years ago.

Was a new credit card sexist when it launched, or was the algorithm simply reflecting past gender biases? A fierce debate erupted over the credit decisioning process of a big tech company and its issuing partner, whose new card was accused of offering lower credit limits to women and unfairly denying them accounts.

Ultimately, the New York State Department of Financial Services cleared the firm of deliberate discrimination. 

With underwriting and loan approval processes increasingly being automated, we suspect this will not be the last time we hear similar allegations of wrongdoing. This high-profile incident served to highlight the effects of inherent bias on creditworthiness decisions, the need for transparency in the process, and the benefits and challenges of using artificial intelligence in automated financial decisions.

The case also demonstrated the importance of explainability. When consumers apply for credit, we simply cannot hide behind the algorithms, unable to explain the outcomes of a magic black box. Not only does this create regulatory and compliance risk, it can also cause confusion and consumer dissatisfaction when discrepancies occur. The incident also highlights the importance of keeping humans in the loop.
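To make the explainability point concrete: for a simple linear credit score, each feature's contribution (weight × value) can be reported directly, so an applicant can be told exactly which factors moved their score. The feature names and weights below are invented for illustration, not drawn from any real credit model:

```python
# Illustrative linear scoring model. Per-feature contributions make the
# decision explainable: each term shows how much a factor helped or hurt.
# Weights and features are hypothetical.
WEIGHTS = {"income_k": 0.4, "years_history": 1.2, "utilization_pct": -0.05}

def explain_score(applicant):
    """Return (total score, per-feature contributions) for one applicant."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, contrib = explain_score(
    {"income_k": 50, "years_history": 5, "utilization_pct": 40}
)
# contrib shows income added 20, history added 6, utilization subtracted 2
```

Modern models are rarely this simple, which is precisely why dedicated explainability tooling (and humans in the loop) becomes necessary as complexity grows.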

More importantly, the case shows the effect of inherent bias within existing credit models and decisioning platforms, and how legacy bias and inequality can be carried over through ongoing automation. Unequal access to credit in its current form is a systemic problem that needs to be resolved — independent of algorithmic decisions. And when biased historical lending data is used to create and test credit models, unfair outcomes are perpetuated as they mirror past disparities in credit access, particularly for women and communities of color.

Building with trust and confidence

The opportunity for leveraging AI in business is vast, but the stakes are also high. Regardless of industry, AI is only as good as the trust people place in it, and only as trustworthy as the transparency of the data and algorithms behind it.

How can financial institutions best leverage such powerful emerging technologies to make informed decisions with their data, while keeping up with the accelerated pace of digital transformation and changing market conditions? 

While there is no one-size-fits-all solution, here are a few areas to consider within the framework of people, technology, and regulation and compliance:

[1] Data governance

Data is fuel for artificial intelligence and machine learning. When the data is flawed, so is the outcome. Ensuring the data is clean, usable, and secure is one of the most important steps of the process. How is the data managed, and what are the policies and procedures to ensure data reliability and security? And as organizations scale, having a data strategy in place — as well as the right technology partners — is crucial to identifying what data is needed and how disparate sources are integrated and leveraged.
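A minimal form of the data-quality gate described above can be sketched as a validation step that rejects records before they ever reach a model. The field names and rules here are hypothetical examples, not a prescribed schema:

```python
# Minimal data-quality gate: reject records with missing fields,
# impossible values, or duplicate IDs before they feed any model.
# Field names and rules are illustrative only.
REQUIRED_FIELDS = {"applicant_id", "income", "credit_history_years"}

def validate_records(records):
    """Split records into (clean, rejected), with a reason per rejection."""
    clean, rejected, seen_ids = [], [], set()
    for row in records:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            rejected.append((row, f"missing fields: {sorted(missing)}"))
        elif row["income"] < 0:
            rejected.append((row, "negative income"))
        elif row["applicant_id"] in seen_ids:
            rejected.append((row, "duplicate applicant_id"))
        else:
            seen_ids.add(row["applicant_id"])
            clean.append(row)
    return clean, rejected
```

Logging the rejection reasons, rather than silently dropping rows, is what turns a cleaning script into governance: the policy is auditable.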

[2] Fairness and transparency

AI is not a magic box. While advances in technology enable us to solve more sophisticated business problems, we also need to ensure the results we are producing are fair, trustworthy, and auditable. How do we select our training data? Are we able to explain what the algorithms do? What are the processes and procedures to address bias and unintended outcomes? Who ensures that these outcomes are fair?
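One widely used starting point for auditing outcomes is the disparate-impact ("four-fifths") check: compare approval rates across groups and flag ratios below roughly 0.8. This is a sketch with made-up data and group labels; a real fairness audit would use several metrics and proper statistical testing, not this one ratio alone:

```python
# Disparate-impact sketch: ratio of the protected group's approval rate
# to the reference group's. Values below ~0.8 are a common red flag
# (the "four-fifths" rule of thumb). Data and labels are hypothetical.
def approval_rate(decisions, groups, label):
    """Fraction of approved (1) decisions among applicants in `label`."""
    picked = [d for d, g in zip(decisions, groups) if g == label]
    return sum(picked) / len(picked)

def disparate_impact(decisions, groups, protected, reference):
    """Approval-rate ratio: protected group vs. reference group."""
    return (approval_rate(decisions, groups, protected)
            / approval_rate(decisions, groups, reference))

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]  # 1 = approved
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(decisions, groups, protected="B", reference="A")
# Group A approves 3/5, group B approves 2/5, so the ratio is 2/3 — a flag.
```

Running checks like this continuously, not just at launch, is what "auditable" means in practice: the question "who ensures these outcomes are fair?" needs both an owner and a measurement.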

[3] Ethics and accountability

Ethics must be at the heart of any AI operation, and aligned with the values and core principles of the organization. Who is accountable and who are the stakeholders with a voice at the table? Who else are we partnering with in the ecosystem to drive discussions and development?  

We have merely scratched the surface when it comes to implementing artificial intelligence in financial services. In the next decade, we expect more automation and more intelligent operations. With thoughtfulness and intention, we can make it more accessible and fairer for all.

Register here to join us at IBM Think 2021 as we explore the currency of trust in the digital age and how it affects your organization's ability to survive and thrive.

This article was sponsored by IBM. Opinions expressed are our own.

________

Unconventional Ventures helps drive innovation to improve systematic financial wellness. We connect founders to funders, provide mentorship to entrepreneurs, strategic advisory services to a broad set of corporates, and broaden opportunities for diversity within the ecosystem. Our belief is that anyone with great ideas should have a chance to succeed and every voice should be heard. Visit unconventionalventures.com to learn how you can partner with us.

Michael Heinz

Keynote Speaker, Radio Moderator & Future Strategist


You Bankers, pls. listen: Trust, Accountability, Transparency & Ethical usage of AI elements inside of your legacy systems while your institutions thrive towards becoming more digital, more innovative, more up-to-date..... These are the key overall systems and at the same time user-based design requirements!

Richard Turrin

Helping you make sense of going Cashless | Best-selling author of "Cashless" and "Innovation Lab Excellence" | Consultant | Speaker | Top media source on China's CBDC, the digital yuan | China AI and tech


Leave it to "Beyond Good" authors Theo and Brad to take on the hard topics! Great read and agree trust is key. With humans we look in their eyes and observe their body language, all subliminal triggers for trust. With an opaque AI there is no such trigger and even worse they are sold by their users as being mathematically rigorous, when they aren't. Agree that in order to bring trust we will need greater transparency and that this will only come through 3rd party like validation of AI results. For lack of a better term a "Consumer Reports" for AI.

Charlie Moore

Climate I Finance I Technology I Corp Dev


The adjacent use of AI in ESG for financial services is helping investors to shine a spotlight on labor rights, diversity and other social issues, that corporations previously tried to keep out of the picture.
