
Supporting Responsible AI in Financial Services

Federal Reserve Governor Lael Brainard recently spoke at the AI Symposium about the responsible use of AI in financial services. The speech offers important insights and early indicators of how the Fed's guidelines for AI governance might take shape. Financial services companies already leveraging AI to deliver new or enhanced customer experiences can review these remarks to get a head start and ensure their AI operations are well prepared. A full transcript of the speech can be found here. This post summarizes the key points of the speech and how teams should think about applying them to their ML practices and MLOps.

Benefits of AI to Financial Services

The Fed broadly embraces AI’s benefits in combating fraud and expanding access to credit. AI lets companies respond faster and more effectively to fraud, which is escalating with the increasing digitization of financial services. Machine learning (ML) models for credit risk and credit decisions, built with both traditional and alternative data, can deliver more accurate and fairer credit decisions to many more people outside the current credit framework (see the joint Fed statement on the use of alternative data). However, the Fed cautions that historical data carrying racial bias can perpetuate that bias if used in opaque AI models without proper guardrails and protections. AI systems need to make a positive impact while protecting previously marginalized classes.

AI’s Black Box Problems

The key problem is a lack of ML model transparency, and the Fed outlines the reasons behind it:

  1. Unlike statistical models, which are designed by humans, ML models are trained on data automatically by algorithms.
  2. As a result of this automated training, ML models can absorb complex nonlinear interactions from the data that humans cannot otherwise discern.

This complexity obscures how a model converts inputs to outputs, and it is even more pronounced in deep learning models. The resulting difficulty in explaining and reasoning about models is a major challenge for responsible AI.

The Importance of Context

The Fed outlines how context is key to understanding and explaining models. Even as the AI research community has advanced the state of model explainability, the right explanation depends on who is asking for it and on the type of prediction the model makes. For example, an explanation given to a technical model developer would be far more detailed than one given to a compliance officer. The end user, in turn, needs an easy-to-understand, actionable explanation: if a loan applicant is denied, understanding how the decision was made, along with suggestions for actions that would increase their approval odds, enables them to make changes and reapply.
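To make the loan-denial example concrete, here is a minimal sketch of one way an actionable explanation could be produced: searching along a single adjustable feature until the model's decision flips. The model, features, and step size are hypothetical stand-ins, not a method described in the speech.

```python
# Minimal sketch of an "actionable" explanation for a denied applicant:
# adjust one feature step by step until the model's decision flips.
# The model, features, and threshold are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                   # columns: [income_score, debt_ratio]
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)          # stand-in credit model

applicant = np.array([[-0.5, 0.8]])             # this applicant is currently denied
assert model.predict(applicant)[0] == 0

# Lower debt_ratio step by step until the approval decision flips.
candidate = applicant.copy()
while model.predict(candidate)[0] == 0 and candidate[0, 1] > -3.0:
    candidate[0, 1] -= 0.1

print(f"Suggested debt_ratio change: {applicant[0, 1]:.2f} -> {candidate[0, 1]:.2f}")
```

In practice this kind of recourse search has to be restricted to features the applicant can actually change, which is part of why surfacing explanations to end users requires care.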

For financial services teams adopting AI, this highlights the need for an ML system that caters to all AI stakeholders, not just model developers. It needs to account for the varying degrees of ML comprehension among these stakeholders and allow model explanations to be surfaced appropriately to each end user.

Key banking use cases, especially credit lending, are regulated by a host of laws, including the Equal Credit Opportunity Act (ECOA), the Fair Housing Act (FHA), the Civil Rights Act, and the Immigration Reform Act. These laws require AI models, and the data powering them, to be understood and assessed for unwanted bias. Even if protected attributes like race are not used in model development, models can unknowingly absorb relationships with the protected class from correlated data inputs, a phenomenon known as proxy bias. Enabling model development under these stringent constraints, to promote equitable outcomes and financial inclusion, is therefore an active topic of study.
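One way to make proxy bias concrete is to train a simple auxiliary model to see whether the credit model's own inputs can predict the protected attribute. The sketch below assumes a hypothetical dataset and column names (`applications.csv`, `protected_class`, `approved`) and numeric features; it is an illustration, not a prescribed test.

```python
# Minimal sketch of a proxy-bias check: can the non-protected model inputs
# predict the protected attribute? Column names are hypothetical and the
# features are assumed to be numeric.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applications.csv")                         # hypothetical dataset
protected = df["protected_class"]                            # binary flag, not a model input
features = df.drop(columns=["protected_class", "approved"])  # the credit model's inputs

# If the model's inputs predict the protected attribute well above chance,
# they likely encode a proxy (e.g., zip code correlating with race) and
# the responsible features deserve closer review.
proxy_auc = cross_val_score(
    LogisticRegression(max_iter=1000), features, protected,
    cv=5, scoring="roc_auc",
).mean()
print(f"Proxy check AUC: {proxy_auc:.2f} (values near 0.5 suggest little proxy signal)")
```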

Financial services firms are already well set up to assess statistical models for bias. To meet the same requirement for ML models and build responsible AI, teams need an updated bias-testing process, with tooling to evaluate and mitigate AI bias in the context of each use case.
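As one illustration of what such a bias-testing step can look like, the sketch below computes the disparate impact ratio (the "four-fifths rule" commonly used in fair-lending analysis) on a toy set of model decisions. The column names and the 0.8 threshold interpretation are assumptions for the example, not requirements from the speech.

```python
# Minimal sketch of a disparate impact check, assuming binary model decisions
# and a binary protected-group flag (both column names are hypothetical).
import pandas as pd

results = pd.DataFrame({
    "approved":  [1, 0, 1, 1, 0, 1, 0, 1],   # model decisions
    "protected": [1, 1, 0, 0, 0, 1, 1, 0],   # 1 = member of protected group
})

rate_protected   = results.loc[results["protected"] == 1, "approved"].mean()
rate_unprotected = results.loc[results["protected"] == 0, "approved"].mean()
disparate_impact = rate_protected / rate_unprotected

print(f"Approval rate (protected):   {rate_protected:.2f}")
print(f"Approval rate (unprotected): {rate_unprotected:.2f}")
print(f"Disparate impact ratio:      {disparate_impact:.2f}  (< 0.8 is a common red flag)")
```

Real bias testing covers many more metrics (equal opportunity, calibration by group, and so on) and must be interpreted in the context of the specific use case.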

Bank management needs confidence that its models are robust, as they make critical decisions. They need to ensure a model will behave correctly when confronted with real-world data that can contain more complex interactions. Explanations are a critical tool for giving model development and assessment teams this confidence. Not all ML systems, however, need the same level of understanding; for example, a lower threshold for transparency would suffice for secondary challenger systems used in conjunction with a primary AI system.
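One lightweight way to build that confidence is to stress the model with small input perturbations and see how stable its decisions are. The sketch below shows such a check on toy data; the model, noise scale, and flip-rate metric are illustrative assumptions rather than anything prescribed by the Fed.

```python
# Minimal robustness spot-check: perturb each applicant's inputs with small
# amounts of noise and measure how often the decision flips.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

base_pred = model.predict(X)
flip_rates = []
for _ in range(20):                                 # 20 random perturbations
    X_noisy = X + rng.normal(scale=0.05, size=X.shape)
    flip_rates.append(np.mean(model.predict(X_noisy) != base_pred))

print(f"Average decision flip rate under small perturbations: {np.mean(flip_rates):.1%}")
```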

As teams scale their ML development, their process needs a robust collection of validation and monitoring tools that allow model developers and IT to ensure compliance with regulatory and risk requirements from guidelines like SR 11-7 and OCC Bulletin 2011-12. Banks have started to introduce AI validators in their second line of defense to enable model validation.
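As an example of the kind of monitoring check such tooling might include, the sketch below computes a Population Stability Index (PSI) to flag drift between training-time and production feature distributions. The data, feature, and thresholds are illustrative only, not prescribed by SR 11-7 or OCC 2011-12.

```python
# Minimal sketch of a drift-monitoring check using the Population Stability
# Index (PSI) between a training-time sample and a production sample.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a live (production) sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
train_income = rng.normal(60_000, 15_000, 10_000)   # hypothetical training data
live_income  = rng.normal(66_000, 15_000, 10_000)   # shifted production data

score = psi(train_income, live_income)
print(f"PSI = {score:.3f}")   # rule of thumb: > 0.25 indicates significant drift
```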

Forms of Explanations for Responsible AI

The speech outlines how explanations can differ based on the complexity and structure of the model, and recommends that banks calibrate the level of transparency into a model to the use case. Some models can be developed as fully ‘interpretable’ but potentially less accurate; a logistic regression decision, for example, can be explained directly by the weights it assigns to its inputs. Other models are more complex and accurate but not inherently interpretable. In that case, explanations are obtained with model-agnostic techniques that probe the model with varying inputs and observe the change in its output. While these ‘post-hoc’ explanations can enable understanding in certain use cases, they may not be as reliable as explanations from an inherently interpretable model. One of the key questions banks will face, therefore, is whether a model-agnostic explanation is acceptable or an interpretable model is necessary. An accurate explanation, however, does not by itself guarantee a robust and fair model; that confidence can only be built over time and with experience.
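The sketch below illustrates this contrast on a toy dataset: an inherently interpretable logistic regression explained by its own weights, next to a black-box model explained post hoc by probing it with perturbed inputs. Permutation importance stands in here for the broader family of model-agnostic techniques, and the feature names are hypothetical.

```python
# Minimal sketch contrasting an interpretable model (explained by its weights)
# with a black-box model explained post hoc by perturbing its inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "utilization", "tenure"]   # hypothetical

# Interpretable model: the learned weights are the explanation.
interpretable = LogisticRegression().fit(X, y)
for name, w in zip(feature_names, interpretable.coef_[0]):
    print(f"{name:12s} weight = {w:+.2f}")

# Black-box model: explain post hoc by perturbing inputs and watching the output.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:12s} permutation importance = {imp:.3f}")
```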

Explainable AI, a recent research advancement, unlocks the AI black box so humans can understand what is going on inside AI models and ensure AI-driven decisions are transparent, accountable, and trustworthy. Financial services companies need platforms that allow their teams to generate explanations for a wide range of models, in forms that can be consumed by a variety of internal and external audiences.

Expectations for Banks

The Fed speech ends with a commitment to support the development of responsible AI and a call for feedback from experts in the field on transparency techniques and their risk implications.

As the Fed seeks input, it is clear that financial services teams deploying AI models need to bolster their ML development with updated processes and tools that bring transparency to model understanding, robustness, and fairness, so they are better prepared for upcoming guidelines.