Causality in model explanations and in the real world

You can’t always change a human’s input to see the output.

At Fiddler Labs, we place great emphasis on model explanations being faithful to the model’s behavior. Ideally, feature importance explanations should surface and appropriately quantify all and only those factors that are causally responsible for the prediction. This is especially important if we want explanations to be legally compliant (e.g., under GDPR Article 13, Section 2(f), people have a right to ‘[information about] the existence of automated decision-making, including profiling … and … meaningful information about the logic involved’) and actionable. And even when we post-process explanations to make them human-intelligible, we must preserve faithfulness to the model.

How do we differentiate between features that are correlated with the outcome, and those that cause the outcome? In other words, how do we think about the causality of a feature to a model output, or to a real-world task? Let’s take those one at a time.

Explaining causality in models is hard

When explaining a model prediction, we’d like to quantify the contribution of each (causal) feature to the prediction.

For example, in a credit risk model, we might like to know how important income or zip code is to the prediction.

Note that zip code may be causal to a model’s prediction (i.e. changing zip code may change the model prediction) even though it may not be causal to the underlying task (i.e. changing zip code may not change the decision of whether to grant a loan). However, these two things may be related if this model’s output is used in the real-world decision process.

The good news is that since we have input-output access to the model, we can probe it with arbitrary inputs. This allows examining counterfactuals, inputs that are different from those of the prediction being explained. These counterfactuals might be elsewhere in the dataset, or they might not.

Shapley values (a classic result from game theory) offer an elegant, axiomatic approach to quantify feature contributions.

One challenge is that they rely on probing the model with an exponentially large set of counterfactuals, far too many to compute exactly. Hence, there are several papers on approximating Shapley values, especially for specific classes of model functions.
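To make the exponential cost concrete, here is a minimal, illustrative sketch (not Fiddler’s implementation) of computing exact Shapley values for a single prediction. It assumes only a model callable and a baseline input that supplies values for “absent” features, which is one particular choice of counterfactual.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction (illustrative only).

    model:    callable mapping a feature dict to a score
    x:        the input being explained, as a feature -> value dict
    baseline: reference input supplying values for "absent" features
    """
    features = list(x)
    n = len(features)

    def value(subset):
        # Features in the subset keep their real values; the rest are
        # imputed from the baseline (one choice of counterfactual).
        probe = {f: (x[f] if f in subset else baseline[f]) for f in features}
        return model(probe)

    attributions = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        # Every subset of the other features is a counterfactual to probe:
        # 2^(n-1) of them per feature, hence the exponential blow-up.
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        attributions[f] = total
    return attributions
```

For the credit example below, model might return a risk score and baseline might be an “average” applicant; beyond a dozen or so features this brute-force loop is hopeless, which is what motivates the approximation work.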

However, a more fundamental challenge is that when features are correlated, not all counterfactuals may be realistic. There is no clear consensus on how to address this issue, and existing approaches differ on the exact set of counterfactuals to be considered.

To overcome these challenges, it is tempting to rely on observational data. For instance, using the observed data to define the counterfactuals for applying Shapley values. Or more simply, fitting an interpretable model on it to mimic the main model’s prediction and then explaining the interpretable model in lieu of the main model. But, this can be dangerous.

Consider a credit risk model with features including the applicant’s income and zip code. Say the model internally relies only on the zip code (i.e., it redlines applicants). Explanations based on observational data might reveal that the applicant’s income, by virtue of being correlated with zip code, is just as predictive of the model’s output. This may mislead us into explaining the model’s output in terms of the applicant’s income. In fact, a naive explanation algorithm will split attributions equally between two perfectly correlated features.

To learn more, we can intervene on the features. One counterfactual that changes zip code but not income will reveal that zip code causes the model’s prediction to change. A second counterfactual that changes income but not zip code will reveal that income does not. Together, these two counterfactuals allow us to conclude that zip code is causal to the model’s prediction, and income is not.
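To make those two probes concrete, here is a toy sketch. The model here is a made-up stand-in that redlines exactly as described; the point is that only input-output access is needed.

```python
# Toy stand-in for a credit model that redlines: it scores on zip code
# alone and ignores income entirely (purely for illustration).
def model(applicant):
    return 0.9 if applicant["zip_code"].startswith("941") else 0.4

applicant = {"income": 48_000, "zip_code": "94103"}
base_score = model(applicant)

# Intervention 1: change zip code, hold income fixed.
zip_effect = model(dict(applicant, zip_code="10001")) - base_score

# Intervention 2: change income, hold zip code fixed.
income_effect = model(dict(applicant, income=90_000)) - base_score

# zip_effect is large and income_effect is zero: zip code, not income,
# is causal to this model's prediction.
print(zip_effect, income_effect)
```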

Explaining causality requires the right counterfactuals.

Explaining causality in the real world is harder

Above we outlined a method for explaining causality in models: study what happens when features change. To do so in the real world, you have to be able to apply interventions. This is commonly called a “randomized controlled trial” (also known as an “A/B test” when there are two variants, especially in the tech industry). You divide a population into two or more groups at random and apply a different intervention to each group. The randomization ensures that the only systematic difference among the groups is your intervention. Therefore, you can conclude that your intervention causes any measurable differences among the groups.
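As an illustration only, here is a tiny simulated trial; the population, outcomes, and effect size are all made up, but the logic (randomize, intervene, compare) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Randomly assign each person to control (False) or treatment (True).
treated = rng.integers(0, 2, size=n).astype(bool)

# Simulated outcomes: a noisy baseline plus a true effect of +2.0 for
# the treated group (in a real trial you would measure this instead).
outcome = rng.normal(loc=10.0, scale=5.0, size=n) + 2.0 * treated

# Because assignment was random, the difference in means estimates the
# causal effect of the intervention.
effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {effect:.2f}")  # close to the true effect of 2.0
```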

The challenge in applying this method to real-world tasks is that not all interventions are feasible. You can’t ethically ask someone to take up smoking. In the real world, you may not be able to get the data you need to properly examine causality.

We can probe models as we wish, but not people.

Natural experiments can give us an opportunity to examine situations where we would not normally intervene, as in epidemiology and economics. However, they provide only a limited toolkit, leaving many questions in these fields up for debate.

There are proposals for other theories that allow us to use domain knowledge to separate correlation from causation. These are subject to ongoing debate and research.

Now you know why explaining causality in models is hard, and explaining it in the real world is even harder.

To learn more about explaining models, email us at info@fiddler.ai. This post was co-written with Ankur Taly.

“Hey, what’s that?” Debugging predictions using explanations

What does debugging look like in the new world of machine learning models? One way uses model explanations.

Machine learning (ML) models are popping up everywhere. There is a lot of technical innovation (e.g., deep learning, explainable AI) that has made them more accurate, more broadly applicable, and usable by more people in more business applications. The lists are everywhere: banking, healthcare, tech, all of the above.

However, as with any computer program, models can have errors, or more colloquially, bugs. The process of finding those bugs is quite different from that of previous technology, and requires a new developer stack. “Soon we won’t program computers, we’ll train them like dogs” (Wired, 2016). “Gradient descent can write code better than you. I’m sorry” (Andrej Karpathy, 2017).

In a deep learning neural network, instead of lines of code written by people, we are looking at possibly millions of weights linked together into an incomprehensible network.

How do we find bugs in this network?

So how do we find bugs in this network? One way is to explain your model predictions. Let’s look at two types of bugs we can find through explanations (data leakage and data bias), illustrated with examples from predicting loan default. Both of these are actually data bugs, but a model summarizes the data, so they show up in the model.

Bug #1: data leakage

Most ML models are supervised. You choose a precise prediction goal (also called the “prediction target”), gather a dataset with features, and label each example with the target. Then you train a model to use the features to predict the target. Surprisingly often, there are features in the dataset that relate to the prediction target but cannot be used for prediction. For example, they might be added from the future (i.e., long after prediction time), or be otherwise unavailable at prediction time.

Here is an example from the Lending Club dataset. We can use this dataset to try predicting loan default, with the loan_status field as our prediction target. It takes the values “Fully Paid” (okay) or “Charged Off” (the bank declared a loss, i.e., the borrower defaulted). The dataset also has fields such as total_pymnt (the payments received) and loan_amnt (the amount borrowed). Here are a few example values:

Whenever the loan is “Charged Off”, delta is positive. But, we don’t know delta at loan time.

Notice anything? Whenever the loan has defaulted (“Charged Off”), the total payments are less than the loan amount, and delta (= loan_amnt - total_pymnt) is positive. Well, that’s not terribly surprising. Rather, it’s nearly the definition of default: by the end of the loan term, the borrower paid back less than what was loaned. Now, delta doesn’t have to be positive for a default: you could default after paying back the entire loan principal but not all of the interest. But in this data, 98% of the time that delta is negative, the loan was fully paid, and 100% of the time that delta is positive, the loan was charged off. Including total_pymnt gives us nearly perfect information, but we don’t get total_pymnt until after the entire loan term (3 years)!

Including both loan_amnt and total_pymnt in the data potentially allows nearly perfect prediction, but we won’t really have total_pymnt for the real prediction task. Including them both in the training data is data leakage of the prediction target.
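As a rough illustration of how one might catch this before training, here is a sketch using pandas. The file name is a placeholder, and the column names are the Lending Club fields discussed above.

```python
import pandas as pd

# Placeholder file name; assumes the Lending Club fields discussed above.
df = pd.read_csv("lending_club_loans.csv",
                 usecols=["loan_status", "loan_amnt", "total_pymnt"])

# delta > 0 means the borrower paid back less than was lent.
df["delta"] = df["loan_amnt"] - df["total_pymnt"]

# How often does the sign of delta alone "predict" the outcome?
# A near-perfect split is a red flag for target leakage, because
# total_pymnt is only known after the loan term ends.
print(pd.crosstab(df["loan_status"], df["delta"] > 0, normalize="index"))
```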

If we make a (cheating) model with these leaky features, it will perform very well. Too well. And if we run a feature importance algorithm on some predictions (a common form of model explanation), we’ll see these two variables come up as important, and with any luck we’ll realize this data leakage.
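One rough way to do that check, sketched below, is a simple per-prediction importance score based on baseline substitution. This is not Fiddler’s attribution algorithm; model, X_train, and x_row are assumed stand-ins for a trained classifier (with predict_proba) and pandas data with numeric features.

```python
def local_importance(model, X_train, x_row):
    # Score change when each feature is replaced by a "typical" value:
    # a crude counterfactual, but enough to surface a leaky feature.
    base = model.predict_proba(x_row.to_frame().T)[0, 1]
    scores = {}
    for col in x_row.index:
        probe = x_row.copy()
        probe[col] = X_train[col].median()
        scores[col] = base - model.predict_proba(probe.to_frame().T)[0, 1]
    return scores

# On a model trained with the leaky fields, delta / total_pymnt dominate
# these scores for most rows, which is the tell-tale pattern described above.
```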

Below, the Fiddler explanation UI shows “delta” stands out as a huge factor in raising this example prediction.

“delta” really stands out, because it’s data leakage.

There are other, more subtle potential data leakages in this dataset. For example, the grade and sub_grade are assigned by a Lending Club proprietary model, which almost completely determines the interest rate. So, if you want to build your own risk scoring model without Lending Club, then grade, sub_grade, and int_rate are all data leakage. They wouldn’t allow you to perfectly predict default, but presumably they would help, or Lending Club would not use their own model. Moreover, for their model, they include FICO score, yet another proprietary risk score, but one that most financial institutions buy and use. If you don’t want to use FICO score, then that is also data leakage.

Data leakage is any predictive data that you can’t or won’t use for prediction. A model built on data with leakage is buggy.

Bug #2: data bias

Suppose that, through poor data collection or a bug in preprocessing, our data is biased. More specifically, there is a spurious correlation between a feature and the prediction target. In that case, explaining predictions will often show an unexpected feature as important.

We can simulate a data processing bug in our lending data by dropping all the charged-off loans from zip codes starting with 1 through 5. Before this bug, zip code is not very predictive of charge-off (an AUC of 0.54, only slightly above random). After this bug, any zip code starting with 1 through 5 never has a charge-off, and the AUC jumps to 0.78. So zip code will show up as an important feature in predicting (no) loan default for examples in those zip codes. We could investigate by looking at predictions where zip code was important; if we are observant, we might notice the pattern and realize the bias.
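Here is a sketch of how one might simulate that bug and measure its effect, assuming a table with the loan_status and zip_code fields described earlier (the 0.54 and 0.78 AUCs above come from the full experiment, not from this simplified check).

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Placeholder file name; assumes loan_status and zip_code columns.
loans = pd.read_csv("lending_club_loans.csv", usecols=["loan_status", "zip_code"])
loans["charged_off"] = (loans["loan_status"] == "Charged Off").astype(int)
loans["zip_prefix"] = loans["zip_code"].astype(str).str[0]

# Simulated preprocessing bug: silently drop every charged-off loan
# whose zip code starts with 1 through 5.
keep = ~(loans["charged_off"].eq(1) & loans["zip_prefix"].isin(list("12345")))
buggy = loans[keep]

# Score each loan by the charge-off rate of its zip prefix and check how
# predictive that single "feature" has become (in-sample, as a quick check).
rate = buggy.groupby("zip_prefix")["charged_off"].transform("mean")
print(roc_auc_score(buggy["charged_off"], rate))  # well above 0.5
```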

Below is what the charge-off rate looks like when summarized by the first digit of zip code. Some zip prefixes have no charge-offs, while the rest have a rate similar to the dataset overall.

In this buggy dataset, there are no charged-off loans with zip codes starting with 1, 2, 3, 4, or 5.

Below, the Fiddler explanation UI shows zip code prefix stands out as a huge factor in lowering this example prediction.

“zip_code_prefix” really stands out, because the model has a bug related to zip code.

A model built from this biased data is not useful for making predictions on (unbiased) data we haven’t seen yet. It is only accurate in the biased data. Thus, a model built on biased data is buggy.

Other model debugging methods

There are many other possibilities for model debugging that don’t involve model explanations. For example:

  1. Look for overfitting or underfitting. If your model architecture is too simple, it will underfit. If it is too complex, it will overfit.
  2. Regression tests on a golden set of predictions that you understand. If these fail, you might be able to narrow down which scenarios are broken.

Since explanations aren’t involved with these methods, I won’t say more here.

Summary

If you are not sure your model is using your data appropriately, use explanations of feature importance to examine its behavior. You might see data leakage or data bias. Then, you can fix your data, which is the best way to fix your model.

Fiddler is building an Explainable AI Engine that can help debug models. Email us at info@fiddler.ai.

Can Congress help keep AI fair for consumers?

A Congressional hearing on June 26 was a wake-up call for financial services.

How do firms ensure that AI systems are not having a disparate impact on vulnerable communities, and what safeguards should regulators and Congress put in place to protect consumers?

To what extent should companies be required to audit these algorithms so that they don’t unfairly discriminate? Who should determine the standards for that?

We need to ensure that AI does not create biases in lending that lead to discrimination.

These aren’t questions from an academic discourse or the editorial pages. These were posed to the witnesses of a June 26 hearing before the US House Committee on Financial Services — by both Democrats and Republicans, representatives of Illinois, North Carolina, and Arkansas.

It is a bipartisan sentiment that, left unchecked, AI can pose a risk to fairness in financial services. While the exact extent of this danger might be debated, governments in the US and abroad acknowledge the necessity and assert the right to regulate financial institutions for this purpose.

The June 26 hearing was the first wake-up call for financial services: they need to be prepared to respond and comply with future legislation requiring transparency and fairness.

In this post, we review the notable events of this hearing, and we explore how the US House is beginning to examine the risks and benefits of AI in financial services.

Two new House Task Forces to regulate fintech and AI

On May 9 of this year, the chairwoman of the US House Committee on Financial Services, Congresswoman Maxine Waters (D-CA), announced the creation of two task forces: one on fintech, and one on AI.

Generally, task forces convene to investigate a specific issue that might require a change in policy. These investigations may involve hearings that call forth experts to inform the task force.

These two task forces overlap in jurisdiction, but the committee’s objectives implied some distinctions:

  • The fintech task force should have a nearer-term focus on applications (e.g. underwriting, payments, immediate regulation).
  • The AI task force should have a longer-term focus on risks (e.g. fraud, job automation, digital identification).

And explicitly, Chairwoman Waters explained her overall interest in regulation:

Make sure that responsible innovation is encouraged, and that regulators and the law are adapting to the changing landscape to best protect consumers, investors, and small businesses.

The appointed chairman of the Task Force on AI, Congressman Bill Foster (D-IL), extolled AI’s potential in a similar statement, but also cautioned,

It is crucial that the application of AI to financial services contributes to an economy that is fair for all Americans.

This first hearing did find ample AI applications in financial services. But it also made clear that these worried sentiments are neither unrepresentative of constituents’ concerns nor misplaced.

From left to right: Maxine Waters (D-CA), Chairwoman of the US House Committee on Financial Services; Bill Foster (D-IL), Chairman of the Task Force on AI; French Hill (R-AR), Ranking Member on the Task Force on AI

Risks of AI

In a humorous exchange later in the hearing, Congresswoman Sylvia Garcia (D-TX) asks a witness, Dr. Bonnie Buchanan of the University of Surrey, to address the average American and explain AI in 25 words or less. It does not go well.

DR. BUCHANAN
I would say it’s a group of technologies and processes that can look at determining general pattern recognition, universal approximation of relationships, and trying to detect patterns from noisy data or sensory perception.

REP. GARCIA
I think that probably confused them more.

DR. BUCHANAN
Oh, sorry.

Beyond making jokes, Congresswoman Garcia has a point. AI is extraordinarily complex. Not only that, to many Americans it can be threatening. As Garcia later expresses, “I think there’s an idea that all these robots are going to take over all the jobs, and everybody’s going to get into our information.”

In his opening statement, task force ranking member Congressman French Hill (R-AR) tries to preempt at least the first concern. He cites a World Economic Forum study estimating that the 75 million jobs lost because of AI will be more than offset by 130 million new jobs. But Americans are still anxious about AI development.

In a June 2018 survey of 2,000 Americans conducted by Oxford’s Center for the Governance of AI, researchers observed

  • overwhelming support for careful management of robots and/or AI (82% support)
  • more trust in tech companies than in the US government to manage AI in the interest of the public
  • mixed support for developing high-level machine intelligence (defined as “when machines are able to perform almost all tasks that are economically relevant today better than the median human today”)

This public apprehension about AI development is mirrored by concerns from the task force and experts. Personal privacy is mentioned nine times throughout the hearing, notably in Congressman Anthony Gonzalez’s (R-OH) broad question on “balancing innovation with empowering consumers with their data,” which the panel does not quite adequately address.

But more often, the witnesses discuss fairness and how AI models could discriminate unnoticed. Most notably, Dr. Nicol Turner-Lee, a fellow at the Brookings Institution, suggests implementing guardrails to prevent biased training data from “replicat[ing] and amplify[ing] stereotypes historically prescribed to people of color and other vulnerable populations.”

And she’s not alone. A separate April 2019 Brookings report seconds this concern of an unfairness “whereby algorithms deny credit or increase interest rates using a host of variables that are fundamentally driven by historical discriminatory factors that remain embedded in society.”

So if we’re so worried, why bother introducing the Pandora’s box of AI to financial services at all?

Benefits of AI

AI’s potential benefits, according to Congressman Hill, are to “gather enormous amounts of data, detect abnormalities, and solve complex problems.” In financial services, this means actually fairer and more accurate models for fraud, insurance, and underwriting. This can simultaneously improve bank profitability and extend services to the previously underbanked.

Both Hill and Foster cite a National Bureau of Economic Research working paper finding that, in one case, algorithmic lending models discriminate 40% less than face-to-face lenders. Furthermore, Dr. Douglas Merrill, CEO of ZestFinance and an expert witness, claims that customers using his company’s AI tools experience higher approval rates for credit cards, auto loans, and personal loans, each with no increase in defaults.

Moreover, Hill frames his statement with an important point about how AI could reshape the industry: this advancement will work “for both disruptive innovators and for our incumbent financial players.” At first this might seem counterintuitive.

“Disruptive innovators,” more agile and hindered less by legacy processes, can have an advantage in implementing new technology. But without the immense budgets and customer bases of “incumbent financial players,” how can these disruptors succeed? And will incumbents, stuck in old ways, ever adopt AI?

Mr. Jesse McWaters, financial innovation lead at the World Economic Forum and the final expert witness, addresses this apparent paradox, discussing what will “redraw the map of what we consider the financial sector.” Third-party AI service providers — from traditional banks to small fintech companies — can “help smaller community banks remain digitally relevant to their customers” and “enable financial institutions to leapfrog forward.”

Enabling competitive markets, especially in concentrated industries like financial services, is an unadulterated benefit according to free market enthusiasts in Congress. However, “redrawing the map” in this manner makes the financial sector larger and more complex. Congress will have to develop policy responding to not only more complex models, but also a more complex financial system.

This system poses risks both to corporations, acting in the interest of shareholders, and to the government, acting in the interest of consumers.

Business and government look at risks

Businesses are already acting to avert potential losses from AI model failure and system complexity. A June 2019 Gartner report predicts that 75% of large organizations will hire AI behavioral forensic experts to reduce brand and reputation risk by 2023.

However, governments recognize that business-led initiatives, if motivated to protect company brand and profits, may only go so far. For a government to protect consumers, investors, and small businesses (the relevant parties according to Chairwoman Waters), a gap may still remain.

As governments explore how to fill this gap, they are establishing principles that will underpin future guidance and regulation. The themes are consistent across governing bodies:

  • AI systems need to be trustworthy.
  • They therefore require some guidance or regulation from a government representing the people.
  • This guidance should encourage fairness, privacy, and transparency.

In the US, President Donald Trump signed an executive order in February 2019 “to Maintain American Leadership in Artificial Intelligence,” directing federal agencies to, among other goals, “foster public trust in AI systems by establishing guidance for AI development and use.” The Republican White House and Democratic House of Representatives seem to clash at every turn, but they align here.

The EU is also establishing a regulatory framework for ensuring trustworthy AI. Likewise included among the seven requirements in their latest communication from April 2019: privacy, transparency, and fairness.

And June’s G20 summit drew upon similar ideas to create its own set of principles, including fairness and transparency, but also adding explainability.

These governing bodies are in a fact-finding stage, establishing principles and learning what they are up against before guiding policy. In the words of Chairman Foster, the task force must understand “how this technology will shape the questions that policymakers will have to grapple with in the coming years.”

Conclusion: Explain your models

An hour before Congresswoman Garcia’s amusing challenge, Dr. Buchanan reflected upon a couple of common themes of concern.

Policymakers need to be concerned about the explainability of artificial intelligence models. And we should avoid black-box modeling where humans cannot determine the underlying process or outcomes of the machine learning or deep learning algorithms.

But through this statement, she suggests a solution: make these AI models explainable. If humans can indeed understand the inputs, process, and outputs of a model, we can trust our AI. Then throughout AI applications in financial services, we can promote fairness for all Americans.

Sources

  1. United States House Committee on Financial Services. “Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services.” https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=403824. Accessed July 18, 2019.
  2. United States House Committee on Financial Services. “Waters Announces Committee Task Forces on Financial Technology and Artificial Intelligence.” https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=403738. Accessed July 18, 2019.
  3. Leopold, Till Alexander, Vesselina Ratcheva, and Saadia Zahidi. “The Future of Jobs Report 2018.” World Economic Forum. http://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf
  4. Zhang, Baobao and Allan Dafoe. “Artificial Intelligence: American Attitudes and Trends.” Oxford, UK: Center for the Governance of AI, Future of Humanity Institute, University of Oxford, 2019. https://ssrn.com/abstract=3312874
  5. Klein, Aaron. “Credit Denial in the Age of AI.” Brookings Institution. April 11, 2019. https://www.brookings.edu/research/credit-denial-in-the-age-of-ai/
  6. Bartlett, Robert, Adair Morse, Richard Stanton, Nancy Wallace, “Consumer-Lending Discrimination in the FinTech Era.” National Bureau of Economic Research, June 2019. https://www.nber.org/papers/w25943
  7. Snyder, Scott. “How Banks Can Keep Up with Digital Disruptors.” Philadelphia, PA: The Wharton School of the University of Pennsylvania, 2017. https://knowledge.wharton.upenn.edu/article/banking-and-fintech/
  8. “Gartner Predicts 75% of Large Organizations Will Hire AI Behavior Forensic Experts to Reduce Brand and Reputation Risk by 2023.” Gartner. June 6, 2019. https://www.gartner.com/en/newsroom/press-releases/2019-06-06-gartner-predicts-75–of-large-organizations-will-hire
  9. United States, Executive Office of the President [Donald Trump]. Executive order 13859: Executive Order on Maintaining American Leadership in Artificial Intelligence. February 11, 2019. https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/
  10. “Building Trust in Human-Centric Artificial Intelligence.” European Commission. April 8, 2019. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top
  11. “G20 Ministerial Statement on Trade and Digital Economy.” June 9, 2019. http://trade.ec.europa.eu/doclib/press/index.cfm?id=2027