FAccT 2020 – Three Trends in Explainability

Last month, I was privileged to attend the ACM Fairness, Accountability, and Transparency (FAccT) conference in Barcelona on behalf of Fiddler Labs. (For those familiar with the conference, notice the new acronym!) While I have closely followed papers from this conference, this was my first time attending it in person. 

It was amazing to see how interdisciplinary this conference is: the attendees were a stimulating mix of computer scientists, social scientists, psychologists, policy makers, and lawyers, and the topics ranged from explainability and fairness to values, ethics, and policy.

Krishnaram Kenthapadi (Amazon AWS), Sahin Geyik (LinkedIn), and I (Fiddler) jointly presented a tutorial on Explainable AI in Industry. The tutorial focused on several case studies of deploying explainability techniques (such as SHAP, Integrated Gradients, and LIME) in practice. I was able to introduce the audience to Fiddler, along with some of its new cutting-edge features like Slice & Explain. I will save a deeper discussion of the tutorial for another post, and instead focus on three trends in explainability that I noted at FAccT this year.

The rise of counterfactual explanations

Counterfactual explanations deal with the question: how should the features change to obtain a different outcome? They were brought to the fore by Wachter et al. in 2017, who argued that counterfactual explanations are well aligned with the requirements of the European Union’s General Data Protection Regulation (GDPR).

Counterfactual explanations are attractive as they are easy to comprehend, and can be used to offer a path of recourse to end-users receiving unfavorable decisions. For instance, one could explain a credit rejection by saying: ‘Had you earned $5,000 more, your request for credit would have been approved.’ In a paper at FAccT this year, Barocas et al. suggest that counterfactual explanations may better serve the intended purpose of adverse action notices.
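The credit example above can be sketched as a simple search: perturb a feature until the model's decision flips. The toy model, feature names, and search strategy below are all illustrative assumptions, not from any of the cited papers; real counterfactual methods solve an optimization problem over many features at once.

```python
# Minimal counterfactual search: increase one feature in small steps
# until the model's decision flips. Purely a sketch; the model and
# feature names are hypothetical.

def find_counterfactual(model, x, feature, step, max_steps=100):
    """Return the first perturbed input whose prediction differs from x's."""
    original = model(x)
    candidate = dict(x)
    for _ in range(max_steps):
        candidate[feature] += step
        if model(candidate) != original:
            return candidate  # smallest tried change that flips the outcome
    return None  # no counterfactual found within the search budget

# Toy credit model: approve if income >= $55,000.
toy_model = lambda x: "approved" if x["income"] >= 55_000 else "rejected"

applicant = {"income": 50_000}
cf = find_counterfactual(toy_model, applicant, "income", step=1_000)
# cf["income"] is 55_000: "Had you earned $5,000 more, your request
# for credit would have been approved."
```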

In the last couple of years, there has been a surge in techniques for obtaining recourse for a variety of ML models, as well as a new fairness metric based on equalized recourse across protected classes. A paper at FAccT this year discussed the philosophical basis of recourse, arguing that the ability to reverse unfavorable decisions made by algorithms and bureaucracies is fundamental to human agency and to our ability to engage in long-term planning toward our goals.

While most previous papers focused on the technical challenges of obtaining recourse, the recent work of Barocas et al. highlights several practical challenges. For recourse to be practical, one must take into account the real-world feasibility of the suggested feature changes, as well as the causal dependencies between features. A fascinating example from the paper is that of a credit lending model suggesting an income increase of $5,000. One may act on this by either waiting to obtain a raise at their current job or taking up a new, higher-paying job. Either action would increase income but would also affect “length of employment”, which may be another feature of the model. The unforeseen change to “length of employment” may adversely affect the prediction despite the increase in income.
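The backfiring-recourse example can be made concrete with a toy score. The weights and the causal rule (a new job resets tenure) below are made up for illustration; they are not from the Barocas et al. paper.

```python
# Sketch of why causal dependencies matter for recourse. The scoring
# weights and the "new job resets tenure" rule are hypothetical.

def score(income, years_employed):
    # Toy credit score: both income and length of employment contribute.
    return 0.5 * (income / 10_000) + 1.0 * years_employed

baseline = score(50_000, 4)

# Naive recourse: raise income by $5,000, hold everything else fixed.
naive = score(50_000 + 5_000, 4)      # looks like an improvement

# Realistic action: taking a new higher-paying job resets tenure to zero.
realistic = score(50_000 + 5_000, 0)  # the causal side effect kicks in

# naive > baseline, but realistic < baseline: the suggested change
# backfires once the dependency on "length of employment" is applied.
```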

Growing desiderata for explainability methods

As explainability techniques spread to diverse domains such as healthcare, lending, and fraud detection, the list of desiderata for these techniques keeps growing as well. The seminal LIME paper from 2016 focused on two desiderata for explanations: (a) they must be faithful to the model’s logic, and (b) they must be human-intelligible. At FAccT this year, Explainability Fact Sheets extended the desiderata to 36 different requirements, compiled based on a survey of explainability research in both the computer science and social science communities. The requirements are organized into groups of functional, operational, security, usability, and validation requirements.

While this is a massive list of requirements, I expect the fact sheets to be helpful. They can enable a fine-grained comparison of different explainability methods, and can also help in shaping the impending regulation around ML explainability.

Another paper proposed a new desideratum: explanations must capture real patterns in the input data. The argument is that explanations are often used not just to understand the model at hand but also to extract relationships underlying the phenomenon being modeled. This is especially true in the sciences, where the goal is not just to predict outcomes but also to reveal the rules governing them. The author further proposes robustness of explanations as a measure of how well they capture real patterns: if explanations are robust to tweaks to the input and the model, they are more likely to correspond to actual patterns in the data rather than artifacts of a single model. I found this paper to be a refreshingly novel take on explanations! While the paper is written from a philosophical perspective, I expect to see follow-up work on crafting new explanation algorithms that meet this desideratum.
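One way to operationalize the robustness idea is to perturb the input slightly and check how much the feature attributions move. The linear model and gradient-times-input attribution below are my own illustrative choices, not the paper's proposal.

```python
# Sketch of a robustness check for explanations: perturb the input with
# small random noise and measure the worst-case change in attributions.
# The linear model and attribution rule are hypothetical.
import random

def attribution(weights, x):
    # For a linear model, a simple gradient-times-input attribution: w_i * x_i.
    return [w * xi for w, xi in zip(weights, x)]

def robustness(weights, x, noise=0.01, trials=100):
    """Max change in any attribution under small random input perturbations."""
    base = attribution(weights, x)
    worst = 0.0
    for _ in range(trials):
        x_pert = [xi + random.uniform(-noise, noise) for xi in x]
        pert = attribution(weights, x_pert)
        worst = max(worst, max(abs(b - p) for b, p in zip(base, pert)))
    return worst

weights = [2.0, -1.0, 0.5]
x = [1.0, 3.0, -2.0]
print(robustness(weights, x))  # small for this stable linear model
```

A robust explanation method would keep this quantity small across nearby inputs (and across retrained models); a large value suggests the attributions reflect artifacts of a single model rather than real patterns.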

Explainability tools spreading in industry

With black box machine learning (ML) models spreading to high-stakes domains, the need for interpreting and thoroughly evaluating such models is also rapidly growing. At FAccT this year, there were three different tutorials showcasing explainability tools developed by industry. Besides our tutorial on explainability tools developed at Fiddler Labs and LinkedIn, there were tutorials showcasing IBM’s AIX360 toolkit and Google’s What-If Tool. Additionally, a group of folks from the Partnership on AI, along with several other industry partners including Fiddler Labs, published a comprehensive survey of explainable machine learning in deployment.

The field of explainability in AI/ML seems to be at an inflection point. There is a tremendous need from the societal, regulatory, commercial, and end-user perspectives. Consequently, practical and scalable explainability approaches are rapidly becoming available. In my opinion, the three key challenges for the research community are: (a) achieving consensus on the right notion of model explainability, (b) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (c) designing measures for evaluating explainability techniques.
