[Video] AI Explained: What are Integrated Gradients?

We started a video series with quick, short snippets of information on Explainability and Explainable AI. The second in this series covers Integrated Gradients – more about the method and its applications. Learn more in the ~10-minute video below. The first in the series, on Shapley values, can be watched here.
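As a rough companion to the video: Integrated Gradients attributes a model's prediction to its input features by averaging the model's gradients along a straight-line path from a baseline input to the actual input, then scaling by the input-baseline difference. A minimal NumPy sketch of that idea (the toy model `f`, its gradient, and all numbers here are illustrative assumptions, not taken from the video):

```python
import numpy as np

# Toy differentiable "model": f(x) = x0^2 + 2*x1, with an analytic gradient.
# A real use case would swap in a neural network and autodiff gradients.
def f(x):
    return x[0] ** 2 + 2 * x[1]

def grad_f(x):
    return np.array([2 * x[0], 2.0])

def integrated_gradients(x, baseline, steps=50):
    # Average the gradient at evenly spaced points on the straight line
    # from the baseline to the input, then scale by (x - baseline).
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([3.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline)
# Completeness: the attributions sum to f(x) - f(baseline).
```

For this toy model the attributions sum exactly to `f(x) - f(baseline)`, illustrating the completeness property that makes the method attractive for explaining model predictions.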

AI Explained Video series: What are Shapley Values?

We are starting a video series with quick, short snippets of information on Explainability and Explainable AI. The first in the series is on Shapley values – their axioms, their challenges, and how they apply to the explainability of ML models. The Shapley value is an elegant attribution method from Cooperative Game Theory dating back to 1953.
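To make the game-theoretic idea concrete: the Shapley value of a player is that player's marginal contribution to a coalition, averaged over all orders in which the players could join. A small self-contained sketch with an illustrative three-player game (the value function `v` and its numbers are made up for this example, not from the video):

```python
from itertools import permutations

# Illustrative coalition game over players {0, 1, 2}: v(S) is the payoff
# coalition S produces on its own, with v(empty set) = 0.
def v(S):
    payoffs = {(): 0, (0,): 1, (1,): 2, (2,): 2,
               (0, 1): 4, (0, 2): 3, (1, 2): 4, (0, 1, 2): 6}
    return payoffs[tuple(sorted(S))]

def shapley_values(players):
    # Average each player's marginal contribution over every join order.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            phi[p] += v(coalition + [p]) - v(coalition)
            coalition.append(p)
    return {p: phi[p] / len(orders) for p in players}

phi = shapley_values([0, 1, 2])
# Efficiency axiom: the values sum to v of the grand coalition, here 6.
```

In ML explainability the "players" become input features and `v(S)` the model's output with only the features in `S` present; the exact enumeration above is exponential in the number of players, which is one of the challenges the video discusses approximating around.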

Counterfactual Explanations vs. Attribution based Explanations

This post is co-authored by Aalok Shanbhag and Ankur Taly. As "black box" machine learning models spread to high-stakes domains (e.g., lending, hiring, and healthcare), there is a growing need to explain their predictions from end-user, regulatory, operations, and societal perspectives. Consequently, practical and scalable explainability approaches are being developed at a rapid pace.

FAccT 2020 – Three Trends in Explainability

Last month, I was privileged to attend the ACM Fairness, Accountability, and Transparency (FAccT) conference in Barcelona on behalf of Fiddler Labs. (For those familiar with the conference, notice the new acronym!) While I have closely followed papers from this conference, this was my first time attending it in person.