Explainable Monitoring: Stop flying blind and monitor your AI

Data Science teams find Explainable Monitoring essential to manage their AI.

The Need for AI/ML Monitoring

We’re living in unprecedented times: in a matter of a few weeks, things changed dramatically for people and businesses around the world. With COVID-19 spreading across the globe, and …

AI Explained Video series: What are Shapley Values?

We are starting a video series with quick, short snippets of information on explainability and Explainable AI. The first in the series covers Shapley values: their axioms, their challenges, and how they apply to the explainability of ML models. Shapley values are an elegant attribution method from cooperative game theory dating back to 1953. It …
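The video itself isn’t reproduced here, but the core idea is compact enough to sketch: a player’s Shapley value is its marginal contribution to a coalition, averaged over all orderings of the players. The toy game, payoff numbers, and function names below are invented for illustration; they are not from the video.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over every ordering of the players."""
    contrib = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            contrib[p] += value(frozenset(coalition)) - before
    return {p: c / len(orderings) for p, c in contrib.items()}

# Toy cooperative game: v(S) is the payoff when coalition S cooperates.
# In the ML-explainability analogy, "players" are features and v(S) is
# the model output with only the features in S present.
payoffs = {
    frozenset(): 0.0,
    frozenset({"a"}): 10.0,
    frozenset({"b"}): 20.0,
    frozenset({"a", "b"}): 50.0,
}
phi = shapley_values(["a", "b"], lambda s: payoffs[s])
# phi["a"] + phi["b"] equals v({a, b}) — the efficiency axiom.
```

Exact computation enumerates all n! orderings, which is why practical explainability tools approximate Shapley values by sampling.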

The State of Explainability: Impressions from Partnership on AI (PAI)’s Workshop in NYC

Two weeks ago, I had the opportunity to spend a day with 40 researchers, technologists, regulators, policy-makers, lawyers, and social scientists at PAI’s workshop on Explainability in ML. PAI is a nonprofit that collaborates with over 100 organizations across the academic, governmental, industrial, and nonprofit sectors to evaluate the social impact of artificial intelligence and to …

Counterfactual Explanations vs. Attribution-based Explanations

This post is co-authored by Aalok Shanbhag and Ankur Taly. As “black box” machine learning models spread to high-stakes domains (e.g., lending, hiring, and healthcare), there is a growing need for explaining their predictions from end-user, regulatory, operations, and societal perspectives. Consequently, practical and scalable explainability approaches are being developed at a rapid pace. …
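The full comparison is in the linked post; as a minimal illustration of the contrast, an attribution explanation scores how much each feature contributed to a prediction, while a counterfactual explanation gives a changed input that flips the decision. The toy linear credit-scoring model, threshold, and feature values below are invented for this sketch:

```python
# Hypothetical linear credit model: approve if score >= 10.
def score(income, debt):
    return 0.5 * income - 1.0 * debt

applicant = {"income": 30, "debt": 8}        # score = 7 -> denied

# Attribution-based explanation (relative to an all-zeros baseline):
# for a linear model, each feature's contribution is weight * value,
# and the contributions sum to the score.
attribution = {
    "income": 0.5 * applicant["income"],     # +15: pushes toward approval
    "debt": -1.0 * applicant["debt"],        # -8: pushes toward denial
}

# Counterfactual explanation: a nearby input that flips the decision.
# "Had your debt been 5 instead of 8, you would have been approved."
counterfactual = {"income": 30, "debt": 5}   # score = 10 -> approved
```

The two answer different questions: attribution explains *why this prediction* (useful for debugging and auditing), while a counterfactual explains *what to change* (useful as end-user recourse).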