AI Explained Video series: What are Shapley Values?

We are starting a video series with quick, short snippets of information on Explainability and Explainable AI. The first in the series covers Shapley values: their axioms, their challenges, and how they apply to the explainability of ML models. Shapley values are an elegant attribution method from cooperative game theory dating back to 1953. It … Continue reading “AI Explained Video series: What are Shapley Values?”
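As a taste of what the video covers, here is a minimal sketch of exact Shapley value computation for a toy two-player cooperative game. The characteristic function `v` and the player names are made up for illustration; the code averages each player's marginal contribution over all orderings, which is the textbook definition (practical ML explainers approximate this, since exact enumeration is exponential in the number of players).

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over every ordering of the players."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p given who arrived before it
            phi[p] += value(frozenset(coalition)) - before
    return {p: phi[p] / len(orderings) for p in players}

# Hypothetical characteristic function for a two-player game
v = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 20,
    frozenset({"A", "B"}): 50,
}
phi = shapley_values(["A", "B"], lambda s: v[s])
print(phi)  # A gets 20.0, B gets 30.0
```

Note that the attributions sum to the grand-coalition value (50), illustrating the efficiency axiom the video discusses.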

The State of Explainability: Impressions from Partnership on AI (PAI)’s Workshop in NYC

Two weeks ago, I had the opportunity to spend a day with 40 researchers, technologists, regulators, policy-makers, lawyers, and social scientists at PAI’s workshop on Explainability in ML. PAI is a nonprofit that collaborates with over 100 organizations, across academic, governmental, industrial, and nonprofit sectors, to evaluate the social impact of artificial intelligence and to … Continue reading “The State of Explainability: Impressions from Partnership on AI (PAI)’s Workshop in NYC”

Counterfactual Explanations vs. Attribution based Explanations

This post is co-authored by Aalok Shanbhag and Ankur Taly. As “black box” machine learning models spread to high-stakes domains (e.g., lending, hiring, and healthcare), there is a growing need to explain their predictions from end-user, regulatory, operations, and societal perspectives. Consequently, practical and scalable explainability approaches are being developed at a rapid pace. … Continue reading “Counterfactual Explanations vs. Attribution based Explanations”

The Next Generation of AI: Explainable AI

For most businesses, decisions — from creating marketing taglines to which merger or acquisition to approve — are made solely by humans using instinct, expertise, and understanding built through years of experience. However, these decisions are invariably susceptible to the nuances of human nature: they carry bias, stereotypes, and inconsistent influences. And while humans eventually … Continue reading “The Next Generation of AI: Explainable AI”

CIO outlook 2020: Building an explainable AI strategy for your company

An AI strategy needs to include explainability for complete visibility into machine-generated decisions. Explainable AI ensures there is a human in the loop of the AI process: the machine provides transparent and reliable explanations, and the human can correct the machine when its decisions are wrong. The sooner such an AI … Continue reading “CIO outlook 2020: Building an explainable AI strategy for your company”