Latest Explainable AI Newsletter: January 2020

‘CBInsights Game Changer’ 

We’re honored to be named a 2020 Game Changer by CB Insights, which “identified high-momentum companies pioneering new ways to solve big problems.” Read on CBInsights


CIO Outlook: Building an XAI Strategy

Our CEO, Krishna Gade, discusses why an AI strategy must include explainability to provide complete visibility into machine-generated decisions. Read on ITProPortal


Responsible AI with Model Risk Management (MRM)

Our CPO, Amit Paka, discusses responsible AI with model risk management (MRM), including how transparency into models offers tangible benefits, such as giving model reviewers actionable insights. Read on Forbes


XAI: the next generation of AI

Our Head of PMM, Anusha Sethuraman, discusses why explainable AI is the future of business decision-making and plays a role in every stage of the AI lifecycle. Read on Datanami


What we’re reading/watching

3 reasons to open the AI black-box – Dr. Harry Shum

‘If we want to help humans improve AI, we need to understand it. Where are those errors happening? To which part of the data segments? With which part of the model?’ Watch on a16z’s YouTube channel

Explainable machine learning in deployment – Umang Bhatt at the ACM FAT* (Fairness, Accountability, and Transparency) conference

‘Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice.’ Read on arXiv

AI isn’t dangerous, but human bias is – Richard Socher, Salesforce

‘AI is rapidly becoming part of the fabric of our daily lives as it moves out of academia and research labs and into the real world… But I do believe we have to think about any unintended consequences of using this technology.’ Read on WEF

We still don’t understand how YouTube’s algorithm works – and that’s a problem – Chico Camargo, FastCompany

‘Many experts think that YouTube contributes to radicalization, showing people increasingly extreme videos. But we won’t know for sure unless the company shows researchers what’s under the hood.’ Read on FastCompany

Subscribe to our monthly Explainable AI newsletter