How to Design to Make AI Explainable

Explainable AI, until recently a topic of research, is now mainstream. Recent research has enabled insights into the behavior of inherently black-box AI models, insights that can address the otherwise significant business risks of bias, compliance failures, and opaque outcomes. However, many platforms and solutions provide these explanations either as flat numbers via API or …

Introducing Slice and Explain™ – Automated Insights for your AI Models

Today, we’re announcing the launch of Slice and Explain™, an industry-first integrated AI analytics workflow powered by explainable AI that expands Fiddler’s industry-leading AI explanations. Explainable AI, until recently a topic of research, is now mainstream. But ML practitioners still struggle to use it to draw meaningful insights from their AI models, detect potential …

Fed Opens Up Alternative Data – More Credit, More Algorithms, More Regulation

A Dec. 4 joint statement released by the Federal Reserve Board, the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of the Comptroller of the Currency (OCC) highlighted the importance of consumer protections when using alternative data (such as cash flow data) across a wide range …

Explainable AI Podcast: S&P Global & Fiddler discuss AI, explainability, and machine learning

We recently chatted with Ganesh Nagarathnam, Director of Analytics and Machine Learning Engineering at S&P Global. Take a listen to the podcast below or read the transcript. (Transcript edited for clarity and length.) Listen to all the Explainable AI Podcasts here. Fiddler: Welcome to Fiddler’s Explainable AI Podcast. I’m Anusha Sethuraman. And today I have …

Regulations To Trust AI Are Here. And It’s a Good Thing.

This article was previously posted on Forbes. As artificial intelligence (AI) adoption grows, so do the risks of today’s typical black-box AI. These risks include customer mistrust, brand risk, and compliance risk. As recently as last month, concerns that AI-driven facial recognition was biased against certain demographics resulted in a PR backlash. With customer protection …