Podcast: Founder & CTO of Elixr AI, Farhan Shah, discusses AI and the need for transparency

We recently chatted with Farhan Shah, Founder & CTO of Elixr AI and former tech executive at large insurance companies. Take a listen to the podcast below or read the transcript. (Transcript lightly edited for clarity and length.) Fiddler: Hello everyone, welcome to today’s podcast. My name is Anusha from Fiddler Labs. Today I have …

How to Design to make AI Explainable

Explainable AI, a topic of research until recently, is now mainstream. Recent research has enabled insights into the behavior of inherently black-box AI models, insights that can address the otherwise significant business risks related to bias, compliance, and opaque outcomes. However, many platforms and solutions provide these explanations either as flat numbers via API or …

Where is AI headed in 2020?

As we reflect on the closing year, many aspects of the 2019 AI journey stand out. Of note – the explainable AI rocket ship has taken off way faster than we anticipated. This is great news for anyone who cares about building responsible and trustworthy AI. The team at Fiddler is prepping for much more …

Introducing Slice and Explain™ – Automated Insights for your AI Models

Today, we’re announcing the launch of an industry-first integrated AI Analytics Workflow powered by Explainable AI, Slice and Explain™, to expand Fiddler’s industry-leading AI explanations. Explainable AI, a topic of research until recently, is now mainstream. But ML practitioners still struggle to use it to get meaningful insights from their AI models, detect potential …

Explainable AI at NeurIPS 2019

The 33rd annual NeurIPS conference has now wrapped up. By the numbers, NeurIPS has become a behemoth, with over 1,400 papers accepted and around 13,000 people registered. The quickly growing field of Explainable AI (XAI) made a noticeable appearance in this multitude of papers and people. Additionally, many papers not geared specifically toward explainability turned …