Featured

How to Design to make AI Explainable

Explainable AI, until recently a research topic, is now mainstream. Recent research has enabled insights into the behavior of inherently black-box AI models, helping address their otherwise significant business risks related to bias, compliance, and opaque outcomes. However, many platforms and solutions provide these explanations either with flat numbers via API or … Continue reading “How to Design to make AI Explainable”

Featured

Where is AI headed in 2020?

As we reflect on the closing year, many aspects of the 2019 AI journey stand out. Of note: the explainable AI rocket ship has taken off far faster than we anticipated. This is great news for anyone who cares about building responsible and trustworthy AI. The team at Fiddler is prepping for much more … Continue reading “Where is AI headed in 2020?”

Featured

Introducing Slice and Explain™ – Automated Insights for your AI Models

Today, we’re announcing the launch of an industry-first integrated AI Analytics Workflow powered by Explainable AI, ‘Slice and Explain’™, expanding Fiddler’s industry-leading AI explanations. Explainable AI, until recently a research topic, is now mainstream. But ML practitioners still struggle to use it to get meaningful insights from their AI models, detect potential … Continue reading “Introducing Slice and Explain™ – Automated Insights for your AI Models”

Featured

Fed Opens Up Alternative Data – More Credit, More Algorithms, More Regulation

A Dec. 4 joint statement released by the Federal Reserve Board, the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of the Comptroller of the Currency (OCC) highlighted the importance of consumer protections in using alternative data (such as cash flow) across a wide range … Continue reading “Fed Opens Up Alternative Data – More Credit, More Algorithms, More Regulation”

Featured

Explainable AI goes mainstream. But who should be explaining?

Bias in AI has come to the forefront in recent months: our recent blog post discussed the alleged bias in the Apple Card/Goldman Sachs case. And this isn’t an isolated instance: racial bias in healthcare algorithms and bias in AI for judicial decisions are just a few more examples of rampant … Continue reading “Explainable AI goes mainstream. But who should be explaining?”

Featured

The never-ending issues around AI and bias. Who’s to blame when AI goes wrong?

We’ve seen it before, we’re seeing it again now with the recent alleged credit card bias issue at Apple and Goldman Sachs, and we’ll very likely keep seeing it well into 2020 and beyond. Bias in AI is real, it’s usually hidden (until it surfaces), and it needs a foundational fix. This past weekend we … Continue reading “The never-ending issues around AI and bias. Who’s to blame when AI goes wrong?”

Featured

Series A: Why now and what’s next for Fiddler?

In the news: VentureBeat, Axios Pro Rata – Venture Capital Deals, TechStartups, SiliconAngle, peHUB, Finsmes, Analytics India Magazine. It has been a year since we founded Fiddler Labs, and the journey so far has been incredible. I’m very excited to announce that we’ve raised a $10.2 million Series A funding round led by Lightspeed Venture … Continue reading “Series A: Why now and what’s next for Fiddler?”

Featured

Regulations To Trust AI Are Here. And It’s a Good Thing.

This article was previously posted on Forbes. As artificial intelligence (AI) adoption grows, so do the risks of today’s typical black-box AI. These risks include customer mistrust, brand risk, and compliance risk. As recently as last month, concerns about AI-driven facial recognition systems biased against certain demographics resulted in a PR backlash. With consumer protection … Continue reading “Regulations To Trust AI Are Here. And It’s a Good Thing.”

Podcast: Founder & CTO of Elixr AI, Farhan Shah, discusses AI and the need for transparency

We recently chatted with Farhan Shah, Founder & CTO of Elixr AI and former tech executive at large insurance companies. Take a listen to the podcast below or read the transcript. (Transcript lightly edited for clarity and length.) Fiddler: Hello everyone, welcome to today’s podcast. My name is Anusha from Fiddler Labs. Today I have … Continue reading “Podcast: Founder & CTO of Elixr AI, Farhan Shah, discusses AI and the need for transparency”

Explainable AI at NeurIPS 2019

The 33rd annual NeurIPS conference has now wrapped up. By the numbers, NeurIPS has become a behemoth, with over 1,400 papers accepted and around 13,000 people registered. The quickly growing field of Explainable AI (XAI) made a noticeable appearance amid this multitude of papers and people. Additionally, many papers not geared specifically toward explainability turned … Continue reading “Explainable AI at NeurIPS 2019”