Identifying bias when sensitive attribute data is unavailable

The perils of automated decision-making systems are becoming increasingly apparent, with racial and gender bias documented in algorithmic hiring decisions, health care provision, and beyond. Decisions made by algorithmic systems may reflect issues with the historical data used to build them, and understanding discriminatory patterns in these systems can be a challenging task [1]. Moreover, … Continue reading “Identifying bias when sensitive attribute data is unavailable”

Explainable AI Podcast: Merve Hickok explains the importance of ethical AI and its future

We recently chatted with Merve Hickok, founder of Lighthouse Career Consulting. Take a listen to the podcast below or read the transcript. (Transcript lightly edited for clarity.) Listen to all the Explainable AI Podcasts here. Fiddler: Welcome to the Fiddler Explainable AI podcast. My name is Anusha Sethuraman. Today I have with me Merve Hickok, … Continue reading “Explainable AI Podcast: Merve Hickok explains the importance of ethical AI and its future”

FAccT 2020 – Three Trends in Explainability

Last month, I was privileged to attend the ACM Fairness, Accountability, and Transparency (FAccT) conference in Barcelona on behalf of Fiddler Labs. (For those familiar with the conference, notice the new acronym!) While I have closely followed papers from this conference, this was my first time attending in person. It was amazing to see … Continue reading “FAccT 2020 – Three Trends in Explainability”

The Next Generation of AI: Explainable AI

For most businesses, decisions, from crafting marketing taglines to approving mergers and acquisitions, have long been made solely by humans using instinct, expertise, and understanding built through years of experience. These decisions, however, are invariably susceptible to the nuances of human nature: they carry bias, stereotypes, and inconsistent influences. And while humans eventually … Continue reading “The Next Generation of AI: Explainable AI”

Responsible AI With Model Risk Management

The desire among financial institutions to better mitigate risk gained renewed prominence as a result of the financial crisis of 2008. Subsequent regulatory and governance requirements fostered interest in risk modeling and sophisticated forecasting based on artificial intelligence (AI) to improve outcomes. It now seems common to have AI-driven models supporting decision making related to … Continue reading “Responsible AI With Model Risk Management”