Identifying bias when sensitive attribute data is unavailable: Geolocation in Mortgage Data

In our last post, we explored data on mortgage applicants from 2017, released in accordance with the Home Mortgage Disclosure Act (HMDA). We will use that data, which includes applicants’ self-reported race, to test how well we can infer race from applicants’ geolocations, in an effort to better understand methods for inferring missing sensitive …
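As a rough illustration of the kind of geography-based proxy the post investigates, here is a minimal sketch (all tract names, group labels, and demographic shares below are hypothetical, not drawn from the HMDA data) that predicts each applicant’s most likely group from their census tract alone, then scores the predictions against self-reported labels:

```python
# Hypothetical census-tract demographic shares (illustrative numbers only).
tract_shares = {
    "tract_A": {"group_1": 0.80, "group_2": 0.20},
    "tract_B": {"group_1": 0.30, "group_2": 0.70},
}

# Each applicant: (census tract, self-reported group), as HMDA-style data
# would provide for validation.
applicants = [
    ("tract_A", "group_1"),
    ("tract_A", "group_2"),
    ("tract_B", "group_2"),
]

def predict(tract):
    """Predict the modal demographic group for a tract."""
    shares = tract_shares[tract]
    return max(shares, key=shares.get)

correct = sum(predict(tract) == label for tract, label in applicants)
print(correct / len(applicants))  # 2 of 3 correct in this toy example
```

Comparing such proxy predictions against the self-reported labels is what lets us quantify how reliable geolocation alone is as a stand-in for the missing attribute.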

Identifying bias when sensitive attribute data is unavailable: Exploring Data from the HMDA

To test their automated systems for possible bias across racial or gender lines, organizations may seek to know which individuals belong to each race and gender group. However, such information may not be easily accessible, and organizations may use techniques to infer it in the absence of available data [1]. Here, we explore a …

[Video] AI Explained: What are Integrated Gradients?

We started a video series with quick, short snippets of information on Explainability and Explainable AI. The second in this series covers Integrated Gradients: what the method is and where it applies. Learn more in the ~10-minute video below. The first in the series, on Shapley values, can be watched here. What …
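For a flavor of the method, here is a minimal sketch (the toy function, baseline, and step count are illustrative assumptions, not taken from the video) that approximates integrated gradients for a scalar function with a midpoint Riemann sum along the straight-line path from a baseline, using finite-difference gradients:

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Approximate integrated gradients of scalar f at x relative to
    a baseline: (x - baseline) times the path-averaged gradient."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        # Midpoint of the k-th segment on the straight-line path.
        alpha = (k - 0.5) / steps
        point = baseline + alpha * (x - baseline)
        # Central finite-difference gradient at this path point.
        grad = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (f(point + d) - f(point - d)) / (2 * eps)
        total += grad
    return (x - baseline) * total / steps

# Toy model: f(x) = x0 * x1. By the completeness axiom, the attributions
# should sum to f(x) - f(baseline) = 6 - 0 = 6.
f = lambda x: x[0] * x[1]
attr = integrated_gradients(f, x=[2.0, 3.0], baseline=[0.0, 0.0])
print(attr, attr.sum())
```

The completeness check at the end (attributions summing to the difference in model outputs) is one of the axioms that motivates the method.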

Explainable Monitoring: Stop flying blind and monitor your AI

Data Science teams find Explainable Monitoring essential to manage their AI. The Need for AI/ML Monitoring: We’re living in unprecedented times where, in a matter of a few weeks, things changed dramatically for many people and businesses across the globe. With COVID-19 spreading worldwide, and …

AI Explained Video series: What are Shapley Values?

We are starting a video series with quick, short snippets of information on Explainability and Explainable AI. The first in the series is on Shapley values – their axioms, challenges, and how they apply to the explainability of ML models. Shapley values are an elegant attribution method from Cooperative Game Theory dating back to 1953. It …
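As a small illustration of the underlying game-theoretic idea, here is a minimal sketch (the "glove game" payoffs are a standard textbook toy example, not taken from the video) that computes exact Shapley values by averaging each player’s marginal contribution over all orders in which the coalition can form:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all join orders of the grand coalition."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in totals.items()}

# Toy "glove game": players 1 and 2 each hold a left glove, player 3 a
# right glove; a coalition is worth 1 only if it can form a matching pair.
def v(coalition):
    return 1.0 if 3 in coalition and ({1, 2} & coalition) else 0.0

print(shapley_values([1, 2, 3], v))  # player 3's scarce glove earns 2/3
```

In model explainability the same averaging idea is applied with features as "players" and the model’s prediction as the payoff, though exact enumeration becomes infeasible and must be approximated.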