Identifying bias when sensitive attribute data is unavailable: Geolocation in mortgage data

In our last post, we explored data on mortgage applicants from 2017, released in accordance with the Home Mortgage Disclosure Act (HMDA). Here we use that data, which includes applicants' self-reported race, to test how well we can infer race from applicants' geolocations, as part of our broader effort to understand methods for inferring missing sensitive attribute data.
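To make the geolocation-based approach concrete, here is a minimal sketch of its simplest form: assign each applicant the predominant race of their census tract, then score that prediction against self-reported race. The tract proportions, column names, and applicant records below are hypothetical stand-ins, not actual HMDA or Census figures.

```python
import pandas as pd

# Hypothetical tract-level race proportions (stand-ins for ACS data).
tracts = pd.DataFrame({
    "tract_id": ["T001", "T002"],
    "p_white": [0.70, 0.20],
    "p_black": [0.10, 0.60],
    "p_asian": [0.15, 0.10],
    "p_hispanic": [0.05, 0.10],
})

# Hypothetical applicants with self-reported race, as in the HMDA data.
applicants = pd.DataFrame({
    "applicant_id": [1, 2, 3],
    "tract_id": ["T001", "T002", "T002"],
    "self_reported_race": ["white", "black", "asian"],
})

merged = applicants.merge(tracts, on="tract_id")
prob_cols = ["p_white", "p_black", "p_asian", "p_hispanic"]

# Point prediction: the most prevalent group in the applicant's tract
# (strip the "p_" prefix from the winning column name).
merged["predicted_race"] = merged[prob_cols].idxmax(axis=1).str[2:]

# Score the proxy against self-reported race.
accuracy = (merged["predicted_race"] == merged["self_reported_race"]).mean()
print(merged[["applicant_id", "predicted_race", "self_reported_race"]])
print(f"Accuracy against self-reported race: {accuracy:.2f}")
```

Taking the tract's majority group is a deliberately crude baseline; the tract proportions themselves can instead be kept as probabilities and carried through a downstream fairness analysis.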

Identifying bias when sensitive attribute data is unavailable: Techniques for inferring protected characteristics

To evaluate whether decisions in lending, health care, hiring, and beyond are made equitably across race or gender groups, organizations must know which race and gender group each individual belongs to. However, as we explored in our last post, the sensitive attribute data needed to conduct analyses of bias and fairness may not always be available.
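One widely used technique in this family is Bayesian Improved Surname Geocoding (BISG), which combines a surname-based race prior with tract-level demographics via Bayes' rule. The sketch below shows the core arithmetic for a single applicant; all probabilities are illustrative placeholders rather than real Census Bureau figures.

```python
import numpy as np

races = ["white", "black", "asian", "hispanic"]

# P(race | surname): hypothetical values for a single surname.
p_race_given_surname = np.array([0.55, 0.05, 0.30, 0.10])

# P(race | tract) and overall population shares P(race); both hypothetical.
p_race_given_tract = np.array([0.20, 0.60, 0.10, 0.10])
p_race_overall = np.array([0.60, 0.13, 0.06, 0.18])

# Bayes' rule, assuming surname and geography are conditionally
# independent given race:
#   P(race | surname, tract) ∝ P(race | surname) * P(race | tract) / P(race)
posterior = p_race_given_surname * p_race_given_tract / p_race_overall
posterior /= posterior.sum()

for race, p in zip(races, posterior):
    print(f"{race:>8}: {p:.3f}")
```

In practice, the surname prior comes from the Census Bureau's surname tables and the geographic term from tract-level ACS demographics.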

Explainable AI goes mainstream. But who should be explaining?

Bias in AI has come to the forefront in recent months: our recent blog post discussed the alleged bias in the Apple Card, issued by Goldman Sachs. And this isn't an isolated instance: racial bias in healthcare algorithms and bias in AI for judicial decisions are just a few more examples of rampant algorithmic bias.