Identifying bias when sensitive attribute data is unavailable: Techniques for inferring protected characteristics

To evaluate whether decisions in lending, health care, hiring, and beyond are made equitably across race or gender groups, organizations must know which individuals belong to each group. However, as we explored in our last post, the sensitive attribute data needed to conduct analyses of bias and fairness may not always be …
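One common proxy technique for exactly this situation is Bayesian Improved Surname Geocoding (BISG), which estimates race/ethnicity probabilities from an individual's surname and geography when the attribute itself was never collected. The snippet below is a minimal sketch of that idea, not necessarily the method the full post describes: the probability tables are made up for illustration (real applications draw on Census surname and block-group data), and the names `p_race_given_surname`, `p_race_given_geo`, and `bisg_posterior` are hypothetical.

```python
import pandas as pd

# Hypothetical marginal tables: P(race | surname) and P(race | census tract).
# Real BISG implementations use Census surname lists and block-group demographics;
# these numbers are invented purely for illustration.
p_race_given_surname = pd.DataFrame(
    {"white": [0.05, 0.90], "black": [0.03, 0.05],
     "hispanic": [0.90, 0.03], "other": [0.02, 0.02]},
    index=["GARCIA", "SMITH"],
)
p_race_given_geo = pd.DataFrame(
    {"white": [0.20, 0.70], "black": [0.60, 0.10],
     "hispanic": [0.15, 0.15], "other": [0.05, 0.05]},
    index=["tract_A", "tract_B"],
)
p_race_prior = pd.Series({"white": 0.60, "black": 0.13, "hispanic": 0.18, "other": 0.09})

def bisg_posterior(surname: str, tract: str) -> pd.Series:
    """Combine surname- and geography-based probabilities under a naive
    conditional-independence assumption, then renormalize."""
    # P(race | surname, geo) is proportional to P(race | surname) * P(race | geo) / P(race)
    unnormalized = p_race_given_surname.loc[surname] * p_race_given_geo.loc[tract] / p_race_prior
    return unnormalized / unnormalized.sum()

# Example: probabilities for a hypothetical applicant named Garcia living in tract_A.
print(bisg_posterior("GARCIA", "tract_A"))
```

The resulting probabilities can then be used to weight fairness metrics across inferred groups, with the usual caveat that proxy-based estimates carry their own error and should be validated before they drive decisions.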

Identifying bias when sensitive attribute data is unavailable

The perils of automated decision-making systems are becoming increasingly apparent, with racial and gender bias documented in algorithmic hiring decisions, health care provision, and beyond. Decisions made by algorithmic systems may reflect issues with the historical data used to build them, and understanding discriminatory patterns in these systems can be a challenging task [1]. Moreover, …

FAccT 2020 – Three Trends in Explainability

Last month, I was privileged to attend the ACM Fairness, Accountability, and Transparency (FAccT) conference in Barcelona on behalf of Fiddler Labs. (For those familiar with the conference, notice the new acronym!) While I have closely followed papers from this conference, this was my first time attending it in person. It was amazing to see …

The never-ending issues around AI and bias. Who’s to blame when AI goes wrong?

We’ve seen it before, we’re seeing it again now with the recent allegations of bias in the Apple and Goldman Sachs credit card, and we’ll very likely continue seeing it well into 2020 and beyond. Bias in AI is there, it’s usually hidden (until it comes out), and it needs a foundational fix. This past weekend we …