Identifying bias when sensitive attribute data is unavailable: Techniques for inferring protected characteristics

To evaluate whether decisions in lending, health care, hiring, and beyond are made equitably across race or gender groups, organizations must know which individuals belong to each group. However, as we explored in our last post, the sensitive attribute data needed to conduct analyses of bias and fairness may not always be …
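One widely discussed family of techniques for this problem is proxy inference, such as Bayesian Improved Surname Geocoding (BISG), which combines surname-conditional and geography-conditional race distributions into a posterior estimate. The sketch below is purely illustrative: the probability tables are made-up numbers (not real census figures), and the function name is ours, not from any particular library.

```python
# Illustrative BISG-style sketch: combine surname and geography evidence.
# All probabilities here are invented for demonstration, not real census data.

# P(race | surname): illustrative surname-conditional distributions
p_race_given_surname = {
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian": 0.02},
    "SMITH":  {"white": 0.73, "black": 0.23, "hispanic": 0.02, "asian": 0.02},
}

# P(race | geography): illustrative census-tract-level distributions
p_race_given_geo = {
    "tract_A": {"white": 0.60, "black": 0.20, "hispanic": 0.15, "asian": 0.05},
    "tract_B": {"white": 0.10, "black": 0.05, "hispanic": 0.80, "asian": 0.05},
}

def bisg_posterior(surname, tract):
    """Multiply the two conditional distributions and renormalize,
    treating surname and geography as (approximately) independent evidence."""
    surname_probs = p_race_given_surname[surname]
    geo_probs = p_race_given_geo[tract]
    unnormalized = {r: surname_probs[r] * geo_probs[r] for r in surname_probs}
    total = sum(unnormalized.values())
    return {r: v / total for r, v in unnormalized.items()}

probs = bisg_posterior("GARCIA", "tract_B")
# The posterior concentrates where both evidence sources agree.
```

The key design point is the independence assumption: surname and neighborhood are treated as separate pieces of evidence about the same unobserved attribute, which is why the two tables can simply be multiplied and renormalized.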

Explainable AI goes mainstream. But who should be explaining?

Bias in AI is an issue that has come to the forefront in recent months: our recent blog post discussed the alleged bias in the Apple Card/Goldman Sachs case. And this isn't an isolated instance. Racial bias in healthcare algorithms and bias in AI for judicial decisions are just a few more examples of rampant … Continue reading "Explainable AI goes mainstream. But who should be explaining?"