Artificial Intelligence (AI) algorithms used for crime detection, loan approvals, and employee evaluations are widely assumed to be objective, but they can encode many of the same prejudices and biases as the human evaluators they replace. Given the opacity of many black-box approaches to AI, this can lead to serious problems with fairness and equity. This article discusses an admittedly imperfect approach by Microsoft to evaluate these AI algorithms using (surprise!) another AI algorithm. It flags situations where an algorithm appears to engage in unfair differential treatment based on race, gender, or age.
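The article does not detail Microsoft's method, but the kind of check described, flagging differential treatment across protected groups, can be sketched with a simple demographic-parity style comparison. Everything below (function names, threshold, sample data) is an illustrative assumption, not the actual tool:

```python
# Hypothetical sketch: compare a model's positive-outcome (approval) rates
# across groups defined by a protected attribute and flag large gaps.
# The 0.1 threshold and all data are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def flag_disparity(decisions_by_group, threshold=0.1):
    """Return groups whose approval rate deviates from the overall rate
    by more than `threshold` (a demographic-parity style check)."""
    all_decisions = [d for ds in decisions_by_group.values() for d in ds]
    overall = approval_rate(all_decisions)
    return {
        group: rate
        for group, ds in decisions_by_group.items()
        if abs((rate := approval_rate(ds)) - overall) > threshold
    }

# Example: loan decisions split by a (hypothetical) protected attribute.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],  # 25% approved
}
flagged = flag_disparity(decisions)  # both groups deviate from the 50% overall rate
```

A real auditing system would use statistical tests and additional fairness criteria (equalized odds, calibration), but the core idea of flagging outcome gaps by group is the same.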