algorithmic fairness in AI

AI-based algorithms are increasingly used to make decisions about humans, for example to screen job applicants or to inform judgments in the courtroom. A majority of these algorithms use machine learning and are trained on historical data that reflect societal stereotypes and biases. As a result, these algorithms have often been shown to be biased with respect to sensitive attributes of individuals, such as gender and race.

I develop algorithms for monitoring and rectifying the biases of deployed AI decision makers at runtime. These algorithms complement existing design-time techniques for training fair ML models, and can serve as trusted third-party “fairness watchdogs.”
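
To make the idea of a runtime fairness watchdog concrete, here is a minimal sketch, assuming demographic parity as the fairness metric: the monitor observes a stream of (group, decision) pairs from a deployed decision maker and raises an alert when the gap in positive-decision rates between groups exceeds a threshold. The class name, the metric, and the threshold are illustrative assumptions, not the specific algorithms described above.

```python
from collections import defaultdict


class DemographicParityMonitor:
    """Illustrative runtime fairness monitor (demographic parity).

    Tracks the positive-decision rate per group over a decision stream
    and alerts when the spread between groups exceeds a threshold.
    The metric and threshold here are hypothetical choices.
    """

    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold          # maximum tolerated rate gap
        self.positives = defaultdict(int)   # positive decisions per group
        self.totals = defaultdict(int)      # total decisions per group

    def observe(self, group: str, decision: bool) -> bool:
        """Record one decision; return True if an alert is raised."""
        self.totals[group] += 1
        self.positives[group] += int(decision)
        rates = [self.positives[g] / self.totals[g] for g in self.totals]
        # Alert when positive-decision rates diverge too much across groups.
        return len(rates) > 1 and max(rates) - min(rates) > self.threshold


# Example: feed the monitor a stream of (group, decision) observations.
monitor = DemographicParityMonitor(threshold=0.1)
stream = [("A", True), ("B", False), ("A", True), ("B", False), ("A", True)]
for group, decision in stream:
    if monitor.observe(group, decision):
        print(f"fairness alert after a decision for group {group}")
```

A real watchdog would need to account for statistical uncertainty in the observed rates (e.g., via confidence intervals over the stream) rather than alerting on raw gaps, but the structure is the same: observe decisions as they happen, maintain group-wise statistics, and flag violations online.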