Automated decision making and classification algorithms have become ubiquitous, with applications ranging from image identification and loan processing to public safety and policing. However, there is growing concern that in certain use cases these algorithms can discriminate or introduce new biases. Pressure is mounting from both industry and government bodies to develop more socially responsible artificial intelligence, and a growing body of literature is dedicated to exploring this field.
Our special session aims to highlight important mathematical developments promoting fairness in machine learning. Participants will see first-hand how fairness can be mathematically formulated and applied to well-established machine learning frameworks.
TOPICS COVERED:
- Existing metrics for measuring fairness in classification outcomes (e.g., parity, disparate treatment, disparate impact); a brief computational illustration follows this list
- Developing responsible AI to minimize the harmful effects of automated decision making on vulnerable populations
- Techniques for solving fairness-constrained optimization problems in classification, unsupervised learning, regression, and natural language processing (see the optimization sketch after this list)
- Improving fairness in AI used in criminal justice, job screening, and loan applications
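To make the metrics topic concrete, the following is a minimal sketch of how two common group-fairness measures might be computed for binary predictions. The function name and toy data are hypothetical illustrations, not drawn from the session materials:

    import numpy as np

    def fairness_metrics(y_pred, group):
        """Compute two common group-fairness metrics for binary predictions.

        y_pred : array of 0/1 predicted labels
        group  : array of 0/1 sensitive-attribute indicators
        """
        rate_a = y_pred[group == 0].mean()  # positive-prediction rate, group 0
        rate_b = y_pred[group == 1].mean()  # positive-prediction rate, group 1
        parity_gap = abs(rate_a - rate_b)   # statistical parity difference
        disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)  # DI ratio
        return parity_gap, disparate_impact

    # Toy usage: a disparate-impact ratio below 0.8 is often flagged
    # under the "80% rule".
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(fairness_metrics(y_pred, group))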
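For the constrained-optimization topic, the sketch below shows one common relaxation: logistic regression whose loss is augmented with a demographic-parity penalty, a soft version of a hard fairness constraint. All names, the penalty weight lam, and the synthetic data are illustrative assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def fair_logreg(X, y, group, lam=1.0):
        """Logistic regression with a demographic-parity penalty.

        Minimizes logistic loss + lam * (mean score gap between groups)^2,
        a penalized stand-in for a hard fairness constraint.
        """
        def objective(w):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
            loss = -np.mean(y * np.log(p + 1e-12)
                            + (1 - y) * np.log(1 - p + 1e-12))
            gap = p[group == 0].mean() - p[group == 1].mean()  # parity gap
            return loss + lam * gap ** 2
        w0 = np.zeros(X.shape[1])
        return minimize(objective, w0, method="L-BFGS-B").x

    # Toy usage with synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    group = (rng.random(200) < 0.5).astype(int)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(int)
    print(fair_logreg(X, y, group, lam=5.0))

Increasing lam trades predictive accuracy against the parity gap; hard constraints can instead be handled with projection or reduction-based approaches, which the session's constrained-optimization talks may cover in more depth.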
Organizers: Stan Uryasev, Robert Golan