Consistent Plug-in Classifiers for Complex Objectives and Constraints

Harish Ramaswamy
Shiv Tavker
NeurIPS 2020

Abstract

We present a consistent algorithm for constrained classification problems where the objective (e.g. F-measure, G-mean) and the constraints (e.g. demographic parity fairness, coverage) are defined by general functions of the confusion matrix. Our approach reduces the problem to a sequence of plug-in classifier learning tasks. The reduction is achieved by posing the learning problem as an optimization over the intersection of two sets: the set of achievable confusion matrices and the set of feasible confusion matrices. This decoupling of the constraint space then allows us to solve the problem with a Frank-Wolfe-based algorithm. For objectives and constraints that are convex functions of the confusion matrix, our algorithm requires $O(1/\epsilon^2)$ calls to the plug-in subroutine, which improves on the $O(1/\epsilon^3)$ rate needed by the reduction-based algorithm of Narasimhan (2018). We show empirically that our algorithm is competitive with prior methods.
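To make the reduction concrete, below is a minimal sketch of the core idea for the unconstrained binary case: a Frank-Wolfe loop over achievable confusion matrices in which each linear minimization step is solved by a cost-sensitive plug-in classifier that thresholds an estimated class-probability function. This is an illustrative sketch, not the paper's exact algorithm; it omits the constraint-handling over the feasible set, and the names `eta`, `objective_grad`, `plugin_classifier`, and `frank_wolfe_plugin` are hypothetical.

```python
# Illustrative sketch (not the authors' exact method): Frank-Wolfe over the
# set of achievable binary confusion matrices, with a cost-sensitive plug-in
# classifier as the linear minimization oracle. Constraints are omitted.

import numpy as np

def plugin_classifier(loss_matrix, eta):
    """Classifier that, for each x, predicts the class minimizing the
    expected linear (cost-sensitive) loss under the estimate eta(x)."""
    def h(X):
        p = eta(X)                       # assumed estimate of P(y=1 | x)
        probs = np.stack([1 - p, p], 1)  # columns: P(y=0|x), P(y=1|x)
        return np.argmin(probs @ loss_matrix, axis=1)
    return h

def confusion(h, X, y):
    """Empirical confusion matrix [[TN, FP], [FN, TP]], normalized."""
    pred = h(X)
    C = np.zeros((2, 2))
    for t, p in zip(y, pred):
        C[t, p] += 1
    return C / len(y)

def frank_wolfe_plugin(objective_grad, eta, X, y, T=100):
    """Minimize a smooth convex function of the confusion matrix.
    Returns a randomized classifier as (weight, classifier) pairs."""
    h0 = plugin_classifier(np.array([[0., 1.], [1., 0.]]), eta)  # 0-1 loss
    C = confusion(h0, X, y)
    mixture = [(1.0, h0)]
    for t in range(1, T + 1):
        G = objective_grad(C)            # gradient w.r.t. confusion matrix
        h_t = plugin_classifier(G, eta)  # plug-in solves the linear subproblem
        C_t = confusion(h_t, X, y)
        gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        C = (1 - gamma) * C + gamma * C_t
        mixture = [(w * (1 - gamma), h) for w, h in mixture]
        mixture.append((gamma, h_t))
    return mixture, C
```

Each iteration makes one call to the plug-in subroutine, which is the quantity bounded by the $O(1/\epsilon^2)$ rate stated above; the constrained version additionally restricts the Frank-Wolfe iterates to the feasible set of confusion matrices.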