Serena Lutong Wang

Serena Wang joined Google Research in 2016, where she works with Maya Gupta on building controllable and interpretable machine learning systems. She is also a PhD student in EECS at UC Berkeley, advised by Michael Jordan and supported by an NSF Graduate Research Fellowship. She received a B.A. in Computer Science from Harvard University, where she worked at the intersection of machine learning and operating systems. Serena has presented workshops at Grace Hopper 2018 and ODSC West 2017.


Authored Publications
Abstract: We demonstrate how easy it is for modern machine-learned systems to violate common deontological ethical principles and social norms such as "favor the less fortunate" and "do not penalize good attributes." We propose that in some cases such ethical principles can be incorporated into a machine-learned model by adding shape constraints that constrain the model to respond only positively to relevant inputs. We analyze the relationship between these deontological constraints, which act on individuals, and the consequentialist group-based fairness goals of one-sided statistical parity and equal opportunity. This strategy works with sensitive attributes that are Boolean or real-valued, such as income and age, and can help produce more responsible and trustworthy AI.
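To make the shape-constraint idea concrete, here is a minimal sketch (not the paper's method, and using a hypothetical toy dataset) of how "respond only positively to a relevant input" can be enforced on a simple linear scorer by projecting that input's weight to be non-negative after each gradient step:

```python
# Minimal sketch (not the paper's implementation): enforce "do not penalize a
# good attribute" on a linear scorer by projecting that attribute's weight to
# be non-negative after every gradient step.
import numpy as np

def train_monotone_linear(X, y, monotone_idx, lr=0.1, steps=500):
    """Least-squares linear model whose weights at `monotone_idx` stay >= 0,
    so the score can only respond positively to those inputs."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n                         # mean-squared-error gradient
        w -= lr * grad
        w[monotone_idx] = np.maximum(w[monotone_idx], 0.0)   # projection onto the constraint
    return w

# Hypothetical toy data: column 1 plays the role of the "good attribute".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
w = train_monotone_linear(X, y, monotone_idx=[1])
print(w)  # the weight on column 1 is guaranteed to be non-negative
```

The projection guarantees by construction that increasing the constrained attribute can never decrease the score.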
Abstract: In machine learning applications such as ranking fairness or fairness over intersectional groups, one often encounters optimization problems with an extremely large number of constraints. In particular, with ranking fairness tasks, there may even be a variable number of constraints, e.g. one for each query in the training set. In these cases, the standard approach of optimizing a Lagrangian while maintaining one Lagrange multiplier per constraint may no longer be practical. Our proposal is to associate a feature vector with each constraint, and to learn a "multiplier model" that maps each such vector to the corresponding Lagrange multiplier. We prove optimality and feasibility guarantees under assumptions on the flexibility of the multiplier model, and empirically demonstrate that our method is effective on real-world case studies.
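As an illustration of the multiplier-model idea (a simplified sketch on assumed toy data, not the paper's algorithm), the snippet below predicts non-negative Lagrange multipliers from per-constraint feature vectors and runs plain gradient descent/ascent on a toy linearly constrained problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize ||w||^2 subject to 200 linear constraints a_i . w >= b_i.
# Each constraint carries a feature vector phi_i; its Lagrange multiplier is
# predicted from phi_i rather than being stored individually.
A = rng.normal(size=(200, 5))
Phi = rng.normal(size=(200, 8))                              # hypothetical constraint features
b = A @ rng.normal(size=5) - np.abs(rng.normal(size=200))    # feasible by construction

def multipliers(theta):
    return np.log1p(np.exp(Phi @ theta))                     # softplus keeps multipliers >= 0

w, theta = np.zeros(5), np.zeros(8)
for _ in range(3000):
    viol = b - A @ w                                         # constraint violations v_i(w)
    lam = multipliers(theta)
    grad_w = 2 * w - A.T @ lam                               # d/dw [ ||w||^2 + lam . v ]
    grad_theta = Phi.T @ (viol / (1 + np.exp(-(Phi @ theta))))  # chain rule through softplus
    w -= 1e-3 * grad_w                                       # model player: descend
    theta += 1e-3 * grad_theta                               # multiplier model: ascend
print("max violation:", float((b - A @ w).max()))
```

Note that the number of learned multiplier parameters (here, 8) is fixed regardless of how many constraints there are.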
    Pairwise Fairness for Ranking and Regression
    Andy Cotter
    Maya Gupta
34th AAAI Conference on Artificial Intelligence (2020)
Abstract: Algorithmic tooling for improving the fairness of ranking and regression models is less mature than it is for classifiers. Here, we present pairwise formulations of fairness for ranking and regression models that can express analogues of statistical fairness notions such as equal opportunity or equal accuracy, as well as statistical parity. The proposed framework supports both discrete protected groups and continuous protected attributes. We show that the resulting training problems can be efficiently and effectively solved using constrained optimization or robust optimization algorithms. Experiments illustrate the broad applicability and trade-offs of these methods.
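For intuition, the following sketch computes one possible pairwise quantity on hypothetical data: the rate at which positively labeled examples from a given group are scored above negatively labeled examples. Fairness goals in the spirit of equal opportunity can then be phrased as requiring this rate to be (approximately) equal across groups; the function and data here are illustrative, not the paper's implementation.

```python
import numpy as np

def group_pairwise_accuracy(scores, labels, groups, pos_group):
    """P(score_i > score_j | y_i = 1, y_j = 0, group_i == pos_group)."""
    pos = (labels == 1) & (groups == pos_group)
    neg = labels == 0
    # All (positive-from-group, negative) pairs; fraction ranked correctly.
    return float((scores[pos][:, None] > scores[neg][None, :]).mean())

rng = np.random.default_rng(0)
scores = rng.normal(size=100)
labels = rng.integers(0, 2, size=100)
groups = rng.integers(0, 2, size=100)
print(group_pairwise_accuracy(scores, labels, groups, pos_group=0),
      group_pairwise_accuracy(scores, labels, groups, pos_group=1))
```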
    Shape Constraints for Set Functions
    Andrew Cotter
    Maya R. Gupta
    Heinrich Jiang
    Erez Louidor
    Jim Muller
    Taman Narayan
    Tao Zhu
    International Conference on Machine Learning (2019) (to appear)
Abstract: Set functions predict a label from a permutation-invariant, variable-size collection of feature vectors. We propose making set functions more understandable and better regularized by capturing domain knowledge through shape constraints. We show how prior work on monotonic constraints can be adapted to set functions. We then propose two new shape constraints designed to generalize the conditioning role of weights in a weighted mean. We show how one can train standard functions and set functions that satisfy these shape constraints with a deep lattice network. We also propose a non-linear estimation strategy we call the semantic feature engine, which uses set functions with the proposed shape constraints to estimate labels for compound sparse categorical features. Experiments on real-world data show accuracy similar to that of deep sets or deep neural networks, while providing guarantees on model behavior that make the models easier to explain and debug.
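The sketch below is only meant to illustrate what a set function is: a permutation-invariant map from a variable-size collection of feature vectors to a label, here using a weighted mean as the pooling step. The weight and readout functions are placeholders, not the paper's deep lattice network.

```python
import numpy as np

def set_function(vectors, weight_fn, readout_fn):
    """vectors: (n, d) array with variable n; output is independent of row order."""
    w = np.array([weight_fn(v) for v in vectors])             # per-item weights
    pooled = (w[:, None] * vectors).sum(axis=0) / w.sum()     # weighted-mean pooling
    return readout_fn(pooled)

items = np.array([[1.0, 2.0], [3.0, 0.5], [0.0, 1.0]])
f = set_function(items,
                 weight_fn=lambda v: 1.0 + v[0] ** 2,         # illustrative placeholder
                 readout_fn=lambda p: float(p.sum()))
print(f)  # same value for any permutation of the rows of `items`
```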
    Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
    Andrew Cotter
    Heinrich Jiang
    Taman Narayan
    Seungil You
    Karthik Sridharan
    Maya R. Gupta
    Journal of Machine Learning Research (2019) (to appear)
Abstract: Machine learning models often need to satisfy many real-world policy goals and capture certain kinds of side information. We show that many such goals can be expressed mathematically as constraints on the model’s predictions on the data, which we call rate constraints. In this paper, we study the specific problem of training non-convex models subject to these rate constraints (which are non-convex and non-differentiable), and the general problem of constrained optimization of possibly non-convex objectives with possibly non-convex and non-differentiable constraints. In the non-convex setting, the Lagrangian may not have an equilibrium to converge to, so the standard approach of Lagrange multipliers may fail: a deterministic solution may not even exist. Furthermore, if the constraints are non-differentiable, then by definition one cannot optimize the Lagrangian with gradient-based methods. To address these issues, we present the proxy-Lagrangian, which leads to an algorithm that produces a stochastic classifier with theoretical guarantees by playing a two-player non-zero-sum game. The first player minimizes external regret in terms of a differentiable relaxation of the constraints, and the second player enforces the original constraints by minimizing swap regret. This leads to a solution concept we call a semi-coarse correlated equilibrium, which, interestingly, we show corresponds to an approximately optimal and feasible solution of the constrained optimization problem. We then give a procedure that shrinks the randomized solution down to a mixture of at most m + 1 deterministic solutions. This culminates in end-to-end procedures that can provably solve non-convex constrained optimization problems with possibly non-differentiable and non-convex constraints. We provide extensive experimental results on benchmark and real-world problems, enforcing a broad range of policy goals, including different fairness metrics and other goals on accuracy, coverage, recall, and churn.
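The following is a heavily simplified sketch of the flavor of this approach, not the paper's swap-regret algorithm: a rate constraint ("false positive rate at most 10%") is enforced on a toy linear classifier by having the model update use a differentiable hinge surrogate of the rate, while the multiplier update uses the original, non-differentiable rate. All data and step sizes are illustrative.

```python
# Heavily simplified sketch: rate constraint "FPR <= 0.10" on a toy classifier.
# Model step uses a differentiable hinge surrogate of the false positive rate;
# multiplier step uses the true 0/1 rate.
import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(500, 3)), np.ones((500, 1))])   # last column = bias
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(float)
neg = y == 0
target_fpr = 0.10

def true_fpr(w):                       # actual rate: non-differentiable in w
    return float((X[neg] @ w > 0).mean())

w, lam = np.zeros(4), 0.0
for _ in range(2000):
    s = X @ w
    p = 1.0 / (1.0 + np.exp(-s))
    g_loss = X.T @ (p - y) / len(y)                       # logistic-loss gradient
    active = (1.0 + s[neg]) > 0                           # where the hinge is active
    g_hinge = X[neg][active].sum(axis=0) / neg.sum()      # gradient of surrogate FPR
    w -= 0.05 * (g_loss + lam * g_hinge)                  # model player: proxy constraint
    lam = max(0.0, lam + 0.5 * (true_fpr(w) - target_fpr))  # multiplier player: true rate
print("false positive rate:", true_fpr(w))
```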
    Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints
    Andrew Cotter
    Maya Gupta
    Heinrich Jiang
    Nathan Srebro
    Karthik Sridharan
    Blake Woodworth
    Seungil You
    International Conference on Machine Learning (2019) (to appear)
Abstract: Classifiers can be trained with data-dependent constraints to satisfy fairness goals, reduce churn, achieve a targeted false positive rate, or meet other policy goals. We study the generalization performance of such constrained optimization problems, in terms of how well the constraints are satisfied at evaluation time, given that they are satisfied at training time. To improve generalization, we frame the problem as a two-player game where one player optimizes the model parameters on a training dataset, and the other player enforces the constraints on an independent validation dataset. We build on recent work in two-player constrained optimization to show that if one uses this two-dataset approach, then constraint generalization can be significantly improved. As we illustrate experimentally, this approach works not only in theory, but also in practice.
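A minimal sketch of the two-dataset idea, with illustrative data and a coverage-style constraint: the model player takes gradient steps on the training split, while the constraint player updates the multiplier using the constraint measured on a held-out validation split.

```python
# Minimal sketch of the two-dataset game: parameters are updated on the
# training split, the constraint multiplier on a held-out validation split.
# Illustrative constraint: predict positive on at least 70% of examples.
import numpy as np

rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(600, 2)), np.ones((600, 1))])   # last column = bias
y = (X[:, 0] > 0).astype(float)
Xtr, ytr, Xval = X[:400], y[:400], X[400:]
min_coverage = 0.70

def coverage(w, data):                 # fraction of examples predicted positive
    return float((data @ w > 0).mean())

w, lam = 0.01 * rng.normal(size=3), 0.0
for _ in range(1500):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w)))
    g_loss = Xtr.T @ (p - ytr) / len(ytr)        # logistic loss, TRAINING split
    g_cov = Xtr.T @ (p * (1 - p)) / len(ytr)     # gradient of a sigmoid surrogate of coverage
    w -= 0.05 * (g_loss - lam * g_cov)           # model player
    # constraint player: true constraint measured on the VALIDATION split
    lam = max(0.0, lam + 0.5 * (min_coverage - coverage(w, Xval)))
print("validation coverage:", coverage(w, Xval))
```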