# Andres Munoz Medina

Authored Publications



A Joint Exponential Mechanism for Differentially Private Top-k

Jenny Gillenwater

Monica Ribero Diaz

International Conference on Machine Learning (ICML) 2022

We present a differentially private algorithm for releasing the sequence of $k$ elements with the highest counts from a data domain of $d$ elements. The algorithm is a "joint" instance of the exponential mechanism, and its output space consists of all $O(d^k)$ length-$k$ sequences. Our main contribution is a method to sample this exponential mechanism in time $O(dk\log(k) + d\log(d))$ and space $O(dk)$. Experiments show that this approach outperforms existing pure differentially private methods and often improves upon even approximate differentially private methods for moderate $k$.
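The joint mechanism samples over all length-$k$ sequences; as a simpler point of comparison, below is a minimal sketch of the standard exponential mechanism for releasing a single top element, with utility equal to the count (sensitivity 1). The function name and toy counts are illustrative, not from the paper:

```python
import math
import random

def exp_mech_top1(counts, epsilon, rng=random.Random(0)):
    """Standard exponential mechanism with utility = count
    (sensitivity 1): sample item i with probability proportional
    to exp(epsilon * count_i / 2)."""
    m = max(counts.values())  # shift by the max for numerical stability
    weights = {k: math.exp(epsilon * (v - m) / 2) for k, v in counts.items()}
    r = rng.random() * sum(weights.values())
    for item, w in weights.items():
        r -= w
        if r <= 0:
            return item
    return item  # guard against floating-point rounding

winner = exp_mech_top1({"a": 100, "b": 10, "c": 1}, epsilon=1.0)
```

With a large gap between the top count and the rest, the mechanism returns the true top element with overwhelming probability; the joint mechanism of the paper extends this idea to whole length-$k$ sequences without paying the naive $O(d^k)$ sampling cost.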

Scalable Differentially Private Clustering via Hierarchically Separated Trees

Chris Schwiegelshohn

David Saulpic

2022 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2022) (to appear)

We study the private $k$-median and $k$-means clustering problem in $d$ dimensional Euclidean space.
By leveraging tree embeddings, we give an efficient and easy-to-implement algorithm that is empirically competitive with state-of-the-art non-private methods.
We prove that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee. (The dimension term, $d$, can be replaced with $O(\log k)$ using standard dimension-reduction techniques.) Although this worst-case guarantee is weaker than that of state-of-the-art private clustering methods, the algorithm we propose is practical, runs in near-linear, $\tilde{O}(nkd)$, time, and scales to tens of millions of points. We also show that our method is amenable to parallelization in large-scale distributed computing environments; in particular, our private algorithms can be implemented in a logarithmic number of MPC rounds in the sublinear-memory regime.
Finally, we complement our theoretical analysis with an empirical evaluation demonstrating the algorithm's efficiency and accuracy relative to other private clustering baselines.
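As a rough illustration of the tree-embedding flavor (not the paper's algorithm), the sketch below bins 1-d points into one level of a randomly shifted grid, adds Laplace noise to each per-cell count (sensitivity 1 under add/remove of a point), and returns the heaviest cells' midpoints as candidate centers. All names and parameters are hypothetical:

```python
import math
import random

def private_grid_centers(points, k, epsilon, cell=1.0, rng=random.Random(0)):
    """1-d sketch: count points per cell of a randomly shifted grid
    (one level of a tree embedding), add Laplace noise to each count,
    and return the midpoints of the k noisily heaviest cells."""
    shift = rng.random() * cell
    counts = {}
    for x in points:
        b = int((x + shift) // cell)
        counts[b] = counts.get(b, 0) + 1

    def laplace(scale):
        # Inverse-CDF sampling of a Laplace(0, scale) variate.
        u = rng.random() - 0.5
        return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

    noisy = {b: c + laplace(1.0 / epsilon) for b, c in counts.items()}
    top = sorted(noisy, key=noisy.get, reverse=True)[:k]
    return [(b + 0.5) * cell - shift for b in top]

centers = private_grid_centers([0.0] * 50 + [10.0] * 50, k=2, epsilon=5.0)
```

With two well-separated clusters and a reasonable privacy budget, the recovered centers land within one cell width of the true cluster locations; the paper's method applies this noisy-count idea across all levels of a hierarchically separated tree in $d$ dimensions.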

Adaptation Based on Generalized Discrepancy

Journal of Machine Learning Research, vol. 20 (2019), pp. 1-30

We present a new algorithm for domain adaptation that improves upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that used by DM, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.
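For contrast with the hypothesis-dependent reweighting described above, here is a minimal sketch of the simpler fixed-reweighting setup: weighted least squares, where the per-example weights are chosen before any hypothesis is fit. The helper name and toy data are hypothetical:

```python
def weighted_linreg(xs, ys, ws):
    """Closed-form 1-d weighted least squares: minimizes
    sum_i w_i * (y_i - (a * x_i + b))**2 over slope a and intercept b."""
    W = sum(ws)
    xm = sum(w * x for w, x in zip(ws, xs)) / W
    ym = sum(w * y for w, y in zip(ws, ys)) / W
    cov = sum(w * (x - xm) * (y - ym) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - xm) ** 2 for w, x in zip(ws, xs))
    a = cov / var
    return a, ym - a * xm

# In adaptation, ws could up-weight training points resembling the target
# domain; uniform weights recover ordinary least squares.
a, b = weighted_linreg([0, 1, 2, 3], [1, 3, 5, 7], [1, 1, 1, 1])
```

The paper's point is that fixing `ws` in advance is suboptimal: generalized discrepancy couples the weights to the hypothesis being sought, while keeping the overall problem convex.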

Testing Incentive Compatibility in Display Ad Auctions

Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018

Consider a buyer participating in a repeated auction, such as those prevalent in display advertising. How would she test whether the auction is incentive compatible? To bid effectively, she is interested in whether the auction is single-shot incentive compatible (a pure second-price auction with a fixed reserve price) and also dynamically incentive compatible (her bids are not used to set future reserve prices). In this work we develop tests based on simple bid perturbations that a buyer can use to answer these questions, with a focus on dynamic incentive compatibility.

There are many potential A/B testing setups that one could use, but we find that many natural experimental designs are, in fact, flawed. For instance, we show that additive perturbations can lead to paradoxical results, where higher bids lead to lower optimal reserve prices. We precisely characterize this phenomenon and show that reserve prices are only guaranteed to be monotone for distributions satisfying the Monotone Hazard Rate (MHR) property. The experimenter must also decide how to split traffic to apply systematic perturbations. It is tempting to randomize this split, but we demonstrate empirically that unless the perturbations are aligned with the partitions the seller uses to compute reserve prices, the results are guaranteed to be inconclusive.

We validate our results with experiments on real display auction data and show that a buyer can quantify both single-shot and dynamic incentive compatibility even under realistic conditions where only the cost of the impression is observed (as opposed to the exact reserve price). We analyze the cost of running such experiments, exposing trade-offs between test accuracy, cost, and underlying market dynamics.
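The reserve-price monotonicity discussion concerns how a seller's revenue-optimal reserve responds to observed bids. A minimal sketch of the empirical monopoly reserve, the price $p$ maximizing $p \cdot \Pr[\text{bid} \ge p]$ over a sample of bids; the function name and toy bids are illustrative:

```python
def empirical_reserve(bids):
    """Empirical monopoly reserve: the bid value p that maximizes
    expected revenue p * Pr[bid >= p] over the observed sample."""
    n = len(bids)
    return max(bids, key=lambda p: p * sum(b >= p for b in bids) / n)

reserve = empirical_reserve([1, 2, 3, 4, 5])
```

A perturbation test could compare the seller's behavior on bid samples with and without a systematic perturbation; the paper shows that for non-MHR distributions the optimal reserve computed this way can paradoxically *decrease* when bids increase.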

Learning mobile phone battery consumptions

Ashish Sharma

Paul Eastham

Workshop on On Device Intelligence (2016)

We introduce a novel, data-driven way of predicting the battery consumption of apps. The state-of-the-art models used to blame battery consumption on apps are based on micro-benchmark experiments. These experiments are carried out in controlled setups where one can measure how much battery is consumed by each internal resource (CPU, Bluetooth, Wi-Fi, etc.). The battery blame allocated to an app is simply the sum of the blames of the resources the app consumes. We argue that models of this type do not capture the way phones work "in the wild" and propose instead to train a regression model using data collected from logs. We show that this type of learning is correct in the sense that, under some assumptions, we can recover the true battery discharge rate of each component. We present experimental results in which we consistently make better predictions than a model trained on micro-benchmarks.
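The recovery claim can be illustrated with a toy version of the regression: when the logged drain is exactly linear in per-component usage, least squares recovers the per-component discharge rates. The component names, usage rows, and rates below are made up for illustration:

```python
def fit_component_rates(usage, drain):
    """Least-squares fit of drain ~ r_cpu * u_cpu + r_wifi * u_wifi
    by solving the 2x2 normal equations directly."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for (u1, u2), d in zip(usage, drain):
        s11 += u1 * u1; s12 += u1 * u2; s22 += u2 * u2
        b1 += u1 * d; b2 += u2 * d
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det

# Hypothetical logs: (cpu_seconds, wifi_seconds) per interval, generated
# with true rates r_cpu = 2.0 and r_wifi = 0.5 battery units per second.
usage = [(1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (1.0, 3.0)]
drain = [2.0 * u1 + 0.5 * u2 for u1, u2 in usage]
r_cpu, r_wifi = fit_component_rates(usage, drain)
```

In practice the logged drain is noisy and the usage features are correlated, which is where the paper's identifiability assumptions come in; the sketch only shows the noiseless base case.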

Adaptation algorithm and theory based on generalized discrepancy

Proceedings of the 21st ACM Conference on Knowledge Discovery and Data Mining (KDD 2015)
