Google Research

On Approximability of Clustering Problems Without Candidate Centers

  • Vincent Cohen-Addad
  • Euiwoong Lee
  • Karthik C. S.
Symposium on Discrete Algorithms (SODA '21), 2021

Abstract

The k-means objective is arguably the most widely used cost function for modeling clustering tasks in a metric space. In practice and historically, k-means is thought of in a continuous setting, namely where the centers can be located anywhere in the metric space. For example, the popular Lloyd’s heuristic locates a center at the mean of each cluster.

Despite persistent efforts on understanding the approximability of k-means, and other classic clustering problems such as k-median and k-minsum, our knowledge of the hardness of approximation factors of these problems remains quite poor. In this paper, we significantly improve upon the hardness of approximation factors known in the literature for these objectives. We show that if the input lies in a general metric space, it is NP-hard to approximate:

  • Continuous k-median to a factor of 2 − o(1); this improves upon the previous inapproximability factor of 1.36 shown by Guha and Khuller (J. Algorithms ’99).
  • Continuous k-means to a factor of 4 − o(1); this improves upon the previous inapproximability factor of 2.10 shown by Guha and Khuller (J. Algorithms ’99).
  • k-minsum to a factor of 1.415; this improves upon the APX-hardness shown by Guruswami and Indyk (SODA ’03).

Our results shed new and perhaps counter-intuitive light on the differences between clustering problems in the continuous setting versus the discrete setting (where the candidate centers are given as part of the input).
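To make the continuous setting concrete, here is a minimal sketch of Lloyd’s heuristic mentioned above, in Euclidean space. The function name, data, and parameters are our own illustrative choices, not from the paper; the key point is the update step, where a center moves to the mean of its cluster and therefore need not coincide with any input point.

```python
import random

def lloyd_kmeans(points, k, iters=20, seed=0):
    """Illustrative Lloyd's heuristic: alternate assignment and mean-update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # arbitrary distinct input points as initial centers
    for _ in range(iters):
        # Assignment step: attach each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        # This is exactly the continuous setting: the mean is generally
        # not one of the input points.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(coord) / len(c) for coord in zip(*c))
    return centers

# Two well-separated pairs of points; the centers converge to the two cluster means.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(sorted(lloyd_kmeans(pts, 2)))  # → [(0.0, 0.5), (10.0, 10.5)]
```

Each iteration can only decrease the k-means cost, but the heuristic offers no approximation guarantee, which is one reason the worst-case approximability studied in the paper is a separate question.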
