Hardness of Learning Boolean Functions from Label Proportions

Venkatesan Guruswami
Proc. FSTTCS (2023)

Abstract

In recent years the framework of learning from label proportions (LLP) has been gaining importance in machine learning. In this framework, the training examples are aggregated into subsets or bags, and only the average label per bag is available for learning an example-level predictor. This generalizes traditional PAC learning, which is the special case of unit-sized bags. The computational learning aspects of LLP were studied in recent works [Saket 21, 22], which gave algorithms and hardness results for learning halfspaces in the LLP setting. In this work we focus on the intractability of LLP learning of boolean functions.
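To make the setup concrete, here is a minimal sketch in Python of bags, bag proportions, and the natural notion of a hypothesis "satisfying" a bag (reproducing its given label proportion). The names and this reading of bag satisfaction are ours, for illustration; the paper contains the formal definitions.

```python
# Minimal sketch of the LLP setup, assuming a hypothesis "satisfies" a bag
# when it reproduces the bag's label proportion. All names are ours.

def bag_proportion(labels):
    """Average label of a bag: the only supervision the learner sees."""
    return sum(labels) / len(labels)

def satisfies(h, bag, proportion):
    """Does hypothesis h reproduce this bag's given label proportion?"""
    return bag_proportion([h(x) for x in bag]) == proportion

# Unit-sized bags recover ordinary PAC learning: satisfying the bag [x]
# with proportion y is exactly predicting label y on example x.
h = lambda x: x[0] | x[1]                   # an OR hypothesis on 2 bits
print(satisfies(h, [(0, 1)], 1.0))          # True: unit bag, correct label
print(satisfies(h, [(0, 0), (1, 1)], 0.5))  # True: proportions match
```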

Our first result shows that given a collection of bags of size at most 2 which are consistent with an OR function, it is NP-hard to find a CNF of constantly many clauses which "satisfies" any constant fraction of the bags. This is in contrast with the work of [Saket 21], which gave a (2/5)-approximation for learning ORs using a halfspace; our result thus provides a separation between constant-clause CNFs (in particular, ORs) and halfspaces as hypotheses for LLP learning ORs.
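The sketch below illustrates this setting: size-at-most-2 bags consistent with an OR, checked against a small CNF hypothesis. The instance data and the CNF are invented for illustration; the hardness result concerns finding a good constant-clause CNF for an adversarial bag collection.

```python
# Hypothetical size-<=2 bags over {0,1}^3, consistent with f(x) = x0 OR x1,
# checked against a 2-clause CNF. Data and hypothesis are invented here.

f = lambda x: x[0] | x[1]                        # target OR

bags = [[(0, 0, 1), (1, 0, 0)],                  # proportion 1/2 under f
        [(1, 1, 0)],                             # unit bag, proportion 1
        [(0, 1, 1), (1, 1, 1)]]                  # proportion 1
labeled = [(b, sum(f(x) for x in b) / len(b)) for b in bags]

cnf = lambda x: (x[0] | x[1]) & (x[1] | x[2])    # a 2-clause CNF hypothesis

for bag, prop in labeled:
    pred = sum(cnf(x) for x in bag) / len(bag)
    print(bag, "given:", prop, "cnf:", pred, "satisfied:", pred == prop)
```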

Next, we prove the hardness of satisfying more than a (1/2 + o(1))-fraction of such bags using a t-DNF (i.e., a DNF in which each term has at most t literals) for any constant t. In usual PAC learning, such a hardness result was known [Khot-Saket 08] only for learning noisy ORs.

We also study the learnability of parities and show that it is NP-hard to satisfy, using a parity, more than a (q/2^{q-1} + o(1))-fraction of q-sized bags which are consistent with a parity, while a random parity achieves a (1/2^{q-1})-approximation.
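A toy Monte Carlo sketch of the parity setting follows: bags labeled by a hidden parity, with a uniformly random parity measured on the fraction of bags it satisfies. The bag distribution and parameters are our own choices; since the (1/2^{q-1}) guarantee is worst-case over bag collections, uniformly random bags typically look easier than the bound suggests.

```python
# Illustrative simulation only; the paper's formal bag model may differ.
import random

n, q, num_bags = 12, 4, 2000

def parity(S, x):
    # Parity of x restricted to the coordinate set S.
    return sum(x[i] for i in S) % 2

hidden = [0, 3, 7]                     # hidden target parity (our choice)
bags = []
for _ in range(num_bags):
    bag = [[random.randint(0, 1) for _ in range(n)] for _ in range(q)]
    bags.append((bag, sum(parity(hidden, x) for x in bag) / q))

# A "random parity": a uniformly random nonempty coordinate subset.
guess = []
while not guess:
    guess = [i for i in range(n) if random.random() < 0.5]

hits = sum(sum(parity(guess, x) for x in bag) / q == prop
           for bag, prop in bags)
print(f"satisfied {hits / num_bags:.3f} of bags; "
      f"guarantee 1/2^(q-1) = {1 / 2**(q-1):.3f}, "
      f"hardness bound ~ q/2^(q-1) = {q / 2**(q-1):.3f}")
```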