Aravindan Raghuveer

Authored Publications
    Generalization and Learnability in Multiple Instance Regression
    Lorne Applebaum
    Ashwinkumar Badanidiyuru Varadaraja
    Chandan Giri
    Proc. UAI (2024)
    Abstract: Multiple instance regression (MIR), introduced by Ray & Page (2001) as an analogue of multiple instance learning (MIL), is a setting in which we are given bags of feature-vectors (instances) and, for each bag, a bag-label that matches the label of one (unknown) primary instance from that bag. The goal is to compute a hypothesis regressor consistent with the underlying instance-labels. A natural approach is to find the primary-instance assignment and regressor that jointly optimize (say) the MSE loss on the bags, though no formal generalization guarantees were previously known. Our work is the first to prove generalization error bounds for MIR when the bags are drawn i.i.d. at random: with high probability, any MIR regressor with low error on sampled bags also has low error on the underlying instance-label distribution. We next study the complexity of linear regression on MIR bags, shown to be NP-hard in general by Ray & Page (2001), who nevertheless left open the possibility of arbitrarily good approximations. Significantly strengthening previous work, we prove a strong inapproximability bound: even if there exists a zero-loss MIR linear regressor on a collection of 2-sized bags with labels in [−1, 1], it is NP-hard to find an MIR linear regressor with loss < C for some absolute constant C > 0. Our work also proposes two novel model training methods on MIR bags based on (i) a weighted assignment loss and (ii) EM pseudo-labeling, handling the previously unstudied case of overlapping bags. Extensive empirical evaluations on synthetic and real-world datasets show that our methods outperform the baseline MIR methods.
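    A minimal NumPy sketch of the classical alternating heuristic the abstract alludes to (assign each bag label to its best-fitting instance, then refit the regressor); this illustrates the baseline idea only, not the paper's weighted-assignment or EM pseudo-labeling methods, and the function name and loop structure are assumptions:

```python
import numpy as np

def fit_mir_linear(bags, bag_labels, n_iters=20):
    """Toy alternating heuristic for multiple instance regression (MIR).

    bags: list of (n_i, d) arrays of instances; bag_labels: array of shape (len(bags),).
    Alternates between (i) picking, per bag, the instance whose prediction is
    closest to the bag label and (ii) refitting a least-squares regressor on
    those picked instances.
    """
    d = bags[0].shape[1]
    w = np.zeros(d)
    for _ in range(n_iters):
        # Step (i): assign each bag label to the best-fitting (primary) instance.
        X, y = [], []
        for B, label in zip(bags, bag_labels):
            preds = B @ w
            primary = np.argmin((preds - label) ** 2)
            X.append(B[primary])
            y.append(label)
        X, y = np.asarray(X), np.asarray(y)
        # Step (ii): ordinary least squares on the current assignment.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```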
    Fairness under Covariate Shift: Improving Fairness-Accuracy tradeoff with few Unlabeled Test Samples
    Shreyas Havaldar
    Jatin Chauhan
    Jay Nandy
    The 38th Annual AAAI Conference on Artificial Intelligence (2024)
    Abstract: Covariate shift in the test data is a common practical phenomenon that can significantly degrade both the accuracy and the fairness performance of a model. Ensuring fairness across different sensitive groups under covariate shift is of paramount importance due to societal implications, as in criminal justice. We operate in the unsupervised regime where only a small set of unlabeled test samples, along with a labeled training set, is available. Towards improving fairness under this highly challenging yet realistic scenario, we make three contributions. The first is a novel composite weighted-entropy-based objective for prediction accuracy, optimized along with a representation-matching loss for fairness. We experimentally verify that optimizing with our loss formulation outperforms a number of state-of-the-art baselines in the Pareto sense with respect to the fairness-accuracy tradeoff on several standard datasets. Our second contribution is a new setting we term Asymmetric Covariate Shift that, to the best of our knowledge, has not been studied before. Asymmetric covariate shift occurs when the distribution of covariates of one group shifts significantly compared to the other groups, which happens when a dominant group is over-represented. While this setting is extremely challenging for current baselines, we show that our proposed method significantly outperforms them. Our third contribution is theoretical: we show that our weighted entropy term, along with the prediction loss on the training set, approximates the test loss under covariate shift. Empirically and through formal sample complexity bounds, we show that this approximation to the unseen test loss does not depend on the importance sampling variance which affects many other baselines.
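    As a rough sketch of how a composite objective of this kind could be assembled, the snippet below combines a supervised loss, a prediction-entropy term on unlabeled test samples, and a simple group representation-matching penalty; the weighting scheme, the assumption that the model returns (logits, features), and all hyperparameters are illustrative rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def fairness_covariate_shift_loss(model, x_train, y_train, group_train,
                                  x_test_unlabeled, lam_ent=0.1, lam_fair=1.0):
    """Illustrative composite objective (not the paper's exact formulation).

    Combines (a) supervised cross-entropy on labeled training data,
    (b) a prediction-entropy term on unlabeled test samples, encouraging
    confident predictions under covariate shift, and (c) a simple
    representation-matching penalty between two sensitive groups.
    """
    logits_tr, feats_tr = model(x_train)      # assumes model returns (logits, features)
    sup_loss = F.cross_entropy(logits_tr, y_train)

    logits_te, _ = model(x_test_unlabeled)
    probs_te = logits_te.softmax(dim=-1)
    entropy = -(probs_te * probs_te.clamp_min(1e-8).log()).sum(dim=-1).mean()

    # Match mean representations of the two sensitive groups on training data.
    mu0 = feats_tr[group_train == 0].mean(dim=0)
    mu1 = feats_tr[group_train == 1].mean(dim=0)
    fair_loss = (mu0 - mu1).pow(2).sum()

    return sup_loss + lam_ent * entropy + lam_fair * fair_loss
```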
    Abstract: Learning from Label Proportions (LLP) is a learning problem in which only aggregate-level labels are available for groups of instances, called bags, during training, and the aim is to get the best performance at the instance level on the test data. This setting arises in domains like advertising and medicine due to privacy considerations. We propose a novel algorithmic framework for this problem that iteratively performs two main steps. In the first step (Pseudo Labeling), we define a Gibbs distribution over binary instance labels that incorporates (a) covariate information, through the constraint that instances with similar covariates should have similar labels, and (b) the bag-level aggregated label. We then use Belief Propagation (BP) to marginalize the Gibbs distribution and obtain pseudo labels. In the second step (Embedding Refinement), we use the pseudo labels to provide supervision for a learner that yields a better embedding. We then iterate on the two steps, using the second step's embeddings as new covariates for the next iteration. In the final iteration, a classifier is trained using the pseudo labels. Our algorithm displays strong gains against several SOTA baselines for the LLP binary classification problem on various dataset types: Small Tabular, Large Tabular and Images. We achieve these improvements with minimal computational overhead above standard supervised learning, since Belief Propagation remains cheap even for large bag sizes and up to a million samples.
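    The skeleton below illustrates the iterate-and-refine structure described above; the Belief Propagation step is replaced by a deliberately crude stand-in (mark the top-k scored instances per bag as positive), and `train_model` is a hypothetical helper supplied by the caller, so this is a sketch of the control flow only, not the published algorithm:

```python
import numpy as np

def llp_iterative_pseudo_labeling(bags, bag_label_counts, train_model, n_rounds=3):
    """Skeleton of an iterate-and-refine LLP loop (highly simplified).

    The real pipeline marginalizes a Gibbs distribution with Belief Propagation;
    here the pseudo-labeling step is a crude stand-in that marks the top-k scored
    instances in each bag as positive, where k is the bag's positive-label count.
    `train_model(X, y)` is assumed to return a callable score function.
    """
    score = lambda X: np.zeros(len(X))  # uninformative scores in round 0
    for _ in range(n_rounds):
        X_all, y_all = [], []
        for B, k in zip(bags, bag_label_counts):
            s = score(B)
            pseudo = np.zeros(len(B), dtype=int)
            pseudo[np.argsort(-s)[:k]] = 1   # crude proxy for the BP marginals
            X_all.append(B)
            y_all.append(pseudo)
        X_all = np.concatenate(X_all)
        y_all = np.concatenate(y_all)
        score = train_model(X_all, y_all)    # refit; scores/embeddings improve
    return score
```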
    Abstract: As large neural models become increasingly accurate and powerful, they have raised privacy and transparency concerns around data usage. Data platforms, regulations and user expectations are therefore rapidly evolving, leading to privacy being enforced via aggregation. We focus on the use case of online advertising, where the emergence of aggregate data is imminent and can significantly impact the multi-billion-dollar industry. In aggregated datasets, labels are assigned to groups of data points rather than individual data points. This leads to a weakly supervised formulation, Learning from Label Proportions, where a model is trained on groups (a.k.a. bags) of instances and their corresponding label proportions to predict labels for individual instances. While learning on aggregate data due to privacy concerns is becoming increasingly popular, there is no large-scale benchmark for measuring performance and guiding improvements on this important task. We propose LLP-Bench, a web-scale benchmark with ~70 datasets and 45 million datapoints. To the best of our knowledge, LLP-Bench is the first large-scale tabular LLP benchmark with extensive diversity in its constituent datasets, realistic in terms of the sponsored-search datasets used and the aggregation mechanisms followed. Through more than 3000 experiments we compare the performance of 9 SOTA methods in detail. To the best of our knowledge, this is the first study that compares diverse approaches in such depth.
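    For readers unfamiliar with the LLP formulation the benchmark targets, a generic bag-level surrogate loss (match predicted label proportions to the given ones) looks roughly like the following; individual LLP-Bench baselines differ, so treat this as an illustrative formulation only:

```python
import torch
import torch.nn.functional as F

def bag_proportion_loss(instance_logits, bag_index, bag_proportions):
    """Canonical LLP surrogate: match predicted and given bag-level label proportions.

    instance_logits: (N,) raw scores for N instances across all bags
    bag_index:       (N,) long tensor mapping each instance to its bag id in [0, B)
    bag_proportions: (B,) given positive-label proportion per bag
    """
    probs = torch.sigmoid(instance_logits)
    n_bags = bag_proportions.shape[0]
    # Average predicted positive probability per bag.
    bag_sums = torch.zeros(n_bags, dtype=probs.dtype).scatter_add_(0, bag_index, probs)
    bag_sizes = torch.zeros(n_bags, dtype=probs.dtype).scatter_add_(
        0, bag_index, torch.ones_like(probs))
    pred_props = bag_sums / bag_sizes.clamp_min(1.0)
    return F.binary_cross_entropy(pred_props.clamp(1e-6, 1 - 1e-6), bag_proportions)
```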
    Abstract: We study the problem of adversarial attack and robustness on tabular datasets with discrete features. The discrete features of a tabular dataset represent high-level, meaningful concepts with different vocabularies, leading to a requirement of non-uniform robustness. Further, the notion of distance between tabular input instances is not well defined, making the problem of producing adversarial examples with minor perturbations qualitatively more challenging than in existing settings. Our paper therefore defines the notion of distance through the lens of feature embeddings learnt to represent the discrete features. We then formulate the task of generating adversarial examples as a binary set-selection problem under non-uniform feature importance. Next, we propose an efficient approximate gradient-descent-based algorithm, called the Discrete Non-uniform Approximation (DNA) attack, that reformulates the problem into a continuous domain to solve the original optimization problem for generating adversarial examples. We demonstrate the effectiveness of the proposed DNA attack using two large real-world discrete tabular datasets from e-commerce domains for binary classification, where the datasets are heavily biased toward one class. We also analyze the challenges that existing adversarial training frameworks face on such datasets under our DNA attack.
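    The relax-optimize-project pattern that the continuous reformulation refers to can be sketched as below for a single discrete feature; this is a generic illustration of the idea, not the DNA attack itself, and the `model` interface and optimizer settings are assumptions:

```python
import torch

def relaxed_discrete_attack(model, x_other, vocab_embeds, steps=50, lr=0.1):
    """Generic relax-optimize-project sketch for attacking one discrete feature.

    x_other:      (d,) embeddings of the untouched features, concatenated
    vocab_embeds: (V, e) embedding table of the attacked feature's vocabulary
    model:        callable taking a (1, d + e) tensor and returning a scalar
                  attack objective to maximize (an assumed interface).
    """
    logits = torch.zeros(vocab_embeds.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        weights = torch.softmax(logits, dim=0)       # continuous relaxation
        soft_feature = weights @ vocab_embeds        # convex mix of vocab embeddings
        x_adv = torch.cat([x_other, soft_feature])
        loss = -model(x_adv.unsqueeze(0)).squeeze()  # ascend the attack objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Project back to a single discrete vocabulary entry.
    return int(torch.argmax(logits))
```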
    Abstract: Learning from label proportions (LLP) is a generalization of supervised learning in which the training data is available as sets or bags of feature-vectors (instances) along with the average instance-label of each bag. The goal is to train a good instance classifier. While most previous works in LLP have focused on training models on such training data, computational learnability in LLP has only recently been explored by [Saket21, Saket22], who showed worst-case intractability of properly learning linear threshold functions (LTFs) from label proportions, while not ruling out efficient algorithms for this problem under distributional assumptions. In this work we show that it is indeed possible to efficiently learn LTFs using LTFs when given access to random bags of some label proportion in which feature-vectors are independently sampled from a fixed Gaussian distribution N(mu, Sigma), conditioned on the label assigned by the target LTF. Our method estimates a matrix by sampling pairs of feature-vectors from the bags with and without replacement, and we prove that the principal component of this matrix necessarily yields the normal vector of the LTF. For some special cases with N(0, I) we provide a simpler expectation-based algorithm. We include an experimental evaluation of our learning algorithms along with a comparison with those of [Saket21, Saket22] and random LTFs, demonstrating the effectiveness of our techniques.
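    In the spirit of the estimator described above, the sketch below compares second-moment statistics of instance pairs drawn from the same bag with and without replacement and takes the top eigenvector of their difference; the precise matrix construction and scaling used in the paper may differ, so this is an assumption-laden illustration only:

```python
import numpy as np

def estimate_normal_direction(bags, n_pairs=10000, rng=None):
    """Illustrative spectral estimator (not the published algorithm).

    Accumulates outer products of pair differences sampled from the same bag,
    once with replacement and once without, and returns the top eigenvector of
    the difference of the two averaged matrices.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = bags[0].shape[1]
    M_with, M_without = np.zeros((d, d)), np.zeros((d, d))
    for _ in range(n_pairs):
        B = bags[rng.integers(len(bags))]
        i, j = rng.integers(len(B), size=2)               # pair with replacement
        M_with += np.outer(B[i] - B[j], B[i] - B[j])
        i, j = rng.choice(len(B), size=2, replace=False)  # pair without replacement
        M_without += np.outer(B[i] - B[j], B[i] - B[j])
    diff = (M_with - M_without) / n_pairs
    eigvals, eigvecs = np.linalg.eigh(diff)
    return eigvecs[:, np.argmax(np.abs(eigvals))]         # principal direction
```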
    Bi-Phone: Modeling Inter Language Phonetic Influences in Text
    Ananya B. Sai
    Richard William Sproat
    Yuri Vasilevski
    James Ren
    Ambarish Jash
    Sukhdeep Sodhi
    ACL, Association for Computational Linguistics, Toronto, Canada (2023), 2580–2592
    Abstract: A large number of people are forced to use the Web in a language they have low literacy in due to technology asymmetries. Written text in the second language (L2) from such users often contains a large number of errors that are influenced by their native language (L1). We propose a method to mine phoneme confusions (sounds in L2 that an L1 speaker is likely to conflate) for pairs of L1 and L2. These confusions are then plugged into a generative model (Bi-Phone) for synthetically producing corrupted L2 text. Through human evaluations, we show that Bi-Phone generates plausible corruptions that differ across L1s and also have widespread coverage on the Web. We also corrupt the popular language understanding benchmark SuperGLUE with our technique (FunGLUE, for Phonetically Noised GLUE) and show that SoTA language understanding models perform poorly on it. We further introduce a new phoneme-prediction pre-training task which helps byte models recover performance close to their SuperGLUE levels. Finally, we release the FunGLUE benchmark to promote further research in phonetically robust language models. To the best of our knowledge, FunGLUE is the first benchmark to introduce L1-L2 interactions in text.
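    A toy corruption routine in the spirit of Bi-Phone is sketched below; it substitutes entries from a hand-written, character-level confusion table, whereas the actual system mines L1-specific confusions and applies them to phoneme sequences, so the table and probabilities here are purely illustrative:

```python
import random

# Toy stand-in for a mined phoneme-confusion table; Bi-Phone mines such confusions
# per (L1, L2) pair and applies them to phoneme sequences, not raw characters.
TOY_CONFUSIONS = {"v": ["w"], "th": ["d", "t"], "z": ["j"]}

def corrupt_text(text, confusions=TOY_CONFUSIONS, p=0.3, seed=0):
    """Generate plausibly 'phonetically' corrupted text (illustrative only)."""
    rng = random.Random(seed)
    out = text
    for src, alts in confusions.items():
        pieces = out.split(src)
        rebuilt = pieces[0]
        for piece in pieces[1:]:
            # With probability p, swap this occurrence for a confusable alternative.
            rebuilt += (rng.choice(alts) if rng.random() < p else src) + piece
        out = rebuilt
    return out

print(corrupt_text("they have seven zebras"))
```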
    Abstract: The unavailability of parallel corpora for training text style transfer (TST) models is a challenging yet common scenario. TST models also need to implicitly preserve content while transforming a source sentence into the target style. To tackle these problems, an intermediate representation is often constructed that is devoid of style while still preserving the meaning of the source sentence. In this work, we study the usefulness of the Abstract Meaning Representation (AMR) graph as this intermediate, style-agnostic representation. We posit that semantic notations like AMR are a natural choice for an intermediate representation. Hence, we propose the T-STAR model comprising two components, text-to-AMR and AMR-to-text. We ensure that the intermediate representation is style agnostic, and use style-aware pretraining to improve AMR-to-text performance. We show that the proposed model outperforms state-of-the-art TST models with improved content preservation and style accuracy numbers in both automatic and human evaluations.
    CoCoa : An Encoder-Decoder Model for Controllable Code-switched Generation
    Sneha Mondal
    Shreya Pathak
    Ritika Goyal
    Preethi Jyothi
    Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, December 7 - December 11, 2022
    Abstract: Code-switching has seen growing interest in recent years as an important multilingual NLP phenomenon. Generating code-switched text for data augmentation has been well explored. However, there is no prior work on generating code-switched text with fine-grained control over the degree of code-switching and the lexical choices used to convey formality. We present CoCoa, an encoder-decoder translation model that converts monolingual Hindi text to Hindi-English code-switched text with both encoder-side and decoder-side interventions to achieve fine-grained controllable generation. CoCoa can be invoked at test time to synthesize code-switched text that is simultaneously faithful to syntactic and lexical attributes relevant to code-switching. CoCoa outputs were subjected to rigorous subjective and objective evaluations. Human evaluations establish that our outputs are of superior quality while being faithful to the desired attributes. We show significantly improved BLEU scores when compared with human-generated code-switched references. Compared to competitive baselines, we show a 10% reduction in perplexity on a language modeling task and also demonstrate clear improvements on a downstream code-switched sentiment analysis task.
    Abstract: We study the weakly supervised learning problem of Learning from Label Proportions (LLP), where the goal is to learn an instance-level classifier using the proportions of the various class labels in a bag, a collection of input instances that can often be highly correlated. While representation learning for weakly supervised tasks has been found to be effective, it often requires domain knowledge. To the best of our knowledge, representation learning for tabular data (unstructured data containing both continuous and categorical features) in the LLP setting has not been studied. In this paper, we propose to learn diverse representations of instances within the same bag to effectively utilize the weak bag-level supervision. We propose a domain-agnostic LLP method, called "Self Contrastive Representation Learning for LLP" (SelfCLR-LLP), that incorporates a novel self-contrastive function as an auxiliary loss to learn representations on tabular data for LLP. We show that diverse representations for instances within the same bag aid efficient usage of the weak bag-level LLP supervision. We evaluate the proposed method through extensive experiments on real-world LLP datasets from e-commerce applications, demonstrating the effectiveness of SelfCLR-LLP.
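    One guess at what a within-bag self-contrastive auxiliary term could look like is sketched below: it penalizes high cosine similarity between embeddings of instances that share a bag; the paper's SelfCLR-LLP loss may be defined differently, and the temperature and normalization choices are assumptions:

```python
import torch
import torch.nn.functional as F

def self_contrastive_bag_loss(embeddings, bag_index, temperature=0.5):
    """Illustrative auxiliary loss pushing apart embeddings of instances in the
    same bag (a guess at a 'diverse within-bag representations' objective, not
    the paper's exact loss).

    embeddings: (N, e) instance embeddings; bag_index: (N,) long bag ids.
    """
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    same_bag = (bag_index.unsqueeze(0) == bag_index.unsqueeze(1)).float()
    same_bag.fill_diagonal_(0.0)                         # ignore self-similarity
    # Penalize high similarity among instances sharing a bag.
    n_pairs = same_bag.sum().clamp_min(1.0)
    return (sim * same_bag).sum() / n_pairs
```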