Hussein Hazimeh

See more at https://www.hazimeh.com
Authored Publications
    Benchmarking Robustness to Adversarial Image Obfuscations
    Florian Stimberg
    Yintao Liu
    Merve Kaya
    Cyrus Rashtchian
    Ariel Fuxman
    Mehmet Tek
    Advances in Neural Information Processing Systems (2023)
    Abstract: Automated content filtering and moderation is an important tool that allows online platforms to build thriving user communities that facilitate cooperation and prevent abuse. Unfortunately, resourceful actors try to bypass automated filters in a bid to post content that violates platform policies and codes of conduct. To reach this goal, these malicious actors obfuscate policy-violating content to prevent machine learning models from reaching the correct decision. In this paper, we invite researchers to tackle this specific issue and present a new image benchmark. This benchmark, based on ImageNet, simulates the type of obfuscations created by malicious actors. It goes beyond ImageNet-C and ImageNet-C-Bar by proposing general, drastic, adversarial modifications that preserve the original content intent. It aims to tackle a more common adversarial threat than the one considered by Lp-norm-bounded adversaries. Our hope is that this benchmark will encourage researchers to test their models and methods and to find new approaches that are more robust to these obfuscations.
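To make the evaluation protocol concrete, here is a minimal sketch of the kind of robustness measurement such a benchmark implies: apply each obfuscation to a batch of images and compare the classifier's predictions against its clean-image predictions. The model and transforms below are illustrative stand-ins; the benchmark's actual obfuscations and data loaders are not reproduced here.

```python
# Minimal sketch: measure how far a classifier's agreement with its clean predictions
# drops under each obfuscation. All components below are placeholders, not the benchmark's own.
import numpy as np

rng = np.random.default_rng(0)

def classify(batch):
    """Toy stand-in for a pretrained classifier: predicts the channel with the largest mean."""
    return batch.mean(axis=(2, 3)).argmax(axis=1)

# Illustrative "obfuscations" (placeholders only): heavy noise and channel swapping.
obfuscations = {
    "clean": lambda x: x,
    "heavy_noise": lambda x: np.clip(x + rng.normal(0, 0.5, x.shape), 0, 1),
    "channel_swap": lambda x: x[:, ::-1, :, :],
}

images = rng.random((64, 3, 32, 32))   # fake image batch in [0, 1]
labels = classify(images)              # treat clean predictions as the reference

for name, transform in obfuscations.items():
    acc = (classify(transform(images)) == labels).mean()
    print(f"{name:>12s}: agreement with clean predictions = {acc:.2f}")
```
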
    L0Learn: A Scalable Package for Sparse Learning using L0 Regularization
    Rahul Mazumder
    Tim Nonet
    JMLR Machine Learning Open Source Software (MLOSS) (2023)
    Abstract: We introduce L0Learn: an open-source package for sparse regression and classification using L0 regularization. L0Learn implements scalable, approximate algorithms based on coordinate descent and local combinatorial optimization. The package is built using C++ and has a user-friendly R interface. Our experiments indicate that L0Learn can scale to problems with millions of features, achieving competitive run times with state-of-the-art sparse learning packages. L0Learn is available on both CRAN and GitHub.
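For intuition, the sketch below shows the coordinate-descent-with-hard-thresholding step at the heart of L0-regularized regression. L0Learn itself adds local combinatorial search, warm starts over regularization paths, and an optimized C++/R implementation, none of which are reproduced here; the problem sizes and regularization value are illustrative.

```python
# A minimal numpy sketch of coordinate descent for
#   min_b  0.5 * ||y - X b||^2 + lam * ||b||_0
# Each coordinate is set to its unpenalized minimizer and kept only if doing so
# reduces the objective by more than lam (otherwise it is hard-thresholded to zero).
import numpy as np

def l0_coordinate_descent(X, y, lam, n_iters=100):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)       # ||x_j||^2 for each column
    r = y - X @ b                        # current residual
    for _ in range(n_iters):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's contribution
            t = X[:, j] @ r / col_sq[j]  # unpenalized coordinate-wise minimizer
            # keep the coordinate only if it improves the objective by more than lam
            b[j] = t if 0.5 * col_sq[j] * t ** 2 > lam else 0.0
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta_true = np.zeros(50); beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.normal(size=200)
print(np.nonzero(l0_coordinate_descent(X, y, lam=1.0))[0])  # recovers the first 5 features
```
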
    Abstract: Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to 27% in Fréchet Inception Distance for image generation and 3% in test accuracy for domain adaptation.
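As a rough illustration of the principle rather than the paper's actual rule, the sketch below nudges the adversary's learning rate so that the discriminator's observed loss stays near its known equilibrium value, log 4, for the standard GAN objective. The multiplicative update, its rate, and the learning-rate bounds are all illustrative assumptions.

```python
# Sketch: keep the discriminator's loss near its ideal (equilibrium) value by adapting
# its learning rate. The update rule here is a simple illustration, not the paper's scheduler.
import math

IDEAL_D_LOSS = 2 * math.log(2)   # = log(4): discriminator BCE loss at the GAN equilibrium

def adapt_adversary_lr(lr, observed_d_loss, rate=0.02, lr_min=1e-6, lr_max=1e-2):
    """Shrink the adversary's LR when it is winning (loss below ideal), grow it otherwise."""
    if observed_d_loss < IDEAL_D_LOSS:
        lr *= 1.0 - rate          # discriminator too strong: slow it down
    else:
        lr *= 1.0 + rate          # discriminator too weak: speed it up
    return min(max(lr, lr_min), lr_max)

# Example: a discriminator whose loss drifts below the ideal value sees its LR decay.
lr = 1e-3
for d_loss in [1.40, 1.25, 1.10, 0.95, 1.45]:
    lr = adapt_adversary_lr(lr, d_loss)
    print(f"d_loss={d_loss:.2f} -> adversary lr={lr:.6f}")
```
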
    Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
    Riade Benbaki
    Wenyu Chen
    Xiang Meng
    Natalia Ponomareva
    Zhe Zhao
    Rahul Mazumder
    International Conference on Machine Learning (ICML) (2023)
    Abstract: The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While useful, these techniques often face serious tradeoffs between computational requirements and compression quality. In this work, we propose a novel optimization-based pruning framework that considers the combined effect of pruning (and updating) multiple weights subject to a sparsity constraint. Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and results in significant improvements in speed, memory, and performance over existing optimization-based approaches for network pruning. CHITA's main workhorse performs combinatorial optimization updates on a memory-friendly representation of local quadratic approximations of the loss function. On a standard benchmark of pretrained models and datasets, CHITA leads to significantly better sparsity-accuracy tradeoffs than competing methods. For example, for MLPNet with only 2% of the weights retained, our approach improves the accuracy by 63% relative to the state of the art. Furthermore, when used in conjunction with fine-tuning SGD steps, our method achieves significant accuracy gains over state-of-the-art approaches.
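Since CHITA extends the classical Optimal Brain Surgeon (OBS) framework, the sketch below shows the single-weight OBS step it generalizes: under a local quadratic model with inverse Hessian H⁻¹, zeroing weight q costs w_q² / (2 [H⁻¹]_qq), and the surviving weights receive a compensating update. CHITA itself prunes and updates many weights jointly under a sparsity constraint and avoids forming H⁻¹ explicitly; this is only the classical baseline, with a synthetic quadratic model.

```python
# Classical Optimal Brain Surgeon step on a local quadratic loss model (numpy sketch).
import numpy as np

def obs_prune_one(w, H_inv):
    """Prune the weight with the smallest OBS saliency and update the remaining weights."""
    saliency = w ** 2 / (2.0 * np.diag(H_inv))    # loss increase from zeroing each weight
    q = int(np.argmin(saliency))
    delta = -w[q] / H_inv[q, q] * H_inv[:, q]     # compensating update for all weights
    w_new = w + delta
    w_new[q] = 0.0                                # the pruned coordinate is exactly zero
    return w_new, q

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 10))
H = A.T @ A + 1e-3 * np.eye(10)                   # a PSD "Hessian" for the quadratic model
w = rng.normal(size=10)
w, q = obs_prune_one(w, np.linalg.inv(H))
print("pruned coordinate:", q, "nonzeros left:", np.count_nonzero(w))
```
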
    COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search
    Shibal Ibrahim
    Wenyu Chen
    Natalia Ponomareva
    Zhe Zhao
    Rahul Mazumder
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) (2023)
    Abstract: The sparse Mixture-of-Experts (Sparse-MoE) framework efficiently scales up model capacity in various domains, such as natural language processing and vision. Sparse-MoEs select a subset of the “experts” (thus, only a portion of the overall network) for each input sample using a sparse, trainable gate. Existing sparse gates are prone to convergence and performance issues when trained with first-order optimization methods. In this paper, we introduce two improvements to current MoE approaches. First, we propose a new sparse gate, COMET, which relies on a novel tree-based mechanism. COMET is differentiable, can exploit sparsity to speed up computation, and outperforms state-of-the-art gates. Second, due to the challenging combinatorial nature of sparse expert selection, first-order methods are typically prone to low-quality solutions. To deal with this challenge, we propose a novel, permutation-based local search method that can complement first-order methods in training any sparse gate, e.g., Hash routing, Top-k, DSelect-k, and COMET. We show that local search can help networks escape bad initializations or solutions. We performed large-scale experiments on various domains, including recommender systems, vision, and natural language processing. On standard vision and recommender systems benchmarks, COMET+ (COMET with local search) achieves up to 13% improvement in ROC AUC over popular gates, e.g., Hash routing and Top-k, and up to 9% over prior differentiable gates, e.g., DSelect-k. When Top-k and Hash gates are combined with local search, we see up to a 100× reduction in the budget needed for hyperparameter tuning. Moreover, for language modeling, our approach improves over the state-of-the-art MoEBERT model for distilling BERT on 5 of 7 GLUE benchmarks as well as on the SQuAD dataset.
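For context, the sketch below implements the Top-k baseline gate that the abstract compares against: each sample is routed to its k highest-scoring experts with renormalized mixture weights. COMET's differentiable tree-based gate and the permutation-based local search are not reproduced here; the shapes and renormalization scheme are illustrative choices.

```python
# A Top-k sparse gate's forward pass (numpy sketch): only k experts per sample get
# nonzero mixture weight, which is what makes conditional computation possible.
import numpy as np

def topk_gate(x, W_gate, k=2):
    """Route each input to its k highest-scoring experts with renormalized weights."""
    logits = x @ W_gate                                # (batch, n_experts) gate scores
    idx = np.argsort(-logits, axis=1)[:, :k]           # indices of the top-k experts
    mask = np.zeros_like(logits)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    scores = np.where(mask > 0, np.exp(logits), 0.0)   # zero out the non-selected experts
    return scores / scores.sum(axis=1, keepdims=True)  # sparse convex mixture weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))                           # a small batch of inputs
W_gate = rng.normal(size=(16, 8))                      # learned gate weights for 8 experts
weights = topk_gate(x, W_gate, k=2)
print((weights > 0).sum(axis=1))                       # exactly 2 experts per sample
```
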
    DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning
    Zhe Zhao
    Aakanksha Chowdhery
    Maheswaran Sathiamoorthy
    Yihua Chen
    Rahul Mazumder
    Lichan Hong
    35th Conference on Neural Information Processing Systems (NeurIPS) (2021)
    The Tree Ensemble Layer: Differentiability meets Conditional Computation
    Natalia Ponomareva
    Petros Mol
    Zhenyu Tan
    Rahul Mazumder
    The 37th International Conference on Machine Learning (ICML) (2020)
    Abstract: Neural networks and tree ensembles are state-of-the-art learners, each with its unique statistical and computational advantages. We aim to combine these advantages by introducing a new layer for neural networks, composed of an ensemble of differentiable decision trees (a.k.a. soft trees). While differentiable trees demonstrate promising results in the literature, they are typically slow in training and inference as they do not support conditional computation. We mitigate this issue by introducing a new sparse activation function for sample routing, and implement true conditional computation by developing specialized forward and backward propagation algorithms that exploit sparsity. Our efficient algorithms pave the way for jointly training deep and wide tree ensembles using first-order methods (e.g., SGD). Experiments on 23 classification datasets indicate over 10x speed-ups compared to the differentiable trees used in the literature and over 20x reduction in the number of parameters compared to gradient boosted trees, while maintaining competitive performance. Moreover, experiments on CIFAR, MNIST, and Fashion MNIST indicate that replacing dense layers in CNNs with our tree layer reduces the test loss by 7-53% and the number of parameters by 8x. We provide an open-source TensorFlow implementation with a Keras API.
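The sketch below illustrates the building block described here, a single differentiable ("soft") decision tree: each internal node routes a sample left or right with a learned probability, and the output is the routing-probability-weighted sum of leaf values. A plain sigmoid is used for simplicity; the paper's sparse activation, which produces exact zeros and hence true conditional computation, is not reproduced, and the shapes are illustrative.

```python
# Forward pass of one soft decision tree of depth `depth` (numpy sketch).
import numpy as np

def soft_tree_forward(x, W, leaves, depth):
    """x: (batch, d_in); W: (d_in, 2**depth - 1) internal-node weights in heap order;
    leaves: (2**depth, d_out) leaf values."""
    s = 1.0 / (1.0 + np.exp(-(x @ W)))        # probability of routing left at each node
    out = np.zeros((x.shape[0], leaves.shape[1]))
    for leaf in range(2 ** depth):
        prob, node = np.ones(x.shape[0]), 0
        for level in range(depth):
            go_left = ((leaf >> (depth - 1 - level)) & 1) == 0
            prob = prob * (s[:, node] if go_left else 1.0 - s[:, node])
            node = 2 * node + (1 if go_left else 2)   # descend the heap-ordered tree
        out += prob[:, None] * leaves[leaf]           # leaf value weighted by path probability
    return out

rng = np.random.default_rng(0)
depth, d_in, d_out = 3, 8, 2
x = rng.normal(size=(5, d_in))
W = rng.normal(size=(d_in, 2 ** depth - 1))
leaves = rng.normal(size=(2 ** depth, d_out))
print(soft_tree_forward(x, W, leaves, depth).shape)   # -> (5, 2)
```
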
    Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives
    Antoine Dedieu
    Rahul Mazumder
    Journal of Machine Learning Research (JMLR) (2021)
    Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization
    Rahul Mazumder
    Ali Saab
    Mathematical Programming (2021)
    Learning Hierarchical Interactions at Scale: A Convex Optimization Approach
    Rahul Mazumder
    The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS) (2020)
    Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms
    Rahul Mazumder
    Operations Research (2020)