Zakaria Haque

Authored Publications
    TFX: A TensorFlow-Based Production-Scale Machine Learning Platform
    Akshay Naresh Modi
    Chiu Yuen Koo
    Chuan Yu Foo
    Clemens Mewald
    Denis M. Baylor
    Jarek Wilkiewicz
    Levent Koc
    Lukasz Lew
    Martin A. Zinkevich
    Mustafa Ispir
    Neoklis Polyzotis
    Steven Whang
    Sudip Roy
    Sukriti Ramesh
    Vihan Jain
    Xin Zhang
    KDD 2017
    Abstract: Creating and maintaining a platform for reliably producing and deploying machine learning models requires careful orchestration of many components: a learner for generating models based on training data, modules for analyzing and validating both data and models, and infrastructure for serving models in production. This becomes particularly challenging when data changes over time and fresh models need to be produced continuously. Unfortunately, such orchestration is often done ad hoc using glue code and custom scripts developed by individual teams for specific use cases, leading to duplicated effort and fragile systems with high technical debt. We present TensorFlow Extended (TFX), a TensorFlow-based general-purpose machine learning platform implemented at Google. By integrating the aforementioned components into one platform, we were able to standardize the components, simplify the platform configuration, and reduce the time to production from the order of months to weeks, while providing platform stability that minimizes disruptions. We present a case study of one deployment of TFX in the Google Play app store, where the machine learning models are refreshed continuously as new data arrive. Deploying TFX led to reduced custom code, faster experiment cycles, and a 2% increase in app installs resulting from improved data and model analysis.
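    The orchestration loop the abstract describes (analyze and validate data, train, validate the model against a baseline, push only improvements to serving) can be sketched in a few lines. The following is a minimal, hypothetical Python sketch, not the actual TFX API: every name here (analyze_data, validate_data, train_model, run_pipeline) is an illustrative stand-in.

    from statistics import mean

    def analyze_data(examples):
        # Data-analysis module: summary statistics later used for validation.
        return {"count": len(examples),
                "mean": mean(x for x, _ in examples) if examples else None}

    def validate_data(stats):
        # Data-validation module: reject degenerate batches before training.
        if stats["count"] == 0:
            raise ValueError("no training data")

    def train_model(examples):
        # Stand-in "learner": a threshold classifier at the feature mean.
        threshold = mean(x for x, _ in examples)
        return lambda x: int(x > threshold)

    def accuracy(model, examples):
        return mean(int(model(x) == y) for x, y in examples)

    def run_pipeline(train_examples, eval_examples, baseline=None):
        stats = analyze_data(train_examples)
        validate_data(stats)
        model = train_model(train_examples)
        # Model validation: push the fresh model only if it does not
        # regress relative to the currently serving baseline.
        if baseline is None or accuracy(model, eval_examples) >= accuracy(baseline, eval_examples):
            print("pushing model to serving")
        return model

    data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
    run_pipeline(data, data)

    Gating the push on a model-validation check is the key design point: a continuously retrained model never replaces the serving baseline unless it holds up on evaluation data.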
    TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
    Cassandra Xia
    Clemens Mewald
    George Roumpos
    Illia Polosukhin
    Jamie Alexander Smith
    Jianwei Xie
    Lichan Hong
    Mustafa Ispir
    Philip Daniel Tucker
    Yuan Tang
    Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Canada (2017)
    Abstract: We present a framework for specifying, training, evaluating, and deploying machine learning models. Our focus is to simplify writing cutting-edge machine learning models in a way that enables bringing those models into production. Recognizing the fast evolution of the field of deep learning, we make no attempt to capture the design space of all possible model architectures in a DSL or similar configuration language. We allow users to write code to define their models, but provide abstractions that guide developers to write models in ways conducive to productionization, along with a unifying Estimator interface that makes it possible to write downstream infrastructure (distributed training, hyperparameter tuning, …) independently of the model implementation. We balance the competing demands for flexibility and simplicity by offering APIs at different levels of abstraction, making common model architectures available "out of the box" while providing a library of utilities designed to speed up experimentation with model architectures. To make out-of-the-box models flexible and usable across a wide range of problems, these canned Estimators are parameterized not only over traditional hyperparameters, but also via feature columns, a declarative specification describing how to interpret input data. We discuss our experience using this framework in research and production environments, and show its impact on code health, maintainability, and development speed.
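    As a concrete illustration, here is a minimal sketch of the canned-Estimator workflow using the tf.estimator and tf.feature_column APIs (the TensorFlow 1.x-era interface, since deprecated); the feature names and toy data are assumptions for the example, not taken from the paper.

    import tensorflow as tf

    # Feature columns: a declarative spec of how to interpret input data.
    price = tf.feature_column.numeric_column('price')
    country = tf.feature_column.categorical_column_with_vocabulary_list(
        'country', ['US', 'CA', 'MX'])

    # A canned Estimator, parameterized over hyperparameters (hidden_units)
    # and feature columns rather than hand-written model code.
    estimator = tf.estimator.DNNClassifier(
        feature_columns=[price, tf.feature_column.indicator_column(country)],
        hidden_units=[32, 16],
        n_classes=2)

    def input_fn():
        # Toy in-memory data; real pipelines read from files or services.
        features = {'price': [1.0, 2.5, 0.3, 4.1],
                    'country': ['US', 'CA', 'US', 'MX']}
        labels = [0, 1, 0, 1]
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2).repeat()

    # Downstream infrastructure talks to the unified Estimator interface
    # (train/evaluate/predict), independent of the model implementation.
    estimator.train(input_fn=input_fn, steps=100)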
    Wide & Deep Learning for Recommender Systems
    Levent Koc
    Tal Shaked
    Glen Anderson
    Wei Chai
    Mustafa Ispir
    Rohan Anil
    Lichan Hong
    Vihan Jain
    Xiaobing Liu
    Hemal Shah
    arXiv:1606.07792 (2016)
    Abstract: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning, jointly trained wide linear models and deep neural networks, to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models.
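    As a hedged sketch, the jointly trained wide-and-deep architecture maps onto TensorFlow's open-source tf.estimator.DNNLinearCombinedClassifier; the feature names, vocabularies, and toy data below are assumptions for illustration, not the paper's production setup.

    import tensorflow as tf

    # Sparse categorical inputs.
    app = tf.feature_column.categorical_column_with_hash_bucket('app', 1000)
    query = tf.feature_column.categorical_column_with_hash_bucket('query', 1000)

    # Wide side: cross-product feature transformations for memorization.
    wide_columns = [
        app, query,
        tf.feature_column.crossed_column(['app', 'query'], hash_bucket_size=10000),
    ]

    # Deep side: low-dimensional dense embeddings for generalization.
    deep_columns = [
        tf.feature_column.embedding_column(app, dimension=8),
        tf.feature_column.embedding_column(query, dimension=8),
    ]

    # Jointly trained wide linear model and deep neural network.
    model = tf.estimator.DNNLinearCombinedClassifier(
        linear_feature_columns=wide_columns,
        dnn_feature_columns=deep_columns,
        dnn_hidden_units=[64, 32])

    def input_fn():
        features = {'app': ['maps', 'mail', 'maps', 'chat'],
                    'query': ['navigate', 'inbox', 'navigate', 'message']}
        labels = [1, 0, 1, 0]  # e.g. whether the recommended app was installed
        return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2).repeat()

    model.train(input_fn=input_fn, steps=100)

    The two sides are trained jointly against a single loss, so the wide crosses memorize specific co-occurrences while the embeddings generalize to unseen feature combinations.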