Kevin Regan

Kevin received his PhD from the University of Toronto, where he focused on models of decision making and preference elicitation. Recently he has been working on large-scale machine learning and ad click prediction at Google Pittsburgh.
Authored Publications
    We present a formulation of deep learning that aims to produce a large margin classifier. The notion of margin has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms apply only to shallow models with preset feature representations, and existing margin methods for neural networks either enforce margin only at the output layer or rely on weak approximations to the true margin. This keeps margin methods inaccessible to models such as deep networks. In this paper, we propose a novel loss function that imposes a margin on any set of layers of a deep network, and we show promising empirical results that consistently outperform cross-entropy based models across application scenarios such as adversarial examples and generalization from small training sets. Our formulation allows choosing any norm for the margin. The resulting loss is general and complementary to existing regularization techniques such as weight decay, dropout, and batch norm, and it is applicable to any classification task where cross-entropy is used.
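    The core idea in the abstract, a hinge loss on a first-order approximation of the distance to each decision boundary, can be sketched for the special case of a linear model, where the approximation is exact. This is an illustrative sketch only (the function name, the Euclidean norm choice, and the linear setting are assumptions, not the paper's implementation); for a deep network the same loss would be applied to intermediate-layer representations using gradients of the logit differences.

    ```python
    import numpy as np

    def large_margin_loss(W, X, y, gamma=1.0):
        """Hinge loss on the signed distance to each decision boundary.

        Shown for a linear classifier f(x) = W @ x, where the
        first-order distance (f_yi - f_j) / ||grad_x (f_yi - f_j)||
        is exact, since grad_x (f_yi - f_j) = w_yi - w_j.
        """
        losses = []
        for x, yi in zip(X, y):
            logits = W @ x
            worst = 0.0
            for j in range(W.shape[0]):
                if j == yi:
                    continue
                # Signed distance from x to the boundary between classes yi and j.
                dist = (logits[yi] - logits[j]) / np.linalg.norm(W[yi] - W[j])
                # Penalize any boundary closer than the target margin gamma.
                worst = max(worst, max(0.0, gamma - dist))
            losses.append(worst)
        return float(np.mean(losses))
    ```

    For example, with identity weights `W = [[1, 0], [0, 1]]`, input `[1, 0]`, and label 0, the distance to the single boundary is 1/sqrt(2), so with gamma = 1 the loss is 1 - 1/sqrt(2); cross-entropy, by contrast, would keep pushing logits apart with no geometric target.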
    Regret-based Reward Elicitation for Markov Decision Processes
    CoRR, vol. abs/1205.2619 (2012)
    Robust Online Optimization of Reward-Uncertain MDPs
    IJCAI (2011), pp. 2165-2171
    Eliciting Additive Reward Functions for Markov Decision Processes
    IJCAI (2011), pp. 2159-2164
    Robust Policy Computation in Reward-Uncertain MDPs Using Nondominated Policies
    Simultaneous Elicitation of Preference Features and Utility
    Paolo Viappiani
    AAAI (2010)
    Online feature elicitation in interactive optimization
    Paolo Viappiani
    ICML (2009), pp. 73-80
    Regret-based Reward Elicitation for Markov Decision Processes
    UAI (2009), pp. 444-451
    Preference elicitation with subjective features
    Paolo Viappiani
    RecSys (2009), pp. 341-344
    An Analytic Solution to Discrete Bayesian Reinforcement Learning
    Pascal Poupart
    Nikos Vlassis
    Jesse Hoey
    ICML (2006)
    Bayesian Reputation Modeling in E-Marketplaces Sensitive to Subjectivity, Deception and Change
    Pascal Poupart
    Robin Cohen
    AAAI (2006)