Si Si
Si Si (PhD in computer science, University of Texas at Austin, 2016) is a Research Scientist at Google. Si conducts research in many areas of machine learning, including large-scale learning and on-device machine learning. More information can be found on her webpage (https://springdaisy.github.io/).
Authored Publications
Serving Graph Compression for Graph Neural Networks
Cho-Jui Hsieh
International Conference on Learning Representations (2023) (to appear)
Abstract
Serving a GNN model in online applications is challenging --- one has to propagate the information from training nodes to testing nodes to achieve the best performance, while storing the whole training set (including the training graph and node features) at inference time is prohibitive for most real-world applications. We tackle this serving space compression problem in this paper, where the goal is to compress the storage requirement for GNN serving. Given a model to be served, the proposed method constructs a small set of virtual representative nodes to replace the original training nodes, so that users only need to replace the original training set with this virtual representative set to reduce the space requirement for serving, without changing the actual GNN model or the forward pass.
We carefully analyze the error in the forward pass and derive simple ways to construct the node features and graph of virtual representative nodes to minimize the approximation error. Experimental results demonstrate that the proposed method can significantly reduce the serving space requirement for GNN inference.
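A minimal sketch of the serving-time idea, assuming a simple mean-aggregation GNN: the training set is replaced by a handful of virtual representative nodes, and test nodes aggregate only from those. The k-means construction below is only an illustrative stand-in for the paper's error-minimizing construction, and the function names (propagate, build_virtual_set, serve) are hypothetical.

# Illustrative sketch (not the paper's exact construction): replace the training
# graph with a small set of "virtual" nodes so that GNN inference for a test node
# only needs the compressed set. Here the virtual node features are k-means
# centroids of propagated training features; the paper instead derives the
# construction by minimizing the forward-pass approximation error.
import numpy as np
from sklearn.cluster import KMeans

def propagate(adj, feats, num_hops=2):
    """Simple mean-over-neighbors feature propagation, standing in for the
    GNN's message passing during preprocessing."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = feats
    for _ in range(num_hops):
        h = adj @ h / deg
    return h

def build_virtual_set(adj_train, x_train, num_virtual=64):
    """Compress the training set into `num_virtual` representative nodes."""
    h = propagate(adj_train, x_train)
    km = KMeans(n_clusters=num_virtual, n_init=10, random_state=0).fit(h)
    return km.cluster_centers_          # virtual node features, shape (num_virtual, d)

def serve(test_feat, virtual_feats, gnn_layer):
    """At serving time, a test node aggregates only from the virtual nodes
    (here: from its nearest virtual representative)."""
    nearest = np.argmin(np.linalg.norm(virtual_feats - test_feat, axis=1))
    neighborhood = np.vstack([test_feat, virtual_feats[nearest]])
    return gnn_layer(neighborhood.mean(axis=0))   # plug in the served GNN's layer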
How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework
Cho-Jui Hsieh
Tesi Xiao
Xuanqing Liu
Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
Abstract
Neural Ordinary Differential Equation (Neural ODE) has been proposed as a continuous approximation to the traditional ResNet structure. However, the resulting ODE system is often unstable---a small input perturbation will be amplified through the ODE system and could eventually blow up. In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) which injects random noise by forming a stochastic differential equation. Our framework can model different noise injection regularization techniques in discrete networks, such as dropout and additive/multiplicative noise injection at each block. We provide a theoretical analysis showing the improved robustness of Neural SDE against small input perturbations. Furthermore, we show that the Neural SDE framework can achieve better generalization error than Neural ODE on real datasets and is more stable to small adversarial and non-adversarial input perturbations in practice.
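A minimal sketch of the Neural SDE idea in PyTorch, assuming an Euler-Maruyama discretization of dx = f(x) dt + sigma dW; the drift network, constant diffusion coefficient, and step count are illustrative choices rather than the paper's exact parameterization.

import torch
import torch.nn as nn

class NeuralSDEBlock(nn.Module):
    """Continuous residual block: dx = f(x) dt + sigma dW_t, integrated with a
    simple Euler-Maruyama scheme. The injected Gaussian noise plays the role of
    dropout-style regularization in discrete networks."""
    def __init__(self, dim, sigma=0.1, num_steps=10, t_span=1.0):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.sigma = sigma
        self.num_steps = num_steps
        self.dt = t_span / num_steps

    def forward(self, x):
        for _ in range(self.num_steps):
            # noise is injected during training; the deterministic drift is used at test time
            noise = torch.randn_like(x) if self.training else torch.zeros_like(x)
            x = x + self.drift(x) * self.dt + self.sigma * (self.dt ** 0.5) * noise
        return x

# Example: x = torch.randn(32, 128); out = NeuralSDEBlock(128)(x)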
Abstract
Existing attention mechanisms are mostly point-based, in that a model is designed to attend to a single item in a collection of items (the memory). Intuitively, an area in the memory that may contain multiple items can be worth attending to as well. Although Softmax, which is typically used for computing attention alignments, assigns non-zero probability to every item in memory, it tends to converge to a single item and cannot efficiently attend to a group of items that matter. We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences. Importantly, the size of an area, i.e., the number of items in an area, can vary depending on the learned coherence of the adjacent items. By using an area of items instead of a single item, we hope attention mechanisms can better capture the nature of the task. Area attention can work alongside multi-head attention to attend to multiple areas in the memory. We evaluate area attention on two tasks, character-level neural machine translation and image captioning, and improve upon strong (state-of-the-art) baselines in both cases. In addition to proposing the novel concept of area attention, we contribute an efficient way to compute it by leveraging the technique of summed area tables.
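A rough sketch of 1-D area attention for a single query, where an area's key is the mean of its item keys and its value is the sum of its item values, both computed from prefix sums (the 1-D analogue of a summed area table); the maximum area size and the explicit enumeration of spans are simplifications for clarity, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def area_attention_1d(query, keys, values, max_area=3):
    """Sketch of 1-D area attention: enumerate all contiguous spans of length
    1..max_area, form each area's key (mean of item keys) and value (sum of item
    values) from cumulative sums, then attend over areas with softmax.
    query: (d,), keys/values: (n, d)."""
    n, d = keys.shape
    zero = torch.zeros(1, d)
    key_cum = torch.cat([zero, keys.cumsum(dim=0)])     # prefix sums: (n+1, d)
    val_cum = torch.cat([zero, values.cumsum(dim=0)])
    area_keys, area_vals = [], []
    for width in range(1, max_area + 1):
        for start in range(0, n - width + 1):
            k_sum = key_cum[start + width] - key_cum[start]
            v_sum = val_cum[start + width] - val_cum[start]
            area_keys.append(k_sum / width)              # mean key over the span
            area_vals.append(v_sum)                      # summed value over the span
    area_keys = torch.stack(area_keys)                   # (num_areas, d)
    area_vals = torch.stack(area_vals)
    scores = area_keys @ query / d ** 0.5
    weights = F.softmax(scores, dim=0)
    return weights @ area_vals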
Abstract
Neural language models have been widely used in various NLP tasks, including machine translation, next-word prediction, and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is computing the top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm that exploits the clustering structure of context vectors. Our algorithm uses a lightweight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging, as traditional clustering methods are discrete and non-differentiable and thus cannot be used with back-propagation during training. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit the data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in tasks such as beam search in machine translation and next-word prediction. For example, on a German-to-English machine translation task with a vocabulary of around 25K, we achieve a 20.4x speedup with 98.9% precision@1 and 99.3% precision@5 relative to the original softmax layer's predictions, while the state of the art (Zhang et al., 2018) achieves only a 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 on the same task.
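A simplified sketch of the screening idea in PyTorch: a lightweight model picks a cluster of candidate words for the given context, and the exact softmax is computed only over that cluster. The static word-to-cluster partition is a placeholder, and the full end-to-end training recipe (propagating gradients through the cluster choice via the Gumbel softmax) is only hinted at here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScreenedSoftmax(nn.Module):
    """Sketch: a small screening model assigns the context vector to one of
    `num_clusters` word clusters; the exact softmax is then computed only over
    that cluster's candidate words. The cluster-to-word assignment below is a
    fixed illustrative partition, not the learned one from the paper."""
    def __init__(self, hidden_dim, vocab_size, num_clusters=64):
        super().__init__()
        self.out_embed = nn.Linear(hidden_dim, vocab_size, bias=False)  # full softmax layer weights
        self.screener = nn.Linear(hidden_dim, num_clusters)             # lightweight screening model
        self.register_buffer("word2cluster", torch.arange(vocab_size) % num_clusters)

    def forward(self, context, tau=1.0):
        # `context` is a single context vector of shape (hidden_dim,)
        cluster_logits = self.screener(context)
        if self.training:
            # Gumbel-softmax sample of the cluster; the paper uses this to train
            # the screener end-to-end (full straight-through wiring omitted here)
            one_hot = F.gumbel_softmax(cluster_logits, tau=tau, hard=True)
            cluster_id = one_hot.argmax()
        else:
            cluster_id = cluster_logits.argmax()
        candidates = (self.word2cluster == cluster_id).nonzero(as_tuple=True)[0]
        logits = self.out_embed.weight[candidates] @ context             # exact softmax on the subset only
        return candidates, F.log_softmax(logits, dim=-1)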
Abstract
Model compression is essential for serving large deep neural nets on devices with limited resources or in applications that require real-time responses. For advanced NLP problems, a neural language model usually consists of recurrent layers (e.g., using LSTM cells), an embedding matrix for representing input tokens, and a softmax layer for generating output tokens. For problems with a very large vocabulary, the embedding and softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of-the-art performance on the One-Billion-Word (OBW) dataset with a vocabulary of around 800k; its word embedding and softmax matrices use more than 6 GB of space and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). We start by grouping words into c blocks based on their frequency, and then refine the clustering iteratively by constructing a weighted low-rank approximation for each block, where the weights are based on the frequencies of the words in the block. The experimental results show our method significantly outperforms traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieves a 6.6x compression rate for the embedding and softmax matrices, and when combined with quantization, a 26x compression rate without losing prediction accuracy.
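A rough numpy sketch of the block-wise idea: words are sorted by frequency, split into blocks, and each block's rows are compressed with a frequency-weighted truncated SVD (here the weighting is folded in by scaling rows by the square root of the word frequency). The rank schedule and the absence of iterative refinement are simplifications of the method described above, and the function name group_reduce is hypothetical.

import numpy as np

def group_reduce(embedding, word_freqs, num_blocks=4, base_rank=8):
    """Sketch of frequency-grouped, weighted low-rank compression of an
    embedding/softmax matrix. Words are sorted by frequency and split into
    blocks; each block is compressed with a row-weighted truncated SVD so that
    frequent words are approximated more accurately, and higher-frequency
    blocks receive a higher rank.
    Returns, per block: (word indices, U factor, V factor)."""
    order = np.argsort(-word_freqs)                    # most frequent words first
    blocks = np.array_split(order, num_blocks)
    factors = []
    for i, idx in enumerate(blocks):
        A = embedding[idx]                             # (block_size, dim)
        w = np.sqrt(np.maximum(word_freqs[idx], 1.0))[:, None]   # guard against zero counts
        rank = max(1, base_rank * (num_blocks - i))    # more rank for frequent blocks
        U, s, Vt = np.linalg.svd(w * A, full_matrices=False)
        U_r = (U[:, :rank] * s[:rank]) / w             # undo the row weighting
        factors.append((idx, U_r, Vt[:rank]))          # A[idx] is approximated by U_r @ Vt[:rank]
    return factors

# Reconstruct one block to check the approximation:
# idx, U_r, V_r = factors[0]; approx = U_r @ V_r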