My primary research interest is developing generic machine learning algorithms for: (1) Efficient modeling. Learning optimal neural architectures from data instead of designing them manually; (2) Efficient training. Training very large deep learning models an order of magnitude faster; (3) Efficient inference. High-performance neural modeling under extreme memory and latency constraints. I am also interested in developing algorithms toward true natural language understanding. For more information about me, please visit my personal homepage and Google Scholar.