He is broadly interested in machine learning (ML) and natural language processing (NLP). His primary research focus is the development of practical large-scale learning methods for natural language and knowledge. He has proposed many original learning algorithms in this area, e.g., an algorithm for computing discrete kernels over sophisticated grammatical and semantic structures, a large-scale semi-supervised learning algorithm that yields compact models with state-of-the-art performance on target tasks, and learning algorithms for neural word embeddings that enable incremental training and the incorporation of out-of-vocabulary words. He has published many papers at leading NLP and ML conferences and in journals.
In recent years, he has also been investigating the interpretability of deep neural networks, including interpretable adversarial training, as a new research project. His projects at Google explore how to improve the interpretability of neural text generation models, such as neural language models and encoder-decoder models. Specifically, he is developing "evidence-driven" prediction methods, an approach to interpretable modeling, to provide more meaningful feedback to users and developers.