I'm a Machine Learning research scientist working on generative modeling, semi-supervised and unsupervised deep learning, and reinforcement learning. I'm best known for my work on GANs, showing how they can be used for semi-supervised learning and how they can be evaluated using the Inception score, and for my work on VAEs. I did some of the early work on variational inference using reparameterization, for which I received the 2014 Lindley Prize. I also like to tackle practical problems and have won multiple Kaggle competitions. I have a PhD from Erasmus University Rotterdam.
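For readers unfamiliar with the reparameterization idea mentioned above, here is a minimal, generic sketch (illustrative only, not code from any of the papers listed below): a sample z ~ N(mu, sigma^2) is rewritten as a deterministic function of the parameters and standard normal noise, so a Monte Carlo estimate of an expectation remains differentiable in mu and sigma.

```python
# Minimal sketch of the reparameterization trick (illustrative only).
# A sample z ~ N(mu, sigma^2) is rewritten as z = mu + sigma * eps with
# eps ~ N(0, 1), so z is a deterministic, differentiable function of
# (mu, log_sigma) given the noise eps.
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_sample(mu, log_sigma, n_samples=1000):
    """Draw samples of z = mu + exp(log_sigma) * eps, with eps ~ N(0, 1)."""
    eps = rng.standard_normal((n_samples,) + np.shape(mu))
    return mu + np.exp(log_sigma) * eps

# Monte Carlo estimate of E[z^2] = mu^2 + sigma^2; because the randomness
# lives entirely in eps, gradients with respect to (mu, log_sigma) could be
# taken through this estimate in an autodiff framework.
mu, log_sigma = 0.5, -1.0
z = reparameterized_sample(mu, log_sigma)
print(np.mean(z ** 2))
```

In variational inference this construction is what allows gradients of the evidence lower bound to be estimated with low variance by ordinary backpropagation.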
Selected publications:
Improved techniques for training GANs
T Salimans, I Goodfellow, W Zaremba, V Cheung, A Radford, X Chen
Advances in Neural Information Processing Systems, 2234-2242
Weight normalization: A simple reparameterization to accelerate training of deep neural networks
T Salimans, DP Kingma
Advances in Neural Information Processing Systems, 901-909
Improved variational inference with inverse autoregressive flow
DP Kingma, T Salimans, R Jozefowicz, X Chen, I Sutskever, M Welling
Advances in Neural Information Processing Systems, 4743-4751
Variational dropout and the local reparameterization trick
DP Kingma, T Salimans, M Welling
Advances in Neural Information Processing Systems, 2575-2583
Markov chain Monte Carlo and variational inference: Bridging the gap
T Salimans, DP Kingma, M Welling
International Conference on Machine Learning, 1218-1226
Fixed-form variational posterior approximation through stochastic linear regression
T Salimans, DA Knowles
Bayesian Analysis 8 (4), 837-882