My interests are in machine learning and natural language understanding. I work on the NYC language research team, where we built the state-of-the-art syntactic parsers SyntaxNet and Parsey McParseface, as well as their successors, DRAGNN and ParseySaurus. I focus on large-scale distributed training for modular neural network models that learn from several NLU subtasks simultaneously and accumulate common knowledge across different facets of language understanding (transfer learning). I also worked briefly on transfer learning for computer vision at Stanford.
More recently, I have been looking into cryptography and game theory, discovering and helping fix a large-scale Ethereum smart contract vulnerability.
Before Google, I did competitive programming (placing in the top 12 nationwide in Russia) and studied mathematics. I graduated with a joint BSc/MSc degree in mathematics from Moscow State University (thesis), where I focused on probability theory and discrete mathematics.
Fun fact: I used to be pretty serious about online poker, turning $50 into $25,000 over my freshman and sophomore years of college. The largest bluff I ever called was roughly 4x my monthly living budget at the time. Regrettably, my parents eventually counseled me out of this exciting career path.