Joe Jiang

Wenjie (Joe) Jiang currently works on the Google Brain team, where he focuses on applying ML to system design problems. He previously worked on the network optimization and analysis team, where he applied a variety of mathematical techniques to network capacity planning and demand forecasting. He received a B.S. from USTC, China; an M.Phil. from the Chinese University of Hong Kong; and a Ph.D. from Princeton University, all in Computer Science.
Authored Publications
    Learning Semantic Representations to Verify Hardware Designs
    Shobha Vasudevan
    Rishabh Singh
    Hamid Shojaei
    Richard Ho
    Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) (2021)
    Abstract: We introduce Design2Vec, a representation-learning approach that learns semantic abstractions of hardware designs at the Register Transfer Level (RTL). The key idea is a graph-convolution-based neural architecture that embeds both RTL syntax and semantics. We train the architecture to predict coverage in the design given an input test stimulus, and then present an approach that uses the learned RTL representation to automatically generate new tests for unseen coverage points in the design. Our experimental results demonstrate that Design2Vec outperforms several baseline approaches that do not incorporate RTL semantics, and that it produces coverage predictions nearly instantaneously, compared to nightly simulation times. Moreover, the tests generated using Design2Vec cover design points that are difficult for design verification experts to reach with current manual test-generation approaches.
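
    The sketch below is meant to make the abstract's core mechanism concrete: a graph convolution over an RTL-derived graph, whose pooled node embeddings are combined with an encoded test stimulus to predict coverage. It is a minimal illustration under assumed shapes and a made-up toy graph, not the paper's Design2Vec implementation, and every name in it is hypothetical.

    # Minimal sketch of the idea: graph convolution over an RTL-like graph,
    # pooled into a design embedding and combined with a test stimulus to
    # predict coverage. Toy graph and random weights; illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy RTL graph: 5 nodes (registers/operators), adjacency with self-loops.
    A = np.array([[1, 1, 0, 0, 0],
                  [1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 0, 1, 1]], dtype=float)
    A = A / A.sum(axis=1, keepdims=True)    # row-normalize: mean aggregation

    H = rng.normal(size=(5, 8))             # initial node features (syntax/type)
    W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

    # Two rounds of graph convolution: aggregate neighbors, then transform.
    for W in (W1, W2):
        H = np.tanh(A @ H @ W)

    design_emb = H.mean(axis=0)             # pool nodes into one design vector
    stimulus = rng.normal(size=8)           # encoded test stimulus (assumed given)

    w_out = rng.normal(size=16)
    logit = np.concatenate([design_emb, stimulus]) @ w_out
    p_cover = 1.0 / (1.0 + np.exp(-logit))  # probability of hitting a cover point
    print(f"predicted coverage probability: {p_cover:.3f}")

    In an actual pipeline the weights would be trained against coverage observed in simulation; they are random here only to keep the sketch self-contained.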
    Efficient Imitation Learning with Local Trajectory Optimization
    Jialin Song
    Anna Darling Goldie
    Navdeep Jaitly
    Azalia Mirhoseini
    ICML 2020 Workshop on Inductive Biases, Invariances and Generalization in RL (2020)
    Abstract: Imitation learning is a powerful approach to optimizing sequential decision-making policies from demonstrations. Most imitation learning strategies rely on per-step supervision, either from pre-collected demonstrations, as in behavioral cloning, or from interactive expert policy queries, as in DAgger. In this work, we present a unified view of behavioral cloning and DAgger through the lens of local trajectory optimization, which offers a means of interpolating between them. We provide theoretical justification for the proposed local trajectory optimization algorithm and show empirically that our method, POLISH (Policy Optimization by Local Improvement through Search), is much faster than methods that plan globally, speeding up training by a factor of up to 14 in wall-clock time. Furthermore, the resulting policy outperforms strong baselines in both reinforcement learning and imitation learning.
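
    As a concrete (and deliberately tiny) reading of the interpolation described above, the sketch below mixes the two data-collection regimes with a parameter beta: with probability beta the learner trains on states from expert rollouts (behavioral-cloning-style), otherwise on states it visits itself, relabeled by the expert (DAgger-style). The 1-D task, linear policy, and all names are assumptions; this is not the POLISH algorithm itself, which additionally performs local trajectory optimization through search.

    # Hedged sketch: interpolating between behavioral cloning (beta = 1)
    # and DAgger-style relabeling (beta = 0) on a toy 1-D control task.
    import numpy as np

    rng = np.random.default_rng(0)

    def expert_action(state):
        return -0.5 * state                      # stand-in expert: push state to 0

    def step(state, action):
        return state + action + 0.1 * rng.normal()   # toy noisy dynamics

    def rollout(policy_gain, horizon=20):
        states, s = [], rng.normal()
        for _ in range(horizon):
            states.append(s)
            s = step(s, policy_gain * s)         # linear policy: a = gain * s
        return np.array(states)

    beta, gain = 0.5, 0.0
    data_s, data_a = [], []
    for _ in range(10):
        # Sample states from the expert (BC-like) or the learner (DAgger-like);
        # in both cases, label the visited states with the expert's actions.
        visited = rollout(-0.5) if rng.random() < beta else rollout(gain)
        data_s.extend(visited)
        data_a.extend(expert_action(visited))
        S, A = np.array(data_s), np.array(data_a)
        gain = float(S @ A / (S @ S + 1e-8))     # least-squares fit of a = gain*s
    print(f"learned gain: {gain:.3f} (expert gain: -0.5)")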
    Capacity planning for the Google backbone network
    Ajay Kumar Bangla
    Ben Preskill
    Christoph Albrecht
    Emilie Danna
    Xiaoxue Zhao
    ISMP 2015 (International Symposium on Mathematical Programming) (to appear)
    Abstract: Google operates one of the largest backbone networks in the world. In this talk, we present the optimization and simulation techniques we use to design the network topology and provision its capacity, balancing conflicting objectives such as scale, cost, availability, and latency.
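
    As a toy illustration of the trade-offs the talk addresses, the linear program below provisions capacity on two parallel paths so that full demand is carried in normal operation and a protected fraction survives any single-path failure, at minimum cost. The topology, costs, demand, and protection level are made-up assumptions, not Google's actual formulation.

    # Illustrative capacity-planning LP (assumed numbers, not a real network).
    from scipy.optimize import linprog

    demand = 100.0        # steady-state demand between a site pair
    protect = 0.3         # fraction that must survive a single-path failure
    cost = [1.0, 1.5]     # per-unit capacity cost of the two candidate paths

    # Minimize cost subject to (written as A_ub @ x <= b_ub):
    #   c1 + c2 >= demand             carry full demand in normal operation
    #   c1 >= protect * demand        path 2 fails
    #   c2 >= protect * demand        path 1 fails
    A_ub = [[-1.0, -1.0],
            [-1.0,  0.0],
            [ 0.0, -1.0]]
    b_ub = [-demand, -protect * demand, -protect * demand]

    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(f"path capacities: {res.x}, total cost: {res.fun:.1f}")

    The solver places as much capacity as possible on the cheaper path (70 units) and only the protected minimum (30 units) on the more expensive one; real backbone planning layers many sites, failure scenarios, and latency constraints on top of this basic structure.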