Dave Andersen

It's rumored that his unofficial undergraduate focus involved far too much mountain biking and rock climbing, leading him to flee the mountains for his subsequent education in the east.

His work on energy-efficient computing has received awards ranging from SOSP's best paper award to Carnegie Mellon's Allen Newell Award for Research Excellence. In 1995, he was the CTO of a regional Internet Service Provider in Salt Lake City, and he recently co-founded a (since-acquired) deep learning startup with several of his colleagues at CMU.

He has an abiding passion for creating efficient and secure distributed systems, particularly when the solutions involve integrating practical algorithmic advances or capitalizing on upcoming advances in computer architecture. Most recently, he and his students have engaged in a wide-ranging exploration of how to use network stack bypass techniques to accelerate applications such as packet forwarding and key-value storage.

From his experience at CMU developing a high-performance parameter server to support machine learning, he became convinced that machine learning and large-scale data analysis would be among the key consumers of computation over the next several years, affecting everything from the design of distributed systems down to the decisions of CPU architects.

His goals for his time at Google involved both gaining more familiarity with the cutting edge of machine learning at scale and spending time doing real engineering, to see which parts of Google's highly tuned engineering process might be effectively transitioned back to research and teaching. He therefore joined the Google Brain team to work on problems involving TensorFlow, Google's open source library for numerical computation and machine intelligence, which powers applications ranging from search ranking (RankBrain) to conversational assistants (SmartReply).

His projects at Google involve exploring how to scale the training of deep neural networks to even larger clusters, and making TensorFlow more user-friendly by helping it provide more meaningful feedback to programmers who have misused TensorFlow primitives.