Google Research

TensorFlow-Serving: Flexible, High-Performance ML Serving

Workshop on ML Systems at NIPS 2017


We describe TensorFlow-Serving, a system for serving machine learning models inside Google that is also available in the cloud and as open-source software. It is highly flexible both in the types of ML platforms it supports and in the ways it integrates with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations.

The paper covers the architecture of the extensible serving library, as well as the distributed system for multi-tenant model hosting. Along the way, it points out which extensibility points and performance optimizations proved especially important in light of production experience.
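TensorFlow-Serving's actual library is written in C++, and its real abstractions (servables, sources, loaders, managers) are richer than what fits here. As a rough, hypothetical Python sketch of the model-lookup idea the abstract alludes to — multiple versions of a named model held by a manager, with the newest version served by default — consider the following. All names (`ModelManager`, `load`, `lookup`) are illustrative assumptions, not the real TensorFlow-Serving API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class ModelManager:
    """Hypothetical sketch: maps (model name, version) to a loaded
    'servable' (here just a callable), serving the newest version
    unless a specific one is requested."""
    _servables: Dict[str, Dict[int, Callable]] = field(default_factory=dict)

    def load(self, name: str, version: int, servable: Callable) -> None:
        # A real system would load lazily and manage memory; this just registers.
        self._servables.setdefault(name, {})[version] = servable

    def unload(self, name: str, version: int) -> None:
        self._servables.get(name, {}).pop(version, None)

    def lookup(self, name: str, version: Optional[int] = None) -> Callable:
        versions = self._servables.get(name, {})
        if not versions:
            raise KeyError(f"no versions loaded for model {name!r}")
        if version is None:
            version = max(versions)  # default: newest loaded version
        return versions[version]


mgr = ModelManager()
mgr.load("half_plus_two", 1, lambda x: x / 2 + 2)
mgr.load("half_plus_two", 2, lambda x: x / 2 + 3)
print(mgr.lookup("half_plus_two")(4.0))             # newest version (2)
print(mgr.lookup("half_plus_two", version=1)(4.0))  # pinned older version
```

In the real system, this lookup sits on a hot path shared by many concurrent requests, which is why the paper emphasizes optimizing it (e.g., avoiding lock contention during version transitions) rather than the simple dictionary access shown here.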
