Martin Wicke
Martin Wicke completed his PhD at ETH Zurich on geometric modeling and simulation using particle methods. As a fellow of the Max Planck Center for Visual Computing and Communication and a visiting Assistant Professor at Stanford University, he worked on learning system behavior to speed up fluid simulations, improve routing in sensor networks, and improve localization in indoor environments. He worked on unified solid and fluid simulations at UC Berkeley and on video analysis as VP of R&D at HighlightCam. Before joining Google, Martin worked on consumer-focused products for simplifying and automating CAD and CAM, on video processing and VR, and on using machine learning for developer tools.
Authored Publications
Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures.
We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists.
We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state of the art for both depth and ego-motion. Because we require only simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low-quality uncalibrated video dataset and evaluating on KITTI, where our method ranks among the top-performing prior methods that were trained on KITTI itself.
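To illustrate the 2D photometric component of such an objective, here is a minimal sketch of a masked reconstruction loss; the function name, the L1 penalty, and the normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def masked_photometric_loss(frame, reconstruction, validity_mask):
    """Mean absolute photometric error between a frame and its
    reconstruction from an adjacent frame's depth and ego-motion.
    frame, reconstruction: [H, W, 3]; validity_mask: [H, W] with
    1 = valid (e.g. pixels that project inside the image), 0 = ignore.
    Illustrative sketch, not the paper's code."""
    diff = np.abs(frame - reconstruction) * validity_mask[..., None]
    return diff.sum() / (3.0 * validity_mask.sum() + 1e-8)
```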
TFX: A TensorFlow-Based Production-Scale Machine Learning Platform
Akshay Naresh Modi
Chiu Yuen Koo
Chuan Yu Foo
Clemens Mewald
Denis M. Baylor
Jarek Wilkiewicz
Levent Koc
Lukasz Lew
Martin A. Zinkevich
Mustafa Ispir
Neoklis Polyzotis
Steven Whang
Sudip Roy
Sukriti Ramesh
Vihan Jain
Xin Zhang
Zakaria Haque
KDD 2017
Creating and maintaining a platform for reliably producing and deploying machine learning models requires careful orchestration of many components—a learner for generating models based on training data, modules for analyzing and validating both data as well as models, and finally infrastructure for serving models in production. This becomes particularly challenging when data changes over time and fresh models need to be produced continuously. Unfortunately, such orchestration is often done ad hoc using glue code and custom scripts developed by individual teams for specific use cases, leading to duplicated effort and fragile systems with high technical debt.
We present TensorFlow Extended (TFX), a TensorFlow-based general-purpose machine learning platform implemented at Google. By integrating the aforementioned components into one platform, we were able to standardize the components, simplify the platform configuration, and reduce the time to production from the order of months to weeks, while providing platform stability that minimizes disruptions.
We present the case study of one deployment of TFX in the Google Play app store, where the machine learning models are refreshed continuously as new data arrive. Deploying TFX led to reduced custom code, faster experiment cycles, and a 2% increase in app installs resulting from improved data and model analysis.
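The components described in the abstract later shipped in the open-source TFX libraries. As a rough illustration only (the modern tfx.v1 API postdates this paper, and all paths and names below are placeholders), a minimal pipeline wiring together data ingestion, data analysis, training, and pushing to serving might look like:

```python
from tfx import v1 as tfx  # open-source TFX; released after this paper

# Placeholder paths; adapt to your environment.
DATA_ROOT, PIPELINE_ROOT, SERVING_DIR = "data/", "pipelines/", "serving/"

example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)
statistics_gen = tfx.components.StatisticsGen(        # data analysis
    examples=example_gen.outputs["examples"])
trainer = tfx.components.Trainer(                     # the learner
    module_file="trainer_module.py",                  # user model code
    examples=example_gen.outputs["examples"],
    train_args=tfx.proto.TrainArgs(num_steps=1000),
    eval_args=tfx.proto.EvalArgs(num_steps=100))
pusher = tfx.components.Pusher(                       # serving infra
    model=trainer.outputs["model"],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=SERVING_DIR)))

pipeline = tfx.dsl.Pipeline(
    pipeline_name="tfx_sketch",
    pipeline_root=PIPELINE_ROOT,
    components=[example_gen, statistics_gen, trainer, pusher])
tfx.orchestration.LocalDagRunner().run(pipeline)
```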
TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
Cassandra Xia
Clemens Mewald
George Roumpos
Illia Polosukhin
Jamie Alexander Smith
Jianwei Xie
Lichan Hong
Mustafa Ispir
Philip Daniel Tucker
Yuan Tang
Zakaria Haque
Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Canada (2017)
We present a framework for specifying, training, evaluating, and deploying machine learning models. Our focus is to simplify writing cutting-edge machine learning models in a way that enables bringing those models into production. Recognizing the fast evolution of the field of deep learning, we make no attempt to capture the design space of all possible model architectures in a DSL or similar configuration. We allow users to write code to define their models, but provide abstractions that guide developers to write models in ways conducive to productionization. We also provide a unifying Estimator interface, which makes it possible to write downstream infrastructure (distributed training, hyperparameter tuning, …) independently of the model implementation.
We balance the competing demands for flexibility and simplicity by offering APIs at different levels of abstraction, making common model architectures available “out of the box” while providing a library of utilities designed to speed up experimentation with model architectures. To make out-of-the-box models flexible and usable across a wide range of problems, these canned Estimators are parameterized not only over traditional hyperparameters, but also using feature columns, a declarative specification describing how to interpret input data.
We discuss our experience in using this framework in research and production environments, and show the impact on code health, maintainability, and development speed.
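As a concrete illustration of canned Estimators and feature columns, here is a minimal sketch using the TF 1.x-era API; the feature names and toy data are made up, and the Estimator API has since been deprecated in newer TensorFlow releases:

```python
import tensorflow as tf  # TF 1.x-era Estimator API

# Feature columns declaratively describe how to interpret input data.
age = tf.feature_column.numeric_column("age")
city = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_list(
        "city", ["NYC", "SF", "ZRH"]))

# A canned Estimator, parameterized over hyperparameters (hidden_units)
# and feature columns.
model = tf.estimator.DNNClassifier(
    feature_columns=[age, city],
    hidden_units=[64, 32],
    n_classes=2)

def input_fn():
    # Toy in-memory dataset; real pipelines would read from files.
    features = {"age": [[23.0], [31.0]], "city": [["NYC"], ["SF"]]}
    labels = [0, 1]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

model.train(input_fn=input_fn, steps=10)
```

Because the same input_fn/Estimator contract is shared by all models, the train call above works unchanged for any other canned or custom Estimator.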
Geometry-Based Next Frame Prediction from Monocular Video
We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, along with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth for generating the next frame prediction. Our approach can produce rich next frame predictions which include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video), rather than from a single still image.
We evaluate the proposed approach on the KITTI dataset, a standard dataset for benchmarking tasks relevant to autonomous driving. The proposed method produces results which are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered.
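The geometric core of this approach can be sketched in a few lines: back-project each pixel to 3D using the predicted depth, apply the camera motion, and project back into the image. A simplified illustration assuming a pinhole camera with intrinsics K (not the paper's code):

```python
import numpy as np

def reproject(depth, K, R, t):
    """For each pixel of the current frame, compute its coordinates in
    the next frame, given predicted depth, intrinsics K, and relative
    camera motion (R, t). Returns an [H, W, 2] array of pixel coords."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
    points = np.linalg.inv(K) @ pixels * depth.reshape(-1)  # to 3D
    points = R @ points + t[:, None]                         # ego-motion
    proj = K @ points                                        # back to 2D
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)
```

Sampling the current frame at these coordinates, with the predicted depth attached to each pixel, yields a geometry-aware next-frame prediction.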
TensorFlow: A system for large-scale machine learning
Jianmin Chen
Matthieu Devin
Geoffrey Irving
Manjunath Kudlur
Rajat Monga
Benoit Steiner
Paul Tucker
Vijay Vasudevan
Pete Warden
Yuan Yu
Xiaoqiang Zheng
12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), USENIX Association (2016), pp. 265-283
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous “parameter server” designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
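A minimal illustration of this dataflow model, using the graph-and-session API of TensorFlow 1.x (exposed as tf.compat.v1 in later releases; the toy computation is made up):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Nodes of the dataflow graph: stateless ops, plus a mutable Variable
# holding shared state and an op that mutates that state.
x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.Variable(tf.zeros([3, 1]))        # shared, mutable state
y = tf.matmul(x, w)                      # stateless computation
update = tf.assign(w, w + 1.0)           # state mutation in the graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[6.]]
```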
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Ashish Agarwal
Eugene Brevdo
Craig Citro
Matthieu Devin
Ian Goodfellow
Andrew Harp
Geoffrey Irving
Yangqing Jia
Rafal Jozefowicz
Lukasz Kaiser
Manjunath Kudlur
Dan Mané
Rajat Monga
Chris Olah
Mike Schuster
Jonathon Shlens
Benoit Steiner
Ilya Sutskever
Kunal Talwar
Paul Tucker
Vijay Vasudevan
Pete Warden
Yuan Yu
Xiaoqiang Zheng
tensorflow.org (2015)
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November 2015 and are available at www.tensorflow.org.
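A small illustration of the heterogeneous execution described above: the same graph definition can be pinned to specific devices, and soft placement falls back to an available device (e.g. the CPU) when the requested one is absent. The snippet below again uses the TF 1.x API (tf.compat.v1 in later releases):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0]])
with tf.device("/gpu:0"):                 # falls back to CPU if no GPU
    b = tf.matmul(a, tf.constant([[3.0], [4.0]]))

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))                    # [[11.]]
```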