Michael Rubinstein

See my MIT page for a full list of publications: http://people.csail.mit.edu/mrub/

Authored Publications
    SCOOP: Self-Supervised Correspondence and Optimization-Based Scene Flow
    Itai Lang
    Shai Avidan
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2023
    Scene flow estimation is a long-standing problem in computer vision, where the goal is to find the scene's 3D motion from its consecutive observations. Recently, there has been a research effort to compute scene flow from 3D point clouds. A main approach is to train a regression model that consumes source and target point clouds and outputs the per-point translation vectors. An alternative approach is to learn point correspondences between the point clouds concurrently with a refinement regression of the initial flow. In both approaches the task is very challenging, since the flow is regressed in free 3D space, and a typical solution is to resort to a large annotated synthetic dataset. We introduce SCOOP, a new method for scene flow estimation that can be learned on a small amount of data without ground-truth flow supervision. In contrast to previous works, we train a pure correspondence model focused on learning point feature representations, and initialize the flow as the difference between a source point and its softly corresponding target point. Then, at test time, we directly optimize a flow refinement component with a self-supervised objective, which leads to a coherent flow field between the point clouds. Experiments on widely used datasets demonstrate the performance gains our method achieves over existing leading techniques.
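The flow initialization described in the abstract, where each source point is softly matched to the target cloud in feature space, can be sketched as follows (a toy NumPy illustration, not the paper's implementation; the learned features and the test-time refinement step are omitted):

```python
import numpy as np

def soft_flow_init(source, target, feat_src, feat_tgt, temperature=0.1):
    """Initialize per-point scene flow as the offset from each source point
    to its softly corresponding target point (simplified sketch)."""
    # Pairwise feature distances between source and target points.
    d = np.linalg.norm(feat_src[:, None, :] - feat_tgt[None, :, :], axis=-1)
    # Softmax over target points gives soft correspondence weights.
    w = np.exp(-d / temperature)
    w /= w.sum(axis=1, keepdims=True)
    # Soft target = correspondence-weighted average of target coordinates.
    soft_target = w @ target
    return soft_target - source  # initial flow

# Toy example: with distinct per-point features, a rigid translation
# of the cloud is recovered exactly.
src = np.zeros((4, 3))
tgt = src + np.array([1.0, 0.0, 0.0])
feats = np.eye(4)  # stand-in for learned point features
flow = soft_flow_init(src, tgt, feats, feats, temperature=0.01)
```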
    Prospective validation of smartphone-based heart rate and respiratory rate measurement algorithms
    Sean K Bae
    Yunus Emre
    Jonathan Wang
    Jiang Wu
    Mehr Kashyap
    Si-Hyuck Kang
    Liwen Chen
    Melissa Moran
    Julie Cannon
    Eric Steven Teasley
    Allen Chai
    Neal Wadhwa
    Alejandra Maciel
    Mike McConnell
    Shwetak Patel
    Jim Taylor
    Jiening Zhan
    Ming Po
    Nature Communications Medicine (2022)
    Background: Measuring vital signs plays a key role in both patient care and wellness, but can be challenging outside of medical settings due to the lack of specialized equipment. Methods: In this study, we prospectively evaluated smartphone camera-based techniques for measuring heart rate (HR) and respiratory rate (RR) for consumer wellness use. HR was measured by placing the finger over the rear-facing camera, while RR was measured via a video of the participants sitting still in front of the front-facing camera. Results: In the HR study of 95 participants (with a protocol that included both measurements at rest and post exercise), the mean absolute percent error (MAPE) ± standard deviation of the measurement was 1.6% ± 4.3%, which was significantly lower than the pre-specified goal of 5%. No significant differences in the MAPE were present across colorimeter-measured skin-tone subgroups: 1.8% ± 4.5% for very light to intermediate, 1.3% ± 3.3% for tan and brown, and 1.8% ± 4.9% for dark. In the RR study of 50 participants, the mean absolute error (MAE) was 0.78 ± 0.61 breaths/min, which was significantly lower than the pre-specified goal of 3 breaths/min. The MAE was low in both healthy participants (0.70 ± 0.67 breaths/min) and participants with chronic respiratory conditions (0.80 ± 0.60 breaths/min). Conclusions: These results validate the accuracy of our smartphone camera-based techniques to measure HR and RR across a range of pre-defined subgroups.
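For reference, the two error metrics reported above can be computed as follows (the readings below are hypothetical examples, not study data):

```python
import numpy as np

def mape(measured, reference):
    """Mean absolute percent error, the metric used for the HR evaluation."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(measured - reference) / reference)

def mae(measured, reference):
    """Mean absolute error, the metric used for the RR evaluation (breaths/min)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.mean(np.abs(measured - reference))

# Hypothetical camera-based HR readings vs. reference device (bpm).
hr_cam = [72, 80, 65, 120]
hr_ref = [70, 82, 65, 118]
hr_error = mape(hr_cam, hr_ref)  # about 1.75%
```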
    TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency
    Anna Rohrbach
    Chen Sun
    Cordelia Schmid
    Medhini Narasimhan
    Trevor Darrell
    European Conference on Computer Vision (2022)
    YouTube users looking for instructions for a specific task may spend a long time browsing content trying to find the right video that matches their needs. Creating a visual summary (abridged version of a video) provides viewers with a quick overview and massively reduces search time. In this work, we focus on summarizing instructional videos, an under-explored area of video summarization. In comparison to generic videos, instructional videos can be parsed into semantically meaningful segments that correspond to important steps of the demonstrated task. Existing video summarization datasets rely on manual frame-level annotations, making them subjective and limited in size. To overcome this, we first automatically generate pseudo summaries for a corpus of instructional videos by exploiting two key assumptions: (i) relevant steps are likely to appear in multiple videos of the same task (Task Relevance), and (ii) they are more likely to be described by the demonstrator verbally (Cross-Modal Saliency). We propose an instructional video summarization network that combines a context-aware temporal video encoder and a segment scoring transformer. Using pseudo summaries as weak supervision, our network constructs a visual summary for an instructional video given only video and transcribed speech. To evaluate our model, we collect a high-quality test set, WikiHow Summaries, by scraping WikiHow articles that contain video demonstrations and visual depictions of steps, allowing us to obtain the ground-truth summaries. We outperform several baselines and a state-of-the-art video summarization model on this new benchmark.
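The two pseudo-summary assumptions, Task Relevance and Cross-Modal Saliency, can be illustrated with a toy sketch (the step labels, video ids, and the hardcoded "query" key are illustrative only; the paper operates on video segments and transcribed speech, not string labels):

```python
from collections import Counter

def pseudo_summary(video_steps, transcript, min_videos=2):
    """Keep a step from the query video if it appears in at least
    `min_videos` videos of the same task (Task Relevance) AND is mentioned
    verbally in the query video (Cross-Modal Saliency)."""
    counts = Counter(s for steps in video_steps.values() for s in steps)
    relevant = {s for s, c in counts.items() if c >= min_videos}
    return [s for s in video_steps["query"] if s in relevant and s in transcript]

# Toy corpus of three videos of the same task (illustrative labels).
steps = {
    "query": ["boil water", "add pasta", "dance"],
    "v2": ["boil water", "add pasta"],
    "v3": ["boil water"],
}
spoken = {"boil water", "dance"}  # steps the demonstrator says aloud
summary = pseudo_summary(steps, spoken)
```

Here "add pasta" is task-relevant but unspoken, and "dance" is spoken but appears in only one video, so only "boil water" survives both filters.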
    We wish to automatically predict the "speediness" of moving objects in videos---whether they move faster, at, or slower than their "natural" speed. The core component in our approach is SpeedNet---a novel deep network trained to detect if a video is playing at normal rate, or if it is sped up. SpeedNet is trained on a large corpus of natural videos in a self-supervised manner, without requiring any manual annotations. We show how this single, binary classification network can be used to detect arbitrary rates of speediness of objects. We demonstrate prediction results by SpeedNet on a wide range of videos containing complex natural motions, and examine the visual cues it utilizes for making those predictions. Importantly, we show that through predicting the speed of videos, the model learns a powerful and meaningful space-time representation that goes beyond simple motion cues. We demonstrate how those learned features can boost the performance of self-supervised action recognition, and can be used for video retrieval. Furthermore, we also apply SpeedNet for generating time-varying, adaptive video speedups, which can allow viewers to watch videos faster, but with less of the jittery, unnatural motions typical to videos that are sped up uniformly.
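The self-supervised setup, in which labels come only from how clips are sampled, can be sketched as follows (a minimal illustration; SpeedNet itself is a deep space-time network, not shown here):

```python
import numpy as np

def make_speediness_example(frames, sped_up, stride=2, clip_len=8):
    """Build one training example for a binary speediness classifier:
    sample a clip at normal rate (label 0) or at `stride`x rate to
    simulate a sped-up video (label 1). No manual annotation is needed,
    mirroring the self-supervised training described above."""
    step = stride if sped_up else 1
    clip = frames[: clip_len * step : step]
    return clip, int(sped_up)

# Toy "video" of 100 frames, represented here by frame indices.
video = np.arange(100)
fast_clip, fast_label = make_speediness_example(video, sped_up=True)
normal_clip, normal_label = make_speediness_example(video, sped_up=False)
```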
    Layered Neural Rendering for Retiming People in Video
    Erika Lu
    Tali Dekel
    Weidi Xie
    Andrew Zisserman
    William T. Freeman
    ACM Transactions on Graphics (Proc. SIGGRAPH Asia) (2020)
    We present a method for retiming people in an ordinary, natural video---manipulating and editing the time in which different motions of individuals in the video occur. We can temporally align different motions, change the speed of certain actions (speeding up/slowing down, or entirely "freezing" people), or "erase" selected people from the video altogether. We achieve these effects computationally via a dedicated learning-based layered video representation, where each frame in the video is decomposed into separate RGBA layers, representing the appearance of different people in the video. A key property of our model is that it not only disentangles the direct motions of each person in the input video, but also correlates each person automatically with the scene changes they generate---e.g., shadows, reflections, and motion of loose clothing. The layers can be individually retimed and recombined into a new video, allowing us to achieve realistic, high-quality renderings of retiming effects for real-world videos depicting complex actions and involving multiple individuals, including dancing, trampoline jumping, or group running.
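The layered representation is recombined with the standard "over" operator; retiming amounts to resampling each person's layer in time before compositing. A minimal compositing sketch, assuming float RGBA layers in [0, 1] ordered back to front:

```python
import numpy as np

def composite_layers(layers):
    """Composite per-person RGBA layers back-to-front with the 'over'
    operator. layers: list of (H, W, 4) float arrays in [0, 1]."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # Each layer covers what is behind it in proportion to its alpha.
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# Toy 1x1 frame: opaque red background, half-transparent green person.
back = np.array([[[1.0, 0.0, 0.0, 1.0]]])
front = np.array([[[0.0, 1.0, 0.0, 0.5]]])
frame = composite_layers([back, front])
```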
    We present a model for isolating and enhancing speech of desired speakers in a video. The input is a video with one or more people speaking, where the speech of interest is interfered by other speakers and/or background noise. We leverage both audio and visual features for this task, which are fed into a joint audio-visual source separation model we designed and trained using thousands of hours of video segments with clean speech from our new dataset, AVSpeech-90K. We present results for various real, practical scenarios involving heated debates and interviews, noisy bars and screaming children, only requiring users to specify the face of the person in the video whose speech they would like to isolate.
    We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest).
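Separation models of this kind typically predict a time-frequency mask that is applied to the mixture spectrogram to isolate the target speaker. A toy sketch of that masking step (the "ideal ratio mask" below is an oracle used only to illustrate the formulation; the network described above predicts masks from audio-visual features):

```python
import numpy as np

def apply_separation_mask(mixture_spec, mask):
    """Isolate a target source by weighting each time-frequency bin of
    the mixture spectrogram by a predicted mask value."""
    return mask * mixture_spec

# Toy 2x3 magnitude spectrograms: target speech plus interference.
target = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0]])
noise = np.array([[0.0, 4.0, 0.0], [5.0, 0.0, 1.0]])
mixture = target + noise

# Oracle "ideal ratio mask": fraction of each bin belonging to the target.
ideal_mask = target / np.maximum(mixture, 1e-8)
estimate = apply_separation_mask(mixture, ideal_mask)
```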
    On the Effectiveness of Visible Watermarks
    Bill Freeman
    Ce Liu
    Tali Dekel
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
    Visible watermarking is a widely-used technique for marking and protecting copyrights of many millions of images on the web, yet it suffers from an inherent security flaw—watermarks are typically added in a consistent manner to many images. We show that this consistency allows the watermark to be automatically estimated and the original images to be recovered with high accuracy. Specifically, we present a generalized multi-image matting algorithm that takes a watermarked image collection as input and automatically estimates the “foreground” (watermark), its alpha matte, and the “background” (original) images. Since such an attack relies on the consistency of watermarks across an image collection, we explore and evaluate how it is affected by various types of inconsistencies in the watermark embedding that could potentially be used to make watermarking more secure. We demonstrate the algorithm on stock imagery available on the web, and provide extensive quantitative analysis on synthetic watermarked data. A key takeaway message of this paper is that visible watermarks should be designed to not only be robust against removal from a single image, but to be more resistant to mass-scale removal from image collections as well.
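The first step of the consistency attack, estimating the watermark from many images, can be illustrated in gradient space: with varying backgrounds, the per-pixel median of image gradients converges to the watermark's (alpha-blended) gradients. A crude sketch of that initial estimate only; the full method solves a multi-image matting problem:

```python
import numpy as np

def estimate_watermark_gradients(images):
    """Median of horizontal/vertical image gradients across a collection.
    Varying backgrounds cancel out, leaving the consistent watermark."""
    gx = [np.diff(im, axis=1) for im in images]
    gy = [np.diff(im, axis=0) for im in images]
    return np.median(np.stack(gx), axis=0), np.median(np.stack(gy), axis=0)

# Toy collection: the same watermark over different flat backgrounds.
watermark = np.array([[0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
collection = [watermark + bg for bg in (0.0, 5.0, 9.0)]
est_gx, est_gy = estimate_watermark_gradients(collection)
```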
    A World of Movement
    Fredo Durand
    William T. Freeman
    Scientific American, 312, no. 1 (2015)
    A Computational Approach for Obstruction-Free Photography
    Tianfan Xue
    Ce Liu
    William T. Freeman
    ACM Transactions on Graphics, 34, no. 4 (Proc. SIGGRAPH) (2015)
    We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
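The core intuition, that the background and the obstruction move differently across the sequence, can be illustrated with a toy sketch: once the frames are aligned to the background's motion, a robust temporal median suppresses the moving obstruction (the actual method decomposes dense motion fields and jointly optimizes both layers):

```python
import numpy as np

def recover_background(aligned_frames):
    """Per-pixel temporal median over frames already aligned to the
    background; an obstruction that drifts across pixels is rejected
    as an outlier at each pixel."""
    return np.median(np.stack(aligned_frames), axis=0)

# Toy 1x5 background with a bright "fence" pixel at a different
# position in each (background-aligned) frame.
background = np.ones((1, 5))
frames = []
for i in range(3):
    f = background.copy()
    f[0, i] = 9.0  # moving obstruction
    frames.append(f)
recovered = recover_background(frames)
```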
    Best-Buddies Similarity for Robust Template Matching
    Tali Dekel
    Shaul Oron
    Shai Avidan
    William T. Freeman
    IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2015)
    We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on a count of Best Buddies Pairs (BBPs)—pairs of points in which each one is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset.
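The measure itself is simple to state: count the pairs of points that are mutual nearest neighbors and normalize by the smaller set size. A minimal NumPy sketch:

```python
import numpy as np

def best_buddies_similarity(p, q):
    """Fraction of Best Buddies Pairs: (p_i, q_j) such that q_j is the
    nearest neighbor of p_i AND p_i is the nearest neighbor of q_j,
    normalized by the smaller set size."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    nn_pq = d.argmin(axis=1)  # nearest q for each p
    nn_qp = d.argmin(axis=0)  # nearest p for each q
    buddies = sum(1 for i, j in enumerate(nn_pq) if nn_qp[j] == i)
    return buddies / min(len(p), len(q))
```

An outlier (e.g. a clutter point far from everything) costs at most one pair, which is what makes the count robust.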
    Visual Vibrometry: Estimating Material Properties from Small Motion in Video
    Abe Davis
    Katherine L. Bouman
    Justin G. Chen
    Fredo Durand
    William T. Freeman
    IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2015)
    The estimation of material properties is important for scene understanding, with many applications in vision, robotics, and structural engineering. This paper connects fundamentals of vibration mechanics with computer vision techniques in order to infer material properties from small, often imperceptible motion in video. Objects tend to vibrate in a set of preferred modes. The shapes and frequencies of these modes depend on the structure and material properties of an object. Focusing on the case where geometry is known or fixed, we show how information about an object’s modes of vibration can be extracted from video and used to make inferences about that object’s material properties. We demonstrate our approach by estimating material properties for a variety of rods and fabrics by passively observing their motion in high-speed and regular frame-rate video.
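The basic measurement underlying this approach is the spectrum of a tiny motion signal extracted from video: resonant modes appear as spectral peaks. A minimal sketch of finding the dominant vibration frequency (the synthetic sine below stands in for a real per-frame motion signal):

```python
import numpy as np

def dominant_frequency(motion_signal, fps):
    """Return the frequency (Hz) of the strongest spectral peak in a
    per-frame motion signal sampled at `fps` frames per second."""
    signal = motion_signal - np.mean(motion_signal)  # drop the DC term
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Toy example: a 12 Hz vibration observed for 2 seconds at 240 fps.
t = np.arange(480) / 240.0
motion = np.sin(2 * np.pi * 12.0 * t)
peak_hz = dominant_frequency(motion, fps=240)
```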
    The Visual Microphone: Passive Recovery of Sound from Video
    Abe Davis
    Neal Wadhwa
    Gautham Mysore
    Fredo Durand
    William T. Freeman
    ACM Transactions on Graphics, 33, no. 4 (Proc. SIGGRAPH) (2014)
    Riesz Pyramids for Fast Phase-Based Video Magnification
    Neal Wadhwa
    Fredo Durand
    William T. Freeman
    IEEE International Conference on Computational Photography (ICCP) (2014)
    Refraction Wiggles for Measuring Fluid Depth and Velocity from Video
    Tianfan Xue
    Neal Wadhwa
    Anat Levin
    Fredo Durand
    William T. Freeman
    Proc. of the European Conference on Computer Vision (ECCV) (2014)
    Unsupervised Joint Object Discovery and Segmentation in Internet Images
    Armand Joulin
    Johannes Kopf
    Ce Liu
    IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2013)
    Phase-based Video Motion Processing
    Neal Wadhwa
    Fredo Durand
    William T. Freeman
    ACM Transactions on Graphics, 32, no. 4 (Proc. SIGGRAPH) (2013)
    Annotation Propagation in Large Image Databases via Dense Image Correspondence
    Ce Liu
    William T. Freeman
    Proc. of the European Conference on Computer Vision (ECCV) (2012)
    Eulerian Video Magnification for Revealing Subtle Changes in the World
    Hao-Yu Wu
    Eugene Shih
    John Guttag
    Fredo Durand
    William T. Freeman
    ACM Transactions on Graphics, 31, no. 4 (Proc. SIGGRAPH) (2012)
    Motion Denoising with Application to Time-lapse Photography
    Ce Liu
    Peter Sand
    Fredo Durand
    William T. Freeman
    IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2011)
    A Comparative Study of Image Retargeting
    Diego Gutierrez
    Olga Sorkine
    Ariel Shamir
    ACM Transactions on Graphics, 29, no. 5 (Proc. SIGGRAPH Asia) (2010)
    Multi-operator Media Retargeting
    Ariel Shamir
    Shai Avidan
    ACM Transactions on Graphics, 28, no. 3 (Proc. SIGGRAPH) (2009)
    Improved Seam Carving for Video Retargeting
    Ariel Shamir
    Shai Avidan
    ACM Transactions on Graphics, 27, no. 3 (Proc. SIGGRAPH) (2008)