Alireza Fathi
Alireza is currently a staff research scientist at Google Research.
He was a Postdoctoral Fellow in Fei-Fei Li's group in the Computer Science Department at Stanford University.
He received his Ph.D. degree from Georgia Institute of Technology, and his B.Sc. degree from Sharif University of Technology.
Personal webpage: http://ai.stanford.edu/~alireza/
Authored Publications
Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation
Kyle Genova
Xiaoqi Yin
Leonidas Guibas
Frank Dellaert
Conference on Computer Vision and Pattern Recognition (2022)
Abstract
We present Panoptic Neural Fields (PNF), an object-aware neural scene representation that decomposes a scene into a set of objects (things) and background (stuff). Each object is represented by an oriented 3D bounding box and a multi-layer perceptron (MLP) that takes position, direction, and time and outputs density and radiance. The background stuff is represented by a similar MLP that additionally outputs semantic labels. Object MLPs are instance-specific and can thus be smaller and faster than in previous object-aware approaches, while still leveraging category-specific priors incorporated via meta-learned initialization. Our model builds a panoptic radiance field representation of any scene from color images alone. We use off-the-shelf algorithms to predict camera poses, object tracks, and 2D image semantic segmentations. Then we jointly optimize the MLP weights and bounding box parameters using analysis-by-synthesis with self-supervision from color images and pseudo-supervision from predicted semantic segmentations. In experiments with real-world dynamic scenes, we find that our model can be used effectively for several tasks, such as novel view synthesis, 2D panoptic segmentation, 3D scene editing, and multiview depth prediction.
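As a concrete illustration of the representation, here is a minimal PyTorch sketch of the kind of MLP described above: it maps position, view direction, and time to density and radiance, and the background (stuff) MLP additionally emits semantic logits. The layer widths, depth, and absence of positional encoding are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the per-object and background MLPs described above.
# Widths, depth, and the lack of positional encoding are assumptions.
import torch
import torch.nn as nn

class FieldMLP(nn.Module):
    """Maps (position, view direction, time) to an output vector."""
    def __init__(self, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, xyz, view_dir, t):
        return self.net(torch.cat([xyz, view_dir, t], dim=-1))

num_classes = 20
object_mlp = FieldMLP(out_dim=1 + 3)                           # density + RGB for one object
stuff_mlp = FieldMLP(out_dim=1 + 3 + num_classes, hidden=128)  # plus semantic logits

# Query a few sample points.
xyz, view_dir, t = torch.randn(4, 3), torch.randn(4, 3), torch.randn(4, 1)
out = object_mlp(xyz, view_dir, t)
density = torch.relu(out[..., :1])        # non-negative density
rgb = torch.sigmoid(out[..., 1:])         # radiance in [0, 1]
stuff_out = stuff_mlp(xyz, view_dir, t)   # density, RGB, and semantic logits
```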
Pillar-based Object Detection for Autonomous Driving
Yue Wang
Justin Solomon
ECCV (2020)
Abstract
We present a simple and flexible object detection framework optimized for autonomous driving. Building on the observation that point clouds in this application are extremely sparse, we propose a practical pillar-based approach to fix the imbalance issue caused by anchors. In particular, our algorithm incorporates a cylindrical projection into multi-view feature learning, predicts bounding box parameters per pillar rather than per point or per anchor, and includes an aligned pillar-to-point projection module to improve the final prediction. Our anchor-free approach avoids the hyperparameter search associated with past methods, simplifying 3D object detection while significantly improving upon the state of the art.
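For illustration, here is a minimal PyTorch sketch of per-pillar prediction: points are mean-pooled into a bird's-eye-view pillar grid, and a small head regresses one box and score per pillar, so no anchor boxes or anchor hyperparameters are involved. The grid size, feature dimensions, and box parameterization are assumptions, and the cylindrical-projection branch and pillar-to-point module from the paper are omitted.

```python
# Sketch of anchor-free, per-pillar box prediction; sizes are illustrative.
import torch
import torch.nn as nn

def scatter_points_to_pillars(xyz, feats, grid=(200, 200),
                              x_range=(-80., 80.), y_range=(-80., 80.)):
    """Mean-pool point features into a bird's-eye-view pillar grid."""
    H, W = grid
    ix = ((xyz[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * W).long().clamp(0, W - 1)
    iy = ((xyz[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * H).long().clamp(0, H - 1)
    flat = iy * W + ix                                             # pillar index per point
    C = feats.shape[1]
    pillar_sum = torch.zeros(H * W, C).index_add_(0, flat, feats)
    counts = torch.zeros(H * W).index_add_(0, flat, torch.ones(len(flat)))
    pillar_mean = pillar_sum / counts.clamp(min=1).unsqueeze(1)
    return pillar_mean.view(H, W, C)

class PillarBoxHead(nn.Module):
    """Predicts one box (center offset, size, heading) and a score per pillar."""
    def __init__(self, in_dim=64):
        super().__init__()
        self.head = nn.Linear(in_dim, 7 + 1)   # (dx, dy, dz, l, w, h, yaw) + objectness

    def forward(self, pillar_feats):            # (H, W, C)
        return self.head(pillar_feats)          # (H, W, 8)

# Usage with random points and features.
points = torch.randn(1000, 3) * 40
feats = torch.randn(1000, 64)
preds = PillarBoxHead()(scatter_points_to_pillars(points, feats))
```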
An LSTM Approach to Temporal 3D Object Detection in LiDAR Point Clouds
Rui Huang
Wanyue Zhang
ECCV (2020)
Abstract
Detecting objects in 3D LiDAR data is a core technology for autonomous driving and other robotics applications. Although LiDAR data is acquired over time, most 3D object detection algorithms propose object bounding boxes independently for each frame and neglect the useful information available in the temporal domain. To address this problem, in this paper we propose a sparse LSTM-based multi-frame 3D object detection algorithm. We use a U-Net-style 3D sparse convolution network to extract features for each frame's LiDAR point cloud. These features are fed to the LSTM module together with the hidden and memory features from the previous frame to predict the 3D objects in the current frame as well as the hidden and memory features that are passed to the next frame. Experiments on the Waymo Open Dataset show that our algorithm outperforms the traditional frame-by-frame approach by 7.5% mAP@0.7 and other multi-frame approaches by 1.2%, while using less memory and computation per frame. To the best of our knowledge, this is the first work to use an LSTM for 3D object detection in sparse point clouds.
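A minimal sketch of the temporal recurrence is given below, assuming a dense LSTMCell over a globally pooled per-frame feature as a stand-in for the paper's sparse-convolutional variant; the backbone and box head are placeholders.

```python
# Sketch: per-frame features combined with hidden/memory state from the
# previous frame. The dense LSTMCell and pooled features are assumptions.
import torch
import torch.nn as nn

class TemporalDetectorSketch(nn.Module):
    def __init__(self, feat_dim=128, num_box_params=7):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                      nn.Linear(feat_dim, feat_dim))  # stand-in for the sparse U-Net
        self.lstm = nn.LSTMCell(feat_dim, feat_dim)
        self.box_head = nn.Linear(feat_dim, num_box_params)

    def forward(self, frames):
        """frames: list of (N_i, 3) point clouds, one per time step."""
        h = c = None
        outputs = []
        for points in frames:
            frame_feat = self.backbone(points).max(dim=0).values.unsqueeze(0)  # (1, feat_dim)
            if h is None:
                h = torch.zeros_like(frame_feat)
                c = torch.zeros_like(frame_feat)
            h, c = self.lstm(frame_feat, (h, c))    # carry memory to the next frame
            outputs.append(self.box_head(h))
        return outputs

# Usage on three synthetic frames.
detector = TemporalDetectorSketch()
boxes_per_frame = detector([torch.randn(500, 3) for _ in range(3)])
```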
Virtual Multi-view Fusion for 3D Semantic Segmentation
Xiaoqi (Michael) Yin
Brian Brewington
European Conference on Computer Vision (2020)
Abstract
Semantic segmentation of 3D meshes is an important problem for 3D scene understanding. In this paper we revisit the classic multiview representation of 3D meshes and study several techniques that make it effective for 3D semantic segmentation of meshes. Given a 3D mesh reconstructed from RGBD sensors, our method selects virtual views of the 3D mesh and renders multiple 2D channels for training an effective 2D semantic segmentation model. Features from the multiple per-view predictions are finally fused on the 3D mesh vertices to predict mesh semantic segmentation labels. Using the large-scale indoor 3D semantic segmentation benchmark ScanNet, we show that our virtual views enable more effective training of 2D semantic segmentation networks than previous multiview approaches. When the 2D per-pixel predictions are aggregated on 3D surfaces, our virtual multiview fusion method achieves significantly better 3D semantic segmentation results than all prior multiview approaches and is competitive with recent 3D convolution approaches.
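The fusion step can be sketched as follows: per-view 2D class probabilities are accumulated on the mesh vertices visible in each view and then averaged. Visibility testing and the rendering of virtual views are omitted, and the uniform averaging is an assumption for illustration.

```python
# Sketch of fusing per-view 2D predictions onto mesh vertices.
import torch

def fuse_views_on_vertices(num_vertices, num_classes, views):
    """views: list of (vertex_ids, class_probs) pairs, where vertex_ids[i] is the
    mesh vertex visible at pixel i and class_probs[i] is that pixel's prediction."""
    accum = torch.zeros(num_vertices, num_classes)
    counts = torch.zeros(num_vertices)
    for vertex_ids, class_probs in views:
        accum.index_add_(0, vertex_ids, class_probs)
        counts.index_add_(0, vertex_ids, torch.ones(len(vertex_ids)))
    fused = accum / counts.clamp(min=1).unsqueeze(1)
    return fused.argmax(dim=1)          # one semantic label per vertex

# Usage with two synthetic views over a 1000-vertex mesh and 20 classes.
views = [(torch.randint(0, 1000, (5000,)), torch.rand(5000, 20)) for _ in range(2)]
labels = fuse_views_on_vertices(1000, 20, views)
```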
DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes
Mahyar Najibi
Zhichao Lu
Vivek Mansing Rathod
Larry S. Davis
CVPR (2020)
Abstract
We propose DOPS, a fast single-stage 3D object detection method for LiDAR data. Previous methods often make domain-specific design decisions, for example projecting points into a bird's-eye-view image in autonomous driving scenarios. In contrast, we propose a general-purpose method that works on both indoor and outdoor scenes. The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes. 3D bounding box parameters are estimated in one pass for every point, aggregated through graph convolutions, and fed into a branch of the network that predicts latent codes representing the shape of each detected object. The latent shape space and shape decoder are learned on a synthetic dataset and then used as supervision for the end-to-end training of the 3D object detection pipeline. Thus our model is able to extract shapes without access to ground-truth shape information in the target dataset. In experiments, we find that our proposed method achieves state-of-the-art results on object detection in ScanNet scenes by ~5% and top results on the Waymo Open Dataset by 3.4%, while reproducing the shapes of detected cars.
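A minimal sketch of the single-pass heads follows, assuming a simple neighbor-averaging step in place of the paper's graph convolutions; the feature sizes, box parameterization, and given neighbor graph are illustrative.

```python
# Sketch: per-point box regression, neighbor aggregation, and a shape-code head.
import torch
import torch.nn as nn

class DOPSHeadSketch(nn.Module):
    def __init__(self, feat_dim=64, shape_code_dim=32):
        super().__init__()
        self.box_head = nn.Linear(feat_dim, 7)          # center, size, heading per point
        self.shape_head = nn.Linear(feat_dim, shape_code_dim)

    def forward(self, point_feats, neighbor_idx):
        """point_feats: (N, feat_dim); neighbor_idx: (N, K) indices of each point's neighbors."""
        boxes = self.box_head(point_feats)               # (N, 7), one box per point
        neighbor_boxes = boxes[neighbor_idx]             # (N, K, 7)
        boxes = neighbor_boxes.mean(dim=1)               # stand-in for graph-convolution aggregation
        shape_codes = self.shape_head(point_feats)       # decoded later into a 3D shape
        return boxes, shape_codes

# Usage with random features and a random 8-neighbor graph.
N = 2048
head = DOPSHeadSketch()
boxes, codes = head(torch.randn(N, 64), torch.randint(0, N, (N, 8)))
```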
3D-MPA: Multi Proposal Aggregation for 3D Semantic Instance Segmentation
Francis Engelmann
Bastian Leibe
Matthias Niessner
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020)
Abstract
We present 3D-MPA, a method for instance segmentation on 3D point clouds.
Given an input point cloud, we propose an object-centric approach where each point votes for its object center.
We sample object proposals from the predicted object centers.
Then, we learn proposal features from grouped point features that voted for the same object center.
A graph convolutional network introduces inter-proposal relations, providing higher-level feature learning in addition to the lower-level point features.
Each proposal comprises a semantic label, a set of associated points over which we define a foreground-background mask, an objectness score, and aggregation features.
Previous works usually perform non-maximum suppression (NMS) over proposals to obtain the final object detections or semantic instances.
However, NMS can discard potentially correct predictions.
Instead, our approach keeps all proposals and groups them together based on the learned aggregation features.
We show that grouping proposals improves over NMS and outperforms previous state-of-the-art methods on the tasks of 3D object detection and semantic instance segmentation on the ScanNetV2 benchmark and the S3DIS dataset.
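The grouping step can be sketched as follows: proposals whose learned aggregation features are closer than a threshold fall into the same group (simple union-find over pairwise distances), and each group becomes one instance. The Euclidean distance and the threshold are illustrative assumptions, not the paper's exact grouping procedure.

```python
# Sketch of grouping proposals by aggregation-feature similarity instead of NMS.
import torch

def group_proposals(agg_feats, threshold=0.5):
    """agg_feats: (P, D) learned aggregation features, one row per proposal.
    Returns a group id per proposal; each group becomes one object instance."""
    P = agg_feats.shape[0]
    parent = list(range(P))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    dist = torch.cdist(agg_feats, agg_feats)      # (P, P) pairwise distances
    for i in range(P):
        for j in range(i + 1, P):
            if dist[i, j] < threshold:
                parent[find(i)] = find(j)         # merge the two groups
    return torch.tensor([find(i) for i in range(P)])

# Usage: 100 proposals with 16-D aggregation features.
groups = group_proposals(torch.randn(100, 16), threshold=1.0)
```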
Floors are flat: Leveraging Semantics for Reliable and Real-Time Surface Normal Prediction
Proceedings of the IEEE International Conference on Computer Vision Workshops (2019)
Abstract
We propose four insights that help to significantly improve the performance of deep learning models that predict surface normals and semantic labels from a single RGB image.
These insights are: (1) denoise the "ground truth" surface normals in the training set to ensure consistency with the semantic labels; (2) concurrently train on a mix of real and synthetic data, instead of pretraining on synthetic data and fine-tuning on real data; (3) jointly predict normals and semantics using a shared model, but only backpropagate errors on pixels that have valid training labels; (4) slim down the model and use grayscale instead of color inputs. Despite the simplicity of these steps, we demonstrate consistently improved state-of-the-art results on several datasets, using a model that runs at 12 fps on a standard mobile phone.
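Insight (3) can be sketched as a masked joint loss in which each task's error is averaged only over pixels with valid labels. The cosine loss for normals and cross-entropy for semantics are common defaults assumed here, not necessarily the paper's exact losses.

```python
# Sketch of a joint normals + semantics loss restricted to valid pixels.
import torch
import torch.nn.functional as F

def masked_joint_loss(pred_normals, pred_logits, gt_normals, gt_labels,
                      normal_valid, ignore_index=255):
    """normal_valid: (B, H, W) bool mask of pixels with trustworthy ground-truth
    normals; gt_labels uses ignore_index for pixels without a semantic label."""
    cos = F.cosine_similarity(pred_normals, gt_normals, dim=1)   # (B, H, W)
    normal_loss = (1.0 - cos)[normal_valid].mean()               # only valid normal pixels
    sem_loss = F.cross_entropy(pred_logits, gt_labels, ignore_index=ignore_index)
    return normal_loss + sem_loss

# Usage with synthetic tensors (batch 2, 20 classes, 64x64 images).
B, C, H, W = 2, 20, 64, 64
loss = masked_joint_loss(
    pred_normals=F.normalize(torch.randn(B, 3, H, W), dim=1),
    pred_logits=torch.randn(B, C, H, W),
    gt_normals=F.normalize(torch.randn(B, 3, H, W), dim=1),
    gt_labels=torch.randint(0, C, (B, H, W)),
    normal_valid=torch.rand(B, H, W) > 0.2,
)
```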
The Devil is in the Decoder: Classification, Regression and GANs
Zbigniew Wojna
Vittorio Ferrari
Nathan Silberman
Liang-chieh Chen
IJCV (2019) (to appear)
Abstract
Many machine vision applications require predictions for every pixel of the input image (for example, semantic segmentation and boundary detection). Models for such problems usually consist of encoders, which decrease spatial resolution while learning a high-dimensional representation, followed by decoders, which recover the original input resolution and produce low-dimensional predictions. While encoders have been studied rigorously, relatively few studies address the decoder side. This paper therefore presents an extensive comparison of a variety of decoders for a variety of pixel-wise tasks ranging from classification and regression to synthesis. Our contributions are: (1) Decoders matter: we observe significant variance in results between different types of decoders on various problems. (2) We introduce new residual-like connections for decoders. (3) We introduce a novel decoder: bilinear additive upsampling. (4) We explore prediction artefacts.
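Contribution (3), bilinear additive upsampling, can be sketched as bilinear spatial upsampling followed by reducing the channel count by averaging groups of consecutive channels; the group size of 4 below is an illustrative choice.

```python
# Sketch of bilinear additive upsampling: upsample spatially, then collapse
# groups of consecutive channels (averaging; a sum differs only by a constant).
import torch
import torch.nn.functional as F

def bilinear_additive_upsampling(x, channel_group=4, scale=2):
    """x: (B, C, H, W) with C divisible by channel_group.
    Returns (B, C // channel_group, H * scale, W * scale)."""
    B, C, H, W = x.shape
    assert C % channel_group == 0
    up = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    up = up.view(B, C // channel_group, channel_group, H * scale, W * scale)
    return up.mean(dim=2)       # average each group of channels into one

# Usage: a 64-channel feature map becomes a 16-channel map at twice the resolution.
y = bilinear_additive_upsampling(torch.randn(1, 64, 32, 32))
```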
Instance Embedding Transfer to Unsupervised Video Object Segmentation
Siyang Li
Alexey Vorobyov
Qin Huang
C.-C. Jay Kuo
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)
Abstract
In this work we propose an unsupervised video object segmentation method that transfers the knowledge of image-based instance embedding networks. The instance embedding networks produce an embedding for each pixel and identify all pixels belonging to the same object. We observe that instance embeddings trained on static images are stable over consecutive video frames. Thus, we apply the trained networks to video object segmentation without model retraining or online fine-tuning, and combine them with objectness scores from an instance segmentation model and optical flow features. The stability of instance embeddings is analyzed, and instability mitigation is studied. Our method outperforms state-of-the-art unsupervised segmentation methods on the DAVIS dataset and is competitive on the SegTrack-v2 dataset.
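A minimal sketch of reusing static-image instance embeddings across frames: pixels in a new frame are assigned to the object whose seed embedding, taken from a reference frame, is most similar. The cosine-similarity assignment and the single seed per object are illustrative assumptions; the objectness and optical-flow cues are omitted.

```python
# Sketch of propagating object masks via per-pixel embedding similarity.
import torch
import torch.nn.functional as F

def propagate_by_embedding(frame_embeddings, seed_embeddings):
    """frame_embeddings: (H, W, D) per-pixel embeddings of the current frame.
    seed_embeddings: (K, D) one embedding per object (plus background) from a
    reference frame. Returns an (H, W) map of object ids."""
    flat = F.normalize(frame_embeddings.reshape(-1, frame_embeddings.shape[-1]), dim=1)
    seeds = F.normalize(seed_embeddings, dim=1)
    similarity = flat @ seeds.t()                   # (H*W, K) cosine similarities
    return similarity.argmax(dim=1).reshape(frame_embeddings.shape[:2])

# Usage: a 64x64 frame with 16-D embeddings and 3 objects.
mask = propagate_by_embedding(torch.randn(64, 64, 16), torch.randn(3, 16))
```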
Abstract
We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform optical flow based methods. Finally, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.
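The copy mechanism can be sketched as attention over reference-frame pixels: each target pixel forms a softmax pointer over the reference frame and copies a weighted combination of its colors; at test time the same pointer weights can propagate segmentation labels instead. The dot-product similarity and flattened per-pixel features are assumptions for illustration.

```python
# Sketch of copying colors (or labels) from a reference frame via attention.
import torch

def copy_colors(ref_feats, target_feats, ref_colors, temperature=1.0):
    """ref_feats, target_feats: (N, D) per-pixel features of reference/target frames.
    ref_colors: (N, C) reference-frame colors (or labels at test time)."""
    similarity = target_feats @ ref_feats.t() / temperature    # (N_target, N_ref)
    weights = torch.softmax(similarity, dim=1)                 # pointer distribution
    return weights @ ref_colors                                # predicted target colors

# Usage: 1024 reference and target pixels, 64-D features, 2 color channels.
pred = copy_colors(torch.randn(1024, 64), torch.randn(1024, 64), torch.randn(1024, 2))
```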