Thomas K. Leung
Authored Publications
Directed Diffusion: Direct Placement of Objects in Text-Guided Diffusion Models
Text-guided diffusion models such as DALL-E 2, Imagen, and Stable Diffusion are able to generate an effectively endless variety of images given only a short text prompt describing the desired image content. In many cases the images are of very high quality as well. However, these models often struggle to compose scenes containing several key objects, such as characters in specified positional relationships. Unfortunately, this capability to "direct" the placement of characters and objects both within and across images is crucial in storytelling, as recognized in the literature on film and animation theory. In this work we take a particularly straightforward approach to providing the needed direction, by injecting "activation" at desired positions in the cross-attention maps corresponding to the objects under control, while attenuating the remainder of the map. The resulting approach is a step toward generalizing the applicability of text-guided diffusion models beyond single images to collections of related images, as in storybooks. To the best of our knowledge, our Directed Diffusion method is the first diffusion technique that provides positional control over multiple objects, while making use of an existing pre-trained model and maintaining a coherent blend between the positioned objects and the background. Moreover, it requires only a few lines of code to implement.
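The attention-editing idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' code) of editing one token's cross-attention map: activation is added inside a user-chosen region and the rest of that token's map is attenuated; the function name, tensor layout, and renormalization step are all assumptions.

```python
import torch

def direct_attention(attn, token_idx, region_mask, boost=1.0, atten=0.5):
    """Edit a cross-attention map to place one object (illustrative sketch).

    attn:        (heads, num_pixels, num_tokens) cross-attention weights.
    token_idx:   index of the text token describing the directed object.
    region_mask: (num_pixels,) bool mask, True inside the target region.
    """
    edited = attn.clone()
    col = edited[:, :, token_idx]
    # Inject activation inside the region, attenuate everywhere else.
    col = torch.where(region_mask, col + boost, col * atten)
    edited[:, :, token_idx] = col
    # Renormalize so each pixel's attention over tokens still sums to 1.
    return edited / edited.sum(dim=-1, keepdim=True)
```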
Recognizing Multimodal Entailment (tutorial at ACL 2021)
Afsaneh Hajiamin Shirazi
Blaž Bratanič
Christina Liu
Gabriel Fedrigo Barcik
Georg Fritz Osang
Jared Frank
Lucas Smaira
Ricardo Abasolo Marino
Roma Patel
Vaiva Imbrasaite
(2021) (to appear)
Geo-Aware Networks for Fine-Grained Recognition
Grace Chu
Brian Potetz
Weijun Wang
Andrew Howard
Fernando Andres Brucher
ICCV 2019
Fine-grained recognition distinguishes among categories with subtle visual differences. Information beyond the image itself has been used to help identify fine-grained categories, yet there has been little effort to exploit geolocation information to improve fine-grained classification accuracy. Our contributions to this field are twofold. First, to the best of our knowledge, this is the first paper to systematically examine ways of incorporating geolocation information into fine-grained image classification, from geolocation priors, to post-processing, to feature modulation. Second, since no existing fine-grained dataset has complete geolocation information, we introduce, and will make public, two fine-grained datasets with geolocation that complement the popular iNaturalist and YFCC100M datasets. On these datasets, the best geo-aware network achieves an 8.9% top-1 accuracy increase on iNaturalist and a 5.9% increase on YFCC100M compared with image-only models. Moreover, for small baseline models such as MobileNet V2, the best geo-aware network gives 12.6% higher top-1 accuracy than the image-only model, surpassing even Inception V3 models without geolocation. Our work provides an incentive to use geolocation information to improve fine-grained recognition for both server and on-device models.
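As a rough illustration of the feature-modulation strategy mentioned in the abstract, the sketch below conditions image features on a geolocation embedding via a learned scale and shift; the module name and architecture details are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GeoFeatureModulation(nn.Module):
    """Hypothetical sketch: modulate image features with geolocation."""

    def __init__(self, geo_dim, feat_dim, hidden=64):
        super().__init__()
        self.geo_net = nn.Sequential(nn.Linear(geo_dim, hidden), nn.ReLU())
        self.scale = nn.Linear(hidden, feat_dim)
        self.shift = nn.Linear(hidden, feat_dim)

    def forward(self, img_feat, geo):
        # img_feat: (B, feat_dim) image embedding, geo: (B, geo_dim) location.
        h = self.geo_net(geo)
        # Scale and shift the image features based on where the photo was taken.
        return img_feat * (1 + self.scale(h)) + self.shift(h)
```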
MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
Recent deep networks are capable of memorizing the entire training data even when the labels are completely random. To overcome overfitting on corrupted labels, we propose a novel technique that learns another neural network, called MentorNet, to supervise the training of the base deep network, the StudentNet. During training, MentorNet provides a curriculum (a sample weighting scheme) that lets StudentNet focus on samples whose labels are probably correct. Unlike existing curricula, which are usually predefined by human experts, MentorNet learns a data-driven curriculum dynamically with StudentNet. Experimental results demonstrate that our approach can significantly improve the generalization performance of deep networks trained on corrupted data. Notably, to the best of our knowledge, we achieve the best published result on WebVision, a large benchmark containing 2.2 million images with real-world noisy labels.
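The curriculum amounts to a per-sample weighting of the student's loss. A minimal sketch, assuming MentorNet has already produced one weight per example (high when the label is probably correct); names and the normalization are assumptions:

```python
import torch

def mentor_weighted_loss(student_logits, labels, mentor_weights):
    """Weighted average of per-sample losses (illustrative sketch)."""
    # Per-sample cross-entropy, shape (B,).
    per_sample = torch.nn.functional.cross_entropy(
        student_logits, labels, reduction="none")
    # Down-weight samples the mentor considers likely mislabeled.
    denom = mentor_weights.sum().clamp(min=1e-8)
    return (mentor_weights * per_sample).sum() / denom
```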
Towards a Semantic Perceptual Image Metric
Johannes Ballé
Sung Jin Hwang
Sergey Ioffe
Sean O'Malley
Charles Rosenberg
2018 25th IEEE International Conference on Image Processing (ICIP)
We present a full-reference, perceptual image metric based on VGG-16, an artificial neural network trained for object classification. We fit the metric to a new database of 140k unique images annotated with ground truth by human raters who received minimal instruction. The resulting metric shows competitive performance on TID2013, a database widely used to assess image quality assessment methods. More interestingly, it shows strong responses to objects potentially carrying semantic relevance, such as faces and text, which we demonstrate using a visualization technique and ablation experiments. In effect, the metric appears to model a higher influence of semantic context on judgements, which we observe particularly in untrained raters. As the vast majority of users of image processing systems are unfamiliar with Image Quality Assessment (IQA) tasks, these findings may have significant impact on real-world applications of perceptual metrics.
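The general recipe of a full-reference, VGG-based perceptual distance can be sketched as below; the layer selection and unweighted squared-error pooling are illustrative assumptions, not the fitted metric from the paper.

```python
import torch
import torchvision

# Pre-trained VGG-16 feature extractor (frozen, evaluation mode).
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()

def perceptual_distance(ref, dist, layers=(3, 8, 15, 22)):
    """Distance between VGG activations of two images (illustrative sketch).

    ref, dist: (B, 3, H, W) ImageNet-normalized reference/distorted images.
    layers:    indices of ReLU layers whose activations are compared.
    """
    d, x, y = 0.0, ref, dist
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in layers:
                # Accumulate squared feature differences at selected depths.
                d = d + torch.mean((x - y) ** 2)
    return d
```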
No Fuss Distance Metric Learning using Proxies
Alexander Toshev
Sergey Ioffe
International Conference on Computer Vision (ICCV), IEEE (2017)
We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, supervision for this problem is expressed in the form of sets of points that follow an ordinal relationship: an anchor point x is similar to a set of positive points Y and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized.
While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need to find informative triplets, which is usually addressed by a variety of tricks such as increasing the batch size and hard or semi-hard triplet mining. Even with these tricks, the convergence of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points, which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, it improves on state-of-the-art results for three standard zero-shot learning datasets by up to 15 percentage points, while converging three times as fast as other triplet-based losses.
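A minimal sketch of a proxy-based loss in this spirit, using one learned proxy per class and a softmax over negative squared distances (a common simplification of the NCA-style formulation; normalization and hyperparameters are assumptions):

```python
import torch
import torch.nn.functional as F

class ProxyLoss(torch.nn.Module):
    """One learned proxy per class; pull embeddings toward their class proxy
    and push them from the others (illustrative sketch)."""

    def __init__(self, num_classes, dim):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, embeddings, labels):
        x = F.normalize(embeddings, dim=-1)
        p = F.normalize(self.proxies, dim=-1)
        # Squared Euclidean distance from each embedding to every proxy.
        dist = torch.cdist(x, p) ** 2          # (B, num_classes)
        # Softmax over negative distances: the correct proxy should be closest.
        return F.cross_entropy(-dist, labels)
```

Because the proxies are ordinary parameters, every batch compares each sample against all classes at once, sidestepping the triplet-mining problem described above.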
Improving the Robustness of Deep Neural Networks via Stability Training
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture [11] against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
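The training objective described above can be sketched as a task loss on the clean input plus a penalty that keeps the network's output on a distorted copy close to the clean output; the L2 penalty and the weight alpha below are assumptions.

```python
import torch

def stability_training_loss(model, x, x_distorted, labels, alpha=0.01):
    """Task loss plus output-stability penalty (illustrative sketch).

    x:           clean images, (B, 3, H, W).
    x_distorted: the same images after compression/rescaling/cropping.
    """
    out_clean = model(x)
    out_dist = model(x_distorted)
    # Standard classification loss on the clean input.
    task = torch.nn.functional.cross_entropy(out_clean, labels)
    # Penalize the network for changing its output under small distortions.
    stability = torch.mean((out_clean - out_dist) ** 2)
    return task + alpha * stability
```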
Pose Embeddings: A Deep Architecture for Learning to Match Human Poses
Greg Mori
Nisarg Kothari
Alexander Toshev
arXiv (2015)
We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.
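A minimal sketch of the triplet-based distance criterion mentioned above, assuming precomputed embeddings for an anchor image, a same-pose positive, and a different-pose negative (the margin value is an assumption):

```python
import torch
import torch.nn.functional as F

def pose_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss on pose embeddings (illustrative sketch).

    Same-pose pairs should be closer than different-pose pairs by a margin.
    All inputs are (B, D) embedding batches.
    """
    d_pos = (anchor - positive).pow(2).sum(dim=-1)
    d_neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```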
Learning Fine-grained Image Similarity with Deep Ranking
Jiang Wang
Chuck Rosenberg
Jingbin Wang
James Philbin
Bo Chen
Ying Wu
CVPR 2014, IEEE
Learning fine-grained image similarity is a challenging task: it needs to capture both between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn a similarity metric directly from images, giving it higher learning capability than models based on hand-crafted features. A novel multiscale network structure is developed to describe the images effectively, and an efficient triplet sampling algorithm is proposed to train the model with distributed asynchronous stochastic gradient descent. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features as well as deep classification models.
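The multiscale structure can be sketched as a deep branch capturing fine detail combined with a shallower branch on a downsampled input, concatenated into a single embedding; the branch configurations below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleEmbedding(nn.Module):
    """Two-branch multiscale embedding network (illustrative sketch)."""

    def __init__(self, dim=128):
        super().__init__()
        # Deep branch: full-resolution input, captures fine-grained detail.
        self.deep = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        # Shallow branch: downsampled input, captures coarse appearance.
        self.shallow = nn.Sequential(
            nn.AvgPool2d(4),
            nn.Conv2d(3, 16, 5, stride=4, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))

    def forward(self, x):
        # Concatenate both scales into one embedding for the ranking loss.
        z = torch.cat([self.deep(x), self.shallow(x)], dim=-1)
        return nn.functional.normalize(z, dim=-1)
```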
Deep Convolutional Ranking for Multilabel Image Annotation
Yunchao Gong
Yangqing Jia
Alexander Toshev
Sergey Ioffe
International Conference on Learning Representations (2014) (to appear)