VISER: Visual Self-Regularization

Hamid Izadinia Pierre Garrigues


A novel method that harnesses real adversarial examples from unlabeled images as a source of regularization data for learning robust visual representations.



Abstract


In this work, we propose the use of a large set of unlabeled images as a source of regularization data for learning robust visual representations. Given a visual model trained on a labeled dataset in a supervised fashion, we augment our training samples with a large number of unlabeled images and train a semi-supervised model. We demonstrate that the proposed learning approach leverages an abundance of unlabeled images to boost visual recognition performance, which alleviates the need to rely on large labeled datasets for learning robust representations. To increase the number of image instances available for learning robust visual models, each labeled image propagates its label to its nearest unlabeled image instances. These retrieved unlabeled images serve as local perturbations of each labeled image and perform Visual Self-Regularization (VISER). To retrieve such visual self-regularizers, we compute the cosine similarity in a semantic space defined by the penultimate layer of a fully convolutional neural network. We use the publicly available Yahoo Flickr Creative Commons 100M (YFCC100M) dataset as our unlabeled image set and propose a distributed approximate nearest neighbor algorithm to make retrieval practical at that scale. Using the labeled instances together with their regularizer samples, we significantly improve object categorization and localization performance on the MS COCO and Visual Genome datasets, where objects appear in context.

Keywords: visual self-regularizers, approximate nearest neighbor, real adversarial examples, adversarial regularization, regularization for training ConvNets, object-in-context retrieval and visual localization, semi-supervised and weakly-supervised deep learning, fully convolutional deep neural network, t-SNE embedding map, MS COCO, Visual Genome, and YFCC100M datasets.
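

To make the retrieval step concrete, below is a minimal NumPy sketch of label propagation by cosine similarity in the penultimate-layer feature space. It is an illustration under assumed shapes and an assumed neighborhood size k, not the paper's implementation; the feature matrices here are random placeholders.

import numpy as np

def l2_normalize(x, eps=1e-8):
    # Row-normalize so that dot products equal cosine similarities.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def retrieve_regularizers(labeled_feats, unlabeled_feats, k=5):
    # For each labeled image, return the indices of its k nearest
    # unlabeled images under cosine similarity; the query's label is
    # then propagated to these neighbors, which serve as its visual
    # self-regularizers during training.
    q = l2_normalize(labeled_feats)     # (n_labeled, d)
    db = l2_normalize(unlabeled_feats)  # (n_unlabeled, d)
    sims = q @ db.T
    return np.argpartition(-sims, k, axis=1)[:, :k]

# Placeholder usage with random features (d = 2048 is an assumption).
labeled = np.random.randn(100, 2048).astype(np.float32)
unlabeled = np.random.randn(10000, 2048).astype(np.float32)
neighbors = retrieve_regularizers(labeled, unlabeled, k=5)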



The t-SNE map of the whole set of images (including both MS COCO and YFCC images) labeled with the "Bus" category after applying our proposed VISER approach. Can you guess whether the green or the blue background corresponds to the human-annotated images of the MS COCO dataset?
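

A map like the one above can be produced with off-the-shelf t-SNE. The snippet below is a hypothetical sketch using scikit-learn and matplotlib, with random placeholder arrays standing in for the actual "Bus" features from MS COCO and YFCC.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder penultimate-layer features; in practice these come from
# the trained network, one row per image.
coco_feats = np.random.randn(200, 2048)  # labeled MS COCO "Bus" images
yfcc_feats = np.random.randn(200, 2048)  # retrieved YFCC "Bus" images

feats = np.vstack([coco_feats, yfcc_feats])
xy = TSNE(n_components=2, metric="cosine", init="pca").fit_transform(feats)

# Color points by source to see whether the two sets separate by eye.
colors = ["green"] * len(coco_feats) + ["blue"] * len(yfcc_feats)
plt.scatter(xy[:, 0], xy[:, 1], c=colors, s=8)
plt.show()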



Object-in-Context Retrieval


Top regularizer examples from the unlabeled YFCC dataset (rows 2-6), retrieved from multi-label image queries in several MS COCO categories (first row).
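

The abstract mentions a distributed approximate nearest neighbor algorithm that makes this retrieval practical over 100M images; that algorithm is not reproduced here. As a stand-in illustration only, the sketch below shards the unlabeled features across several FAISS IVF indexes and merges the per-shard top-k results (FAISS and all parameter values are our assumptions, not the paper's method).

import faiss
import numpy as np

D = 2048      # assumed feature dimensionality
NLIST = 1024  # assumed number of coarse IVF clusters per shard

def build_shard_index(shard_feats):
    # Train an inverted-file index on one shard; with L2-normalized
    # vectors, inner product equals cosine similarity.
    quantizer = faiss.IndexFlatIP(D)
    index = faiss.IndexIVFFlat(quantizer, D, NLIST, faiss.METRIC_INNER_PRODUCT)
    faiss.normalize_L2(shard_feats)  # in-place; expects float32 arrays
    index.train(shard_feats)
    index.add(shard_feats)
    return index

def search_all_shards(shards, queries, k=5):
    # `shards` is a list of (index, id_offset) pairs so that shard-local
    # ids can be mapped back to positions in the full unlabeled set.
    faiss.normalize_L2(queries)
    sims, ids = [], []
    for index, offset in shards:
        s, i = index.search(queries, k)
        sims.append(s)
        ids.append(i + offset)
    sims = np.concatenate(sims, axis=1)
    ids = np.concatenate(ids, axis=1)
    order = np.argsort(-sims, axis=1)[:, :k]  # merge per-shard top-k lists
    return np.take_along_axis(ids, order, axis=1)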





VISER Fully Convolutional Network Architecture


We use a Fully Convolutional Network to simultaneously categorize images and localize the objects of interest in a single forward pass. The last layer of the network produces a tensor of N heatmaps for localizing objects, where each heatmap corresponds to one of the N object categories. The green areas correspond to regions to which our network assigns a high probability for the object.
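

A minimal sketch of such a head is shown below, written in PyTorch (our choice of framework, which the page does not specify): a 1x1 convolution maps backbone features to N per-class heatmaps, and spatial max pooling turns each heatmap into an image-level class score, so one forward pass yields both categorization and localization. The channel width and class count are assumptions.

import torch
import torch.nn as nn

class FCNHead(nn.Module):
    # Illustrative head only; the backbone, channel width, and number
    # of categories (80 for MS COCO) are assumed values.
    def __init__(self, in_channels=2048, num_classes=80):
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features):              # features: (B, C, H, W)
        heatmaps = self.classifier(features)  # (B, N, H, W) per-class maps
        scores = heatmaps.amax(dim=(2, 3))    # (B, N) image-level scores
        return scores, heatmaps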





Multilabel Object Categorization and Localization


Localization results of our proposed model on the MS COCO validation set. The model is trained on the MS COCO training set, with YFCC100M as the source of unlabeled images. The score map and localization of positive categories are overlaid on each image. Some failure cases are highlighted with a red box for the skateboard, handbag, and backpack object categories.
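

Overlays like these can be produced by upsampling each positive category's heatmap to the image resolution and thresholding it. The helper below is a hypothetical sketch; the threshold value and nearest-neighbor upsampling are our assumptions, not the paper's post-processing.

import numpy as np

def localization_mask(heatmap, image_size, thresh=0.5):
    # Nearest-neighbor upsample a low-resolution class heatmap to the
    # image size, then keep only the high-probability regions.
    h, w = heatmap.shape
    H, W = image_size
    ys = np.arange(H) * h // H  # map each output row to a heatmap row
    xs = np.arange(W) * w // W  # map each output column to a heatmap column
    upsampled = heatmap[np.ix_(ys, xs)]
    return upsampled > thresh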