Please make sure to register with Dropbox to edit this document!
Date: 20th June 2020
Title: VirTex: Learning Visual Representations from Textual Annotations
Authors: Karan Desai, Justin Johnson
Abstract: The de facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
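To make the pretraining setup concrete, below is a minimal PyTorch sketch of caption-based pretraining in the spirit of the paper. It is not the authors' implementation (VirTex actually trains bidirectional captioning transformers); this is a simplified forward-only version. All names and sizes here (CaptioningPretrainer, VOCAB_SIZE, EMBED_DIM) are illustrative assumptions, and it assumes a recent PyTorch/torchvision.

```python
import torch
import torch.nn as nn
import torchvision

VOCAB_SIZE = 10000  # hypothetical caption vocabulary size
EMBED_DIM = 512     # hypothetical shared embedding width

class CaptioningPretrainer(nn.Module):
    """Illustrative module (not the authors' code): CNN backbone + caption decoder."""
    def __init__(self):
        super().__init__()
        # Visual backbone, trained from scratch (no ImageNet weights).
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep spatial grid
        self.visual_proj = nn.Linear(2048, EMBED_DIM)
        # Textual head: token embeddings + a small Transformer decoder
        # that attends over the spatial grid of image features.
        self.token_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        layer = nn.TransformerDecoderLayer(d_model=EMBED_DIM, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)
        self.output = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, images, caption_tokens):
        # images: (B, 3, H, W); caption_tokens: (B, T) integer token ids.
        feats = self.backbone(images)             # (B, 2048, h, w)
        feats = feats.flatten(2).transpose(1, 2)  # (B, h*w, 2048)
        memory = self.visual_proj(feats)          # (B, h*w, EMBED_DIM)
        tgt = self.token_embed(caption_tokens)    # (B, T, EMBED_DIM)
        T = caption_tokens.size(1)
        # Causal mask so each position only attends to earlier tokens.
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.output(hidden)                # (B, T, VOCAB_SIZE) logits

# One training step with teacher forcing: predict token t+1 from tokens <= t.
model = CaptioningPretrainer()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, VOCAB_SIZE, (2, 12))
logits = model(images, tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1)
)
loss.backward()
# After pretraining, only self.backbone is transferred to downstream tasks.
```

The point the sketch illustrates: the captioning loss backpropagates through the visual backbone, so the backbone must learn features rich enough to describe everything in the image; the textual head is discarded after pretraining.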
While reading the paper, we encourage you to post your notes/comments/summaries of what you understood from the paper (use the sections below to determine where notes/comments/summaries should go). You can also include your questions below.
Discussion 🤓
Introduction ⚡
Discuss the motivation and objectives of this paper at a high level. As we read through the paper, we can all take notes on the points we find important to emphasize and discuss further.
“Semantically dense representation” [Explain like I am 5]: a single caption describes many aspects of an image at once (multiple objects, their attributes, and the relationships between them), so each caption carries far more supervisory signal than a single class label.
Agenda/Housekeeping