Paper Reading/Discussion #1
  • If you can, please make sure you are signed in so you can edit this document!

Dates: May 2, 2020 and May 9, 2020

Title: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

Authors: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu


Slack channel: paper_reading_t5 (join the Slack group)





Why Paper Reading/Discussion?
  • Enable a fun and open place to discuss the latest research in NLP and ML
  • Keep up with the fast pace of ML and NLP research
  • Create a community where you feel free to bounce ideas around, start conversations, and always know that you are welcome to do so
  • Connect and engage with academics and industry practitioners

Agenda/Housekeeping

  • Introductions in chat
  • Where are you from?
  • One thing you would love to get from today’s session?
  • Format of the paper reading sessions
  • Silent reading (~90 minutes)?
  • Open Discussion (30 minutes)
  • Introduction (5 minutes)
  • Method/Results (15 minutes)
  • Practicality (5 minutes)
  • Questions (5 minutes)
  • Note-taking, especially during discussions (Volunteers! 🙏 )
  • GitHub repo to upload notes and track discussions

Discussion 🤓 

Introduction  
Discuss the motivation and objectives of this paper at a high level. As we read through the paper we can all take notes on the points we found important to emphasize and have further discussion about.

  • Ever-increasing research in transfer learning, especially after the BERT paper, has made it difficult to track the many follow-up techniques such as RoBERTa, ALBERT, etc., which vary in their datasets, chosen tasks, pre-training methods, and so on. For example, RoBERTa outperformed its predecessors largely because it was trained longer on more data. The authors therefore want to bring all of these ideas onto one platform, a text-to-text framework in which every downstream task is converted into a single format (seq2seq), so that the merit of the concepts proposed across the various papers can be compared on equal footing (a small sketch of this task casting appears after this list).

  • Crucially, the text-to-text framework allows us to directly apply the same model, objective, training procedure, and decoding process to every task we consider. With this unified approach, we can compare the effectiveness of different transfer learning objectives, unlabeled datasets, and other factors, while exploring the limits of transfer learning for NLP by scaling up models and datasets beyond what has previously been considered.
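
A minimal, illustrative sketch (not the authors' code) of how this task casting works. The task prefixes ("translate English to German:", "summarize:", "cola sentence:", "stsb sentence1: ... sentence2: ...") follow examples given in the paper; the helper function name and example field names below are made up for illustration.

    def to_text_to_text(task, example):
        """Cast a raw example into an (input text, target text) pair."""
        if task == "translation_en_de":
            return ("translate English to German: " + example["en"], example["de"])
        if task == "summarization":
            return ("summarize: " + example["document"], example["summary"])
        if task == "cola":
            # Classification labels are emitted as literal label words.
            target = "acceptable" if example["label"] == 1 else "unacceptable"
            return ("cola sentence: " + example["sentence"], target)
        if task == "stsb":
            # The regression score is rounded to the nearest 0.2 and emitted as a string.
            score = round(example["score"] * 5) / 5
            return ("stsb sentence1: " + example["s1"] +
                    " sentence2: " + example["s2"], str(score))
        raise ValueError("unknown task: " + task)

    # Every task now uses the same encoder-decoder model, the same maximum-likelihood
    # training objective, and the same text decoding for predictions.
    print(to_text_to_text("cola", {"sentence": "The book was read by me.", "label": 1}))

Because every task shares this one format, swapping pre-training objectives, unlabeled datasets, or model sizes only changes the model and training setup, never any task-specific output heads or plumbing.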