Adapt-NLP: The Second Workshop on Domain Adaptation for NLP

The growth in computational power and the rise of Deep Neural Networks (DNNs) have revolutionized the field of Natural Language Processing (NLP). The ability to collect massive datasets and to train large models on powerful GPUs has yielded NLP-based technology that was beyond imagination only a few years ago.

However, many NLP algorithms rely on the fundamental assumption that the training and test sets follow the same underlying distribution. When these distributions do not match, a phenomenon known as domain shift, such models are likely to suffer performance drops. Despite the growing availability of heterogeneous data, many NLP domains still lack the amounts of labeled data required to feed data-hungry neural models, and in some domains and languages even unlabeled data is scarce. As a result, domain adaptation, i.e., training an algorithm on annotated data from one or more source domains and applying it to other target domains, is a fundamental NLP challenge.

Important Dates:

Note: All deadlines are 11:59 PM UTC-12:00.

Important News and Updates:

Workshop Goals

We propose a workshop that aims to bring together researchers and ideas from the broad spectrum of NLP research on domain adaptation, zero-shot transfer learning, and related setups where the data distribution differs between the training and test phases. Despite the wide interest in the research community and the high relevance to industrial text-based applications, the last workshop devoted to domain adaptation was held in 2010. Considering the substantial progress in NLP since 2010, and the particular advancements related to domain adaptation, we believe there is a strong need for such a workshop, one that will facilitate a discussion among interested researchers and bring together researchers who currently focus on isolated instances of the more general problem.

Main Workshop Topics

The topics of the workshop include, but are not restricted to:

Invited Speakers

Panel Discussion

Best Paper Prizes

Best paper submission prizes were granted to:

Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data
Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger and Hinrich Schütze
Conditional Adversarial Networks for Multi-Domain Text Classification
Yuan Wu, Diana Inkpen and Ahmed El-Roby
The prizes were donated by our generous sponsors.

Sponsors