Conference Program (CEST time)

9:00am - 9:10am Introduction
Gathering and opening remarks.
9:10am - 10:00am Invited Talk - Hinrich Schütze - "Humans Learn From Task Descriptions and So Should Our Models"
Hinrich Schütze (University of Munich - LMU)
Abstract:

In many types of human learning (including human learning that involves some kind of adaptation), task descriptions are a central ingredient. They are usually accompanied by a few examples, but there is little human learning that is based on examples only. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with 100s or 1000s of examples. This is even true for so-called few-shot learning, a term often applied to scenarios with tens of thousands of "shots".

Inspired by the GPT models, which also exploit task descriptions, we introduce Pattern-Exploiting Training (PET). PET reformulates task descriptions as cloze questions that can be effectively processed by pretrained language models. In contrast to GPT, PET combines task descriptions with supervised learning. We show that PET learns well from as few as ten training examples and outperforms GPT-3 on GLUE even though it has 99.9% fewer parameters.
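For readers unfamiliar with the cloze reformulation the abstract refers to, the short sketch below illustrates the general idea with a masked language model. It is not the authors' PET implementation; the model, pattern, and label words are illustrative choices, and it assumes the Hugging Face transformers library is available.

# Illustrative sketch of casting a task description as a cloze question,
# not the authors' PET code. Model and verbalizer words are arbitrary here.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "The plot was predictable and the acting was flat."
# Pattern: wrap the input in a cloze question ending in the mask token.
cloze = review + " All in all, it was [MASK]."

# Verbalizer: restrict predictions to label words that map to task labels
# (great -> positive, terrible -> negative) and compare their scores.
for pred in fill_mask(cloze, targets=["great", "terrible"]):
    print(pred["token_str"], round(pred["score"], 4))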

10:00am - 11:00am Poster Session 1
11:00am - 11:30am Break
11:30am - 12:20pm Invited Talk - Isabelle Augenstein - "On the different stages of adaptation in domain adaptation"
Bio: Isabelle Augenstein is an associate professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. She also co-heads the research team at CheckStep Ltd, a content moderation start-up. Her main research interests are fact checking, low-resource learning and explainability. Before starting a faculty position, she was a postdoctoral research associate at UCL, mainly investigating machine reading from scientific articles. She has a PhD in Computer Science from the University of Sheffield. She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media'. Isabelle Augenstein is the current president of the ACL Special Interest Group on Representation Learning (SIGREP), as well as a co-founder of the Widening NLP (WiNLP) initiative.
Abstract: Domain adaptation has been studied as a core solution to increasing performance on new domains, with varying degrees of in-domain data being used to learn to adapt representations. A related machine learning formalism is transfer learning, which is concerned with learning stable, often cross-domain representations that are then applied to new tasks or domains with varying degrees of fine-tuning. The success of recent large, pre-trained Transformer language models for transfer learning raises the questions: is better pre-training all that is needed for improving out-of-domain performance? And are traditional domain adaptation methods such as mixture-of-experts or domain adversarial training still needed? In this talk, I will discuss recent research in domain adaptation from the viewpoint of the stage in the model training pipeline at which adaptation is performed: early on (at pre-training time), at an intermediate stage (at supervised task training time) or late (at the decision-making stage). Overall, trends in the results show that while early adaptation, i.e. better pre-training, leads to the highest gains, additional adaptation at intermediate or later stages can still yield moderate gains.
12:20pm - 12:40pm Oral Presentation 1 - "Conditional Adversarial Networks for Multi-Domain Text Classification"
Yuan Wu, Diana Inkpen and Ahmed El-Roby
12:40pm - 1:00pm Oral Presentation 2 - "Dependency Parsing Evaluation for Low-resource Spontaneous Speech"
Zoey Liu and Emily Prud'hommeaux
1:00pm - 1:20pm Oral Presentation 3 - "Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data"
Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger and Hinrich Schütze
1:20pm - 3:00pm Lunch
3:00pm - 3:20pm Oral Presentation 4 - "Semantic Parsing of Brief and Multi-Intent Natural Language Utterances"
Logan Lebanoff, Charles Newton, Victor Hung, Beth Atkinson, John Killilea and Fei Liu
3:20pm - 3:40pm Oral Presentation 5 - "Locality Preserving Loss: Neighbors that Live together, Align together"
Ashwinkumar Ganesan, Francis Ferraro and Tim Oates
3:40pm - 4:40pm Poster Session 2
4:40pm - 5:00pm Break
5:00pm - 5:50pm Invited Talk - Jacob Eisenstein - "Dialects as Domains?"
Bio: Jacob Eisenstein is a research scientist at Google AI, where he is currently focused on making language technology more robust and trustworthy. He was previously on the faculty of the Georgia Institute of Technology, where he supervised four successful doctoral dissertations (with two more forthcoming) and received the NSF CAREER Award for research on computational sociolinguistics. He completed his Ph.D. at MIT with a dissertation on computational models of speech and gesture. Thanks to his brief appearance in the documentary film "If These Knishes Could Talk", Jacob has a Bacon number of 2.
Abstract: There is mounting evidence that natural language processing systems do not perform equally well across dialects. Domain adaptation could be viewed as a potential solution, with the source domain corresponding roughly to dialects that are present in the training data, and target domains corresponding to dialects that are not. This talk will explore the motivation for this idea, as well as theoretical problems with the approach of building domains from dialects. I will argue instead for a feature-based view of dialect robustness, and I will present recent work on the identification of Indian English dialect features using a system trained from a small number of minimal pairs.
5:50pm - 6:30pm Panel Discussion
Roi Reichart (Technion - Israel), Jacob Eisenstein (Google AI), Mari Ostendorf (University of Washington), Anders Søgaard (University of Copenhagen), and Sebastian Ruder (DeepMind)