Adapting Language Models for Conversational Tasks
Improvements in the pretraining of large language models (LMs) have made training large-scale conversational encoders from scratch impractical. In this talk, we show that transforming LMs into conversational encoders, and adapting the downstream task to resemble the original pretraining objective, yields substantial improvements. The gains are largest in the most challenging setups: the few-shot scenarios common in real-world applications. As test beds, we analyze two major use cases in production-oriented conversational systems: intent detection and slot labeling.
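To make the few-shot setting concrete, here is a minimal sketch of nearest-example intent detection. It stands in for the approach discussed in the talk: a toy bag-of-words `embed` function is used as a placeholder where a real system would plug in a pretrained conversational encoder, and the example utterances and intent labels are invented for illustration.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; a real system would instead use
    # a pretrained conversational encoder adapted from an LM.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(utterance, few_shot_examples):
    # Few-shot intent detection: compare the utterance to each
    # labelled example and return the intent of the closest one.
    q = embed(utterance)
    best_intent, best_score = None, -1.0
    for text, intent in few_shot_examples:
        score = cosine(q, embed(text))
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

# Hypothetical few-shot training set: one or two examples per intent.
examples = [
    ("book a table for two", "make_reservation"),
    ("cancel my booking", "cancel_reservation"),
]
print(classify("can you book me a table tonight", examples))  # → make_reservation
```

With a strong pretrained encoder in place of `embed`, a handful of labelled examples per intent can already be competitive, which is why the few-shot regime is where adaptation matters most.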
Bio
Paweł Budzianowski is a machine learning lead at PolyAI. Prior to that, he completed his Master's at Adam Mickiewicz University and his PhD in the Dialogue Systems Group at the University of Cambridge. His research interests include multi-domain dialogue policy management, data collection for end-to-end dialogue systems, and applied Bayesian deep learning. He received best paper awards at ICASSP 2018 and EMNLP 2018. He teaches the NLP course at Jagiellonian University.