Multi2ConvAI
Blog

“Hey Neo, I want to book a train ticket and a hotel room!” - Domain Adaptation

How a chatbot solves complex tasks with a multi-domain task-oriented dialog system

06.12.2021

Voice assistants and chatbots often need to solve complex tasks in our everyday life. You might want to book a flight ticket and a hotel room when planning a trip to Berlin; or, when booking a table at a restaurant, you might also need a taxi to get there. A multi-domain task-oriented dialog (TOD) system helps the chatbot understand precisely what the user needs, and such systems apply to a wide range of everyday scenarios. Both the construction of multi-domain task-oriented dialogue datasets [1][2] and the use of such datasets to build task-oriented dialogue systems have featured in recent research and industry applications.

In this blog post, we introduce domain specialization strategies in NLP in general, as well as recent work on domain adaptation for task-oriented dialogue systems, i.e., on injecting domain-specific knowledge into the model behind the chatbot.

Methodology

Language models (LMs) pretrained on general-domain text (e.g., BERT, RoBERTa) have been applied to a wide range of downstream NLP tasks. However, their performance in domain-specific downstream scenarios still leaves room for improvement.

Gururangan et al. (2020) [3] proposed the “Domain-Adaptive Pretraining” approach: they continued to pretrain a language model on a large corpus of unlabeled domain-specific text with the masked language modeling (MLM) objective and demonstrated, on classification tasks from four domains, the effectiveness of injecting domain-specialized knowledge into a pretrained language model. A similar approach has been applied to hate speech detection [4] and further extended to the multilingual scenario, where selected in-domain terms are used to extract in-domain unsupervised texts.
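The MLM objective at the heart of domain-adaptive pretraining can be illustrated with a minimal sketch of BERT-style token masking. Note that the `MASK` placeholder, the ratios, and the function name below are illustrative assumptions for this sketch, not taken from the cited papers:

```python
import random

MASK, IGNORE = "[MASK]", -1  # placeholder mask token and "ignore" label


def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style masking: select mask_prob of the positions; of those,
    replace 80% with [MASK], 10% with a random vocabulary token, and
    keep 10% unchanged. Labels hold the original token at selected
    positions and IGNORE everywhere else, so the loss is only computed
    on masked positions."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # the model must predict the original token
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(vocab))
            else:
                inputs.append(tok)
        else:
            labels.append(IGNORE)
            inputs.append(tok)
    return inputs, labels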

In the narrower context of task-oriented dialog, Wu et al. (2020) [5] pretrained a language model on the concatenation of nine human-to-human multi-turn dialog datasets with MLM and response contrastive loss (RCL) objectives and demonstrated its effectiveness on several TOD downstream tasks. Intuitively, this injects conversational structural information into a pretrained language model by using TOD datasets instead of unsupervised “plain” texts (e.g., Wikipedia).
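A response contrastive objective of this kind can be sketched as an in-batch softmax: each dialog context's paired response is the positive, and the other responses in the batch serve as negatives. This pure-Python sketch over precomputed embedding vectors is an illustrative assumption; the actual model computes such scores over learned encoder outputs:

```python
import math


def in_batch_contrastive_loss(ctx_vecs, resp_vecs):
    """In-batch contrastive objective: for context i, resp_vecs[i] is the
    positive and all other responses in the batch are negatives. The loss
    is the mean cross-entropy over dot-product similarity scores."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    losses = []
    for i, c in enumerate(ctx_vecs):
        scores = [dot(c, r) for r in resp_vecs]
        log_z = math.log(sum(math.exp(s) for s in scores))
        losses.append(log_z - scores[i])  # -log softmax of the positive
    return sum(losses) / len(losses)
```

When contexts align with their paired responses, the loss approaches zero; mismatched pairs drive it up, which is what pushes the encoder to embed a context close to its true response.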

Henderson et al. (2020) [6] presented general domain specialization for TOD on a large collection of Reddit data with a response selection (RS) objective, which also helps inject conversational structural information into the pretrained language model. Whang et al. (2020) [7] applied domain specialization to single-domain retrieval-based TOD on in-domain corpora, pairing both the MLM and response selection (RS) objectives as their post-training approach.
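At inference time, a retrieval-based system trained with an RS objective simply ranks candidate responses against the encoded context. Assuming precomputed embedding vectors, a hypothetical minimal selector (names and signature are our own, for illustration) looks like:

```python
def select_response(ctx_vec, candidate_vecs):
    """Retrieval-based TOD: score each candidate response by its
    dot-product similarity to the encoded context and return the
    index of the best-scoring candidate."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    scores = [dot(ctx_vec, r) for r in candidate_vecs]
    return max(range(len(scores)), key=scores.__getitem__)
```

In practice, candidate embeddings can be precomputed offline, so selecting a response reduces to one context encoding plus a fast similarity search.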

In a nutshell, domain-specialization methods enable a multi-domain task-oriented dialog system to solve complex tasks. Current research moves further in the direction of how to efficiently adapt domain-specific knowledge into pretrained language models for multi-domain task-oriented dialog systems, and this remains open for future work.


[1] Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. (2018). MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics.

[2] Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. (2020). Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8689-8696.

[3] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. (2020). Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics.

[4] Goran Glavaš, Mladen Karan, and Ivan Vulić. (2020). XHate-999: Analyzing and detecting abusive language across domains and languages. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6350–6365, Barcelona, Spain (Online). International Committee on Computational Linguistics.

[5] Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. (2020). TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917–929, Online. Association for Computational Linguistics.

[6] Matthew Henderson, Iñigo Casanueva, Nikola Mrkšić, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulić. (2020). ConveRT: Efficient and Accurate Conversational Representations from Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2161–2174, Online. Association for Computational Linguistics.

[7] Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and HeuiSeok Lim. (2020). An Effective Domain Adaptive Post-Training Method for BERT in Response Selection. In Proc. Interspeech 2020, pages 1585–1589.


About the partners of the project

The consortium of the Multi2ConvAI research project consists of the University of Mannheim and two SMEs based in Karlsruhe, inovex GmbH and Neohelden GmbH. The three partners share their expertise within the project in the hope of learning and growing from the resulting synergies.


Contact

If you have any questions or suggestions, please do not hesitate to contact us at info@multi2conv.ai.



Funding

The project Mehrsprachige und domänenübergreifende Conversational AI (Multilingual and Cross-Domain Conversational AI) is financially supported by the State of Baden-Württemberg as part of the “KI-Innovationswettbewerb” (an AI innovation challenge). The funding aims to overcome technological hurdles in commercializing artificial intelligence (AI) and to help small and medium-sized enterprises (SMEs) benefit from the great potential AI holds. The innovation competition is specifically intended to accelerate cooperation between companies and research institutions, as well as the transfer of research and development from science to business.