
Tech Talk - Let's take a look at the frontend of Multi2ConvAI

From draft to website - insights into our concept and technical implementation of the Multi2ConvAI platform

31.01.2022

Within the Multi2ConvAI project, we have developed a platform to foster exchange and knowledge transfer between research and application. The goal is to provide an environment that brings state-of-the-art NLP models into conversational AI systems faster. To this end, users can share their data and models on the platform.


In this blog article, we give you an insight into the platform's functionality, the concept behind it, and the technical development of the frontend.


Welcome to Multi2ConvAI

Users enter Multi2ConvAI via the landing page depicted above. This landing page presents a project overview and a short introduction to the project partners. In addition, users can interact with the chatbot Neo, a digital assistant developed by the project partner Neohelden. On our landing page, Neo allows users to interact with our Corona and Logistics models. For more details about our use cases, we recommend taking a look at this blog post.

The Multi2ConvAI platform offers numerous options for exploring, testing, and applying the models and datasets we developed in the project. Users can navigate to the different areas of the platform from the landing page. The Tutorial section is aimed at developers who want to use our Python package to train and deploy conversational AI models. Datasets shows an overview of the datasets that have been collected for the different use cases. Through Models, users can start models and interact with them. Finally, the Manage Endpoints tab gives an overview of the currently running models and the chance to try them out and play around with them.

Trying to bring research and industrial application closer together

Multi2ConvAI aims to bring academic research and industrial application of conversational AI systems one step closer together. To this end, we want to foster knowledge transfer and move models faster from the prototyping phase to production. The project also seeks to ease the transfer of digital assistants to new languages and domains.

While research works with state-of-the-art technologies and approaches in conversational AI, industry can contribute real, value-adding use cases, domain knowledge, and user feedback. For a concrete use case, e.g. the development of an FAQ bot for Corona, this means that industry collects domain-specific datasets and makes them available to research. Research then develops conversational AI models that attempt to solve the given problem. Once the quality of the models is sufficient, they are handed over to the industry partner, who can deploy them and make them available to their end users. This process brings together the strengths of both fields.

On the Datasets page, users can find an overview of the existing datasets, grouped by their respective domains. It is possible to upload new datasets and either assign them to an existing domain or create a new one. With a click on Info & Preview, the user is presented with details of the corresponding dataset and can get a first impression of it via a preview.

Analogously, the Models page provides an overview of the available models. In addition to the name of a model, further metadata is displayed to the user. As shown above, this metadata can already be specified when uploading the model. Furthermore, the container for a model can be started via Models and made available for inference.

The Manage Endpoints page provides an overview of which endpoints are currently active. Active endpoints can be shut down, and inactive models can be started. Each active endpoint can be tested directly: by clicking on Demo, users can interact with the corresponding model. For example, statements regarding Corona can be classified into categories such as vaccination, masks, etc. When such a model is used in a chatbot, responses adapted to the classified intents would then be returned. In addition to trying out the models, the popup gives the user an overview of the container's metadata, the available classes, and the request/response format for communicating with the model.
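To make the request/response idea more tangible, the sketch below shows roughly how such a running endpoint could be queried over HTTP. The endpoint URL and the JSON field names are assumptions for illustration only; the popup shows the actual format for each endpoint.

```javascript
// Illustrative sketch only: the endpoint URL and the JSON field names
// ("text", "intent", "confidence") are assumptions, not the platform's
// documented schema. Check the endpoint popup for the actual format.
async function classifyIntent(endpointUrl, text) {
  const response = await fetch(endpointUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  // Assumed response shape: { intent: "vaccination", confidence: 0.92 }
  return response.json();
}
```

A chatbot integration would then map the returned intent to a suitable response, as described above.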

The demo inference presented here with a zero-shot model allows the user to classify intents against arbitrary classes. Especially when entering new use cases without labeled training data, this is a viable approach for prototyping chatbots. In addition to the text to be classified, a list of possible classes can be entered. The model then predicts the most likely class for the given text.
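As a rough sketch, such a zero-shot request carries the candidate classes alongside the text; again, the URL and field names below are illustrative assumptions rather than the platform's actual API.

```javascript
// Sketch of a zero-shot request: the candidate classes are sent together
// with the text to be classified. The endpoint URL and the field names
// ("text", "classes") are placeholders, not the actual API.
const payload = {
  text: "Where can I get vaccinated?",
  classes: ["vaccination", "masks", "testing", "travel"],
};

fetch("https://example-endpoint/predict", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  // Assumed response: the most likely class plus a score,
  // e.g. { intent: "vaccination", confidence: 0.87 }
  .then((prediction) => console.log(prediction));
```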

From the design sketch to the technical implementation


To implement the features mentioned above, we first created some design drafts and sketches to determine exactly which functionalities we needed in our backend and what had to be developed in the frontend. We then implemented this using numerous libraries and frameworks from the web development space.

An excerpt from our concept board. Here we tried to identify which user persona needs which functionality and roughly sketched potential layouts.

Our premise for the design of the frontend was to keep it as lean and clear as possible, both in its actual functionality and in its visual design. To ensure this, we created a prototype of the application at an early stage to test its look and feel. This basis made it easier to adjust functionality and design as the project progressed, so that we were able to implement changes quickly.

The technical implementation of the frontend is primarily based on the JavaScript framework Vue.js. Vue, or in our case the Vue CLI, provides the basic structure for the frontend, such as using Node.js to serve and test the application locally. For a long time, the platform was a single-page application (SPA) at its core, but it has since become a multi-page application (MPA), partly because of the login functionality.
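One common way to realize such a multi-page setup with the Vue CLI is its pages build option. The sketch below is purely illustrative; the page names, entry files, and templates are assumptions and do not reflect the actual project structure.

```javascript
// vue.config.js (sketch): the Vue CLI "pages" option builds the app in
// multi-page mode. Page names, entry files, and templates here are
// illustrative assumptions, not the actual Multi2ConvAI setup.
module.exports = {
  pages: {
    // main application: landing page, datasets, models, endpoints
    index: {
      entry: "src/main.js",
      template: "public/index.html",
    },
    // separate entry for the login flow
    login: {
      entry: "src/login/main.js",
      template: "public/login.html",
    },
  },
};
```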

In addition, we have used numerous libraries to improve our design and make it more attractive. The inovex elements library deserves a special mention: not only is the basic design based on the inovex elements example app, but all page elements, e.g. the tab navigation and the buttons, originate from this library. Furthermore, they are easy to integrate and can be adapted relatively easily, for example, as shown here, to the Multi2ConvAI color scheme with its main color "coral" (#FF7F86).
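As a minimal illustration of such a color adaptation, a theme color can typically be overridden via a CSS custom property. The property name below is a placeholder assumption, not necessarily the variable exposed by inovex elements.

```javascript
// Illustrative only: overriding a theme color via a CSS custom property.
// "--primary-color" is a placeholder name; the actual variable provided
// by inovex elements may differ.
document.documentElement.style.setProperty("--primary-color", "#FF7F86");
```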

We also included numerous icons from the Font Awesome library to lighten up the design a bit, and we used Bootstrap to provide a basic structure for the pages.


About the partners of the project

The consortium of the Multi2ConvAI research project consists of the University of Mannheim and two SMEs based in Karlsruhe, inovex GmbH and Neohelden GmbH. The three partners share their expertise within the project in the hope of learning and growing from the resulting synergies.


Contact

If you have any questions or suggestions, please do not hesitate to contact us at info@multi2conv.ai.



Funding

The project Mehrsprachige und domänenübergreifende Conversational AI (multilingual and cross-domain conversational AI) is financially supported by the State of Baden-Württemberg as part of the “KI-Innovationswettbewerb” (an AI innovation competition). The funding aims at overcoming technological hurdles in commercializing artificial intelligence (AI) and at helping small and medium-sized enterprises (SMEs) benefit from the great potential AI holds. The innovation competition specifically promotes cooperation between companies and research institutions and is intended to accelerate the transfer of research and development from science to business.