Shared Task

The Persona-Knowledge Chat Shared Task aims to build a customized and intelligent conversational agent. In particular, the agent must provide knowledgeable answers tailored to the user's background information. The task consists of two subtasks: predicting which persona and knowledge are needed to answer the question, and generating plausible answers.

Figure 1: Sample conversation of Persona-Knowledge Chat

Data

Our data contains Persona-Knowledge Chat dialogs between a user and a conversational agent. Each turn is annotated with grounding candidates for persona and knowledge, along with Wikipedia documents on the topic. The topics are mostly landmarks around the world, such as the Statue of Liberty, the Eiffel Tower, and the Great Wall.
Please check out the data, GitHub repository, and leaderboard.
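To make the annotation layout concrete, here is a hypothetical sketch of one annotated turn. All field names and values are illustrative assumptions based on the description above (5 persona candidates, 10 knowledge candidates, a topic document, dialog history, and grounding labels), not the official release schema.

```python
# Illustrative sketch of one annotated turn; field names are assumptions,
# not the official data schema.
sample_turn = {
    "topic": "Statue of Liberty",
    "dialog_history": [
        "User: Where is this landmark located?",
    ],
    # 5 persona candidate sentences about the user's background.
    "persona_candidates": [
        "I live near New York.",
        "I enjoy hiking.",
        "I have two dogs.",
        "I am afraid of heights.",
        "I love history museums.",
    ],
    # 10 knowledge candidate paragraphs drawn from the topic's Wikipedia page.
    "knowledge_candidates": ["The Statue of Liberty is a colossal statue ..."]
        + ["(placeholder paragraph %d)" % i for i in range(2, 11)],
    # Grounding labels for the agent's next response.
    "persona_grounding": [0],   # indices of the persona sentences used
    "knowledge_grounding": 0,   # index of the knowledge paragraph used
}
print(len(sample_turn["persona_candidates"]),
      len(sample_turn["knowledge_candidates"]))  # 5 10
```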

Subtasks

Subtask 1

Goal: Predicting the proper persona sentences and knowledge
Input: Persona candidates (5 sentences), knowledge candidates (10 paragraphs), document on the topic, and dialog history
Output: Index of the answer persona sentences, index of the answer knowledge
Evaluation: Accuracy
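Since Subtask 1 is scored by accuracy over predicted indices, the computation can be sketched as follows. This is a minimal illustration assuming one gold index per turn; the official scoring script may differ in detail.

```python
# Minimal sketch of an accuracy computation for index prediction
# (illustrative only; the official evaluation script may differ).
def accuracy(gold, pred):
    """Fraction of turns where the predicted index matches the gold index."""
    assert len(gold) == len(pred), "one prediction per turn"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Example: knowledge grounding picks one of the 10 candidate paragraphs per turn.
gold_knowledge = [3, 0, 7, 2]
pred_knowledge = [3, 1, 7, 2]
print(accuracy(gold_knowledge, pred_knowledge))  # 0.75
```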

Subtask 2

Goal: Generating the next agent response in natural language using persona and knowledge
Input: Persona sentences, document on the topic, and dialog history
Output: Agent utterance
Evaluation: chrF++, BLEU, ROUGE-L
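Of the three generation metrics, ROUGE-L is the simplest to illustrate: it is an F-measure over the longest common subsequence (LCS) of reference and hypothesis tokens. The sketch below uses plain whitespace tokenization; the official evaluation may use a different tokenizer or implementation (and chrF++ and BLEU are typically computed with standard toolkits rather than by hand).

```python
# Illustrative ROUGE-L (LCS F-measure) with whitespace tokenization.
# The official scorer may tokenize or weight differently.
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    lcs = lcs_len(ref, hyp)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

score = rouge_l("the statue stands on liberty island",
                "the statue is on liberty island")
print(round(score, 3))  # 0.833 (LCS = "the statue on liberty island", 5 tokens)
```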

Ranking

Evaluation

Please send an email with five submission files containing your system's predicted output, along with your team name, by July 4, 2022. We will announce the final ranking based on the average score of the given submissions. Our email address is pkchat.focus@gmail.com.

Call For Participation

Each participant will be asked to complete at least one of the two subtasks. All accepted submissions will be presented at the workshop.

Submission Details

Please submit a paper that describes your model and system. You may describe the methods for all subtasks you participated in within a single paper submission.

Review Process

All submissions will be peer-reviewed by at least three reviewers. The reviewing process will be double-blind at the reviewer level. Authors are responsible for anonymizing their submissions.

Important Dates

  • Start of Shared Task Evaluation 1 (Official Test Set Open): April 12, 2022
  • Start of Shared Task Evaluation 2 (Workshop Test Set Open): May 12, 2022
  • End of Shared Task Evaluation (Leaderboard Close - Workshop phase only): July 15, 2022