Results
Details of the preliminary-round results are given in the DRC2022 overview article in the Proceedings.
Preliminary round
Team
CIS / d-itlab / DSML-TDU / Flow / irisapu / ISC22 / LINE / MIYABI / MIYAMA / OS / ponponkichi / SZK-L / baseline
Evaluation
Impression evaluation (9 items, total 63 points)
Satisfaction with choice: "Were you satisfied with your choice of tourist attraction to visit?" (7 points)
Informativeness: "Were you able to obtain sufficient information about the sightseeing spots?" (7 points)
Naturalness: "Did you have a natural dialogue with the robot?" (7 points)
Appropriateness: "Was the robot's service appropriate?" (7 points)
Likeability: "Was the robot likable in providing the service?" (7 points)
Satisfaction with dialogue: "Were you satisfied with your interaction with the robot?" (7 points)
Trustworthiness: "Did you trust the robot?" (7 points)
Usefulness: "Did you use the information obtained from the robot to select the sightseeing spot?" (7 points)
Intention to reuse: "Would you like to visit this travel agency again?" (7 points)
Effectiveness of the robot’s recommendation (ranging from -100 to 100)
This was evaluated as the change, from before to after the dialogue, in the degree to which the customer wanted to visit the sightseeing spot recommended by the robot.
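The measure above can be sketched as follows. This is a hypothetical illustration: the function name, the 0–100 rating scale per customer, and the sample ratings are assumptions, chosen so that each per-customer effect falls in the stated −100 to 100 range.

```python
# Hedged sketch (hypothetical data): the recommendation effect is the change
# in how much a customer wants to visit the recommended spot, rated 0-100
# before and after the dialogue, so each effect lies in [-100, 100].

def recommendation_effect(before: int, after: int) -> int:
    """Change in desire to visit the recommended spot (range -100 to 100)."""
    return after - before

# Hypothetical before/after ratings from three customers of one team.
ratings = [(40, 75), (60, 55), (20, 90)]
effects = [recommendation_effect(b, a) for b, a in ratings]
average_effect = sum(effects) / len(effects)
print(effects)         # per-customer effects: [35, -5, 70]
print(average_effect)  # team's averaged recommendation-effect score
```

A negative average would indicate that, on balance, the dialogue reduced customers' desire to visit the recommended spot.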
To evaluate the two factors of impression and recommendation effect comprehensively, the two scores for each participating team (the total of the averaged impression scores and the averaged recommendation-effect score) were plotted on a two-axis scatter plot. Teams belonging to the cluster formed at the position with the highest values on both axes were considered the top teams of the preliminary round and advanced to the final round. Teams belonging to the cluster formed at the second-highest position received an honorable mention award.
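The selection procedure can be sketched as below. The team names and scores are hypothetical, and plain 2-means clustering is an assumption on my part; the competition description does not specify which clustering algorithm was used.

```python
# Hedged sketch: each team is a 2-D point (averaged impression score,
# averaged recommendation-effect score); the cluster whose centroid is
# highest on both axes advances. Data and the choice of k-means (k=2,
# centroids seeded from the first two points) are assumptions.

def kmeans_2d(points, k=2, iters=20):
    """Plain k-means on 2-D points; centroids seeded from the first k points."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                        + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters, centroids

# Hypothetical (impression, effect) scores for six teams.
scores = {"A": (52.0, 30.0), "B": (50.0, 28.0), "C": (48.0, 25.0),
          "D": (35.0, 5.0), "E": (33.0, 2.0), "F": (30.0, -3.0)}
clusters, centroids = kmeans_2d(list(scores.values()))
top = max(range(len(centroids)), key=lambda i: sum(centroids[i]))
finalists = sorted(name for name, p in scores.items() if p in clusters[top])
print(finalists)  # teams in the cluster highest on both axes
```

With these made-up scores the high-scoring cluster contains teams A, B, and C, mirroring the way three finalists emerged from the real scatter plot.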
Result
Three teams were selected as finalists.
MIYAMA / LINE / OS
Five teams received an honorable mention.
CIS / DSML-TDU / irisapu / MIYABI / SZK-L
Evaluation at final round
Judges (four from the DRC2022 Executive Committee and one expert in counter sales)
Ryuichiro Higashinaka (Nagoya University)
Takashi Minato (RIKEN / ATR)
Hiromitsu Nishizaki (University of Yamanashi)
Takayuki Nagai (Osaka University)
Mitsue Nakamura (JTB Publishing Inc.)
The judges held a dialogue with the robot controlled by each participant's program. The researchers and the customer-service expert evaluated it from different perspectives.
Two dialogue trials were conducted per team. In each trial, one of the five judges, randomly assigned, interacted with the robot; that judge evaluated the impression of their own conversation, while the other four judges evaluated their impressions from a third-party perspective. Technical points were also evaluated.
The evaluation covered the following five viewpoints. Each item was worth 5 points per judge per trial, so each team's maximum score was 5 points x 5 items x 5 judges x 2 trials = 250 points.
Informativeness (5 points)
Naturalness (5 points)
Likeability (5 points)
Satisfaction with dialogue (5 points)
Technical score (5 points)
The four judges from the Executive Committee evaluated technical points in terms of robotics, speech recognition, dialogue systems, and so on, while the judge with travel-agency experience evaluated technical points in terms of customer service.
Result of final round
3rd place (Outstanding performance award): Team OS
Comment by Judge Nagai (in Japanese)
Comment by Judge Nakamura (in Japanese)
2nd place (Outstanding performance award): Team MIYAMA
Comment by Judge Minato (in Japanese)
Comment by Judge Nishizaki (in Japanese)
1st place (Best performance award): Team LINE
Comment by Judge Higashinaka (in Japanese)
Comment by Judge Nakamura (in Japanese)