Multiple Question Generation from Presentation Transcripts (MQG)
Introduction
Preparing for an oral presentation is a common task in many domains, particularly in professional settings. For instance, researchers whose papers are accepted at conferences need to deliver an oral or poster presentation to share their findings with fellow researchers; politicians must prepare for debates during election periods; and company managers are required to deliver speeches to update investors on company operations. When crafting a presentation draft, a fundamental concern arises: what kinds of questions might the audience ask? We plan to explore the ability of LLMs to anticipate such questions. To this end, we prepared a dataset called MQG [1], which contains the questions that professional analysts asked after listening to managers' presentations in earnings conference calls.
Both automatic evaluation and human evaluation will be included in the final assessment.
Dataset
Each instance contains a "presentation" and its "questions". The goal is to generate the "questions" based on the given "presentation". We will use ROUGE-L for automatic evaluation, and participants will evaluate other teams' system outputs manually. All evaluation records will be shared for future research. The guidelines for manual evaluation will be shared later.
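For reference, ROUGE-L scores a candidate question against a reference by the length of their longest common subsequence (LCS) of tokens. The official evaluation may use a specific toolkit; the following is only a minimal, self-contained sketch of the metric using whitespace tokenization, with the function names chosen here for illustration.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l_f1(reference: str, candidate: str, beta: float = 1.0) -> float:
    """ROUGE-L F-measure over whitespace tokens; beta=1 weights precision and recall equally."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)       # LCS coverage of the reference question
    precision = lcs / len(cand)   # LCS coverage of the generated question
    return (1 + beta**2) * precision * recall / (recall + beta**2 * precision)
```

For example, `rouge_l_f1("what drove margin growth this quarter", "what drove the margin growth")` shares the 4-token subsequence "what drove margin growth", giving recall 4/6, precision 4/5, and F1 = 8/11 ≈ 0.727.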
Important Dates (time zone: Anywhere on Earth, AoE)
Registration open: March 15th, 2024
Training set release: March 15th
Test set release (sent to registered teams): April 16th
System output submission deadline (registration close): April 25th
Release of results: April 30th
Shared task paper submission deadline: May 15th
Shared task paper submission system: https://easychair.org/conferences/?conf=finnlpagentscen2024
Notification: June 4th
Camera-ready version of shared task paper due: June 25th
Policies
The ACL Template MUST be used for your submission(s).
The reviewing process will be single-blind. Accepted papers will be published in the proceedings in the ACL Anthology.
Shared task participants will be asked to review other teams' papers during the review period.
Participants who submit their models' outputs are required to take part in the human evaluation.
Submissions must be in electronic form using the paper submission software linked above.
At least one author of each accepted paper must register and present their work in person at FinNLP-AgentScen 2024. Papers with a "No Show" may be redacted. Authors will be required to agree to this requirement at the time of submission. This rule applies to all IJCAI-2024 workshops.
Shared Task Organizers
Chung-Chi Chen - AIRC, AIST, Japan
Yining Juan - Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
Hsin-Hsi Chen - Department of Computer Science and Information Engineering, National Taiwan University, Taiwan