CAI + CAI = CAI^2

When creative AI meets conversational AI (CAI + CAI = CAI^2)

Welcome!

This is a joint workshop with COLING 2022.

Please submit your paper to:

https://www.softconf.com/coling2022/Cr_AI_Conv_AI/

You may share your paper on arXiv (or elsewhere) at any time. This does not affect the scoring/evaluation of your paper.


Extended Deadline: 2022-Sep-30 (anywhere on Earth = AoE) 23:59 [JST: 2022-Oct-1, 20:59]


Attention:

If you need to arrange travel, hotel booking, or a visa application early, we can send you the notification as soon as possible (around Sep 7, 2022). [You can still update your paper after acceptance.]

Please contact us: cainlp2021 at gmail.com, xianchaow at nvidia.com, or wuxianchao at gmail.com.

If you need more time to prepare your paper, you may submit any time before Sep-30 (AoE), and we will notify you of the result within one week.


The second CAI+CAI workshop will be co-located with COLING 2022.

The first CAI+CAI workshop can be found HERE.


For paper submission and any workshop-related issues, please contact us: cainlp2021 at gmail.com, xianchaow at nvidia.com, or wuxianchao at gmail.com

Creative AI, the training of generative deep neural networks for NLP (such as poems, haiku, and stories), images (such as painting and animation), and speech (such as classical and popular music generation, and singing), has achieved impressive milestones in recent years, thanks to deep networks such as attentive encoder-decoder architectures (Transformers), generative-discriminative frameworks (GANs), and self-supervised encoders (VAEs). Meanwhile, conversational AI products, which support text- and speech-based multi-modal communication between chatbots and human beings, have reached millions of users in Japan and globally. To build strong personas for conversational AI products, chatbots are being enhanced to interactively write poems, create songs, sing, and even tell stories through multi-turn communication. Furthermore, QA-style and IR-oriented chatbots, both in general domains and in vertical domains such as finance, healthcare, and even purely emotional chatting, also require generative, creative, and explainable AI models to support multi-modal, multi-turn interactions with human beings. In this workshop, we aim to collect, share, and discuss state-of-the-art research on creative AI and conversational AI, empowered by large-scale open datasets, open-source architectures, and distributed GPU platforms.

Most importantly, by combining creative AI with conversational AI, we aim to bring AI to assist under-represented groups in learning and communicating with the world, for example through interactive music therapy, guided children's painting, emotional care for social phobia, and cognitive monitoring for the elderly.


Note that the images come from the following references:

  1. https://chatbotslife.com/conversational-ai-code-no-code-53b33e5eb3ea

  2. https://forums.fast.ai/t/cycle-gan-art-completing-visual-loop/15279

  3. https://wiki.pathmind.com/generative-adversarial-network-gan


Music Generation

Music generation is the task of automatically generating music, guided by textual inputs or other types of multi-modal hints.

Objective

– Targets: melody, polyphony, accompaniment, or counterpoint.

– Usage: to be performed by humans (in the case of a musical score) or by a machine (in the case of an audio file).

Architecture

– feedforward networks, recurrent networks, autoencoders, generative adversarial networks, Transformers, and their variants.

Challenges

– variability, interactivity, and creativity
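As a toy illustration of the autoregressive generation loop that the architectures above share (each step conditions on what was generated so far), here is a minimal melody sampler. It is a sketch only: the pentatonic pitch vocabulary and transition probabilities are made up for the example, and a real system would replace the hand-written table with a trained network (RNN, Transformer, etc.).

```python
import random

# Toy transition table over a C-major pentatonic vocabulary.
# The probabilities are invented for illustration; a trained model
# would predict this next-note distribution instead.
TRANSITIONS = {
    "C": [("D", 0.4), ("E", 0.4), ("G", 0.2)],
    "D": [("C", 0.3), ("E", 0.5), ("A", 0.2)],
    "E": [("D", 0.3), ("G", 0.5), ("C", 0.2)],
    "G": [("E", 0.4), ("A", 0.4), ("C", 0.2)],
    "A": [("G", 0.6), ("C", 0.4)],
}

def generate_melody(start="C", length=8, seed=0):
    """Autoregressively sample notes: each new note conditions on the previous one."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length:
        notes, weights = zip(*TRANSITIONS[melody[-1]])
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(generate_melody())
```

Swapping the first-order table for a network conditioned on the full history is exactly what the Transformer-based systems in the references below do, which is also where the challenges of variability and creativity arise.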

References:

  1. Deep Learning Techniques for Music Generation--A Survey. https://arxiv.org/pdf/1709.01620.pdf

  2. Music Transformer: Generating Music with Long-Term Structure. https://magenta.tensorflow.org/music-transformer

  3. Transformer-XL Based Music Generation with Multiple Sequences of Time-valued Notes. https://arxiv.org/pdf/2007.07244.pdf

  4. Papers with Code: https://paperswithcode.com/task/music-generation

  5. https://magenta.tensorflow.org/gansynth

AI Painting

References:

  1. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms. https://arxiv.org/pdf/1706.07068.pdf

  2. Progressive Growing of GANs for Improved Quality, Stability, and Variation. https://arxiv.org/pdf/1710.10196.pdf

  3. Self-Attention Generative Adversarial Networks. https://arxiv.org/pdf/1805.08318.pdf

  4. https://www.fastcompany.com/90376689/what-you-look-like-as-an-renaissance-painting-according-to-ai

  5. https://topten.ai/ai-painting-generators/

Submission Guidance


We prefer papers of 4 to 8 pages (including references). However, you may make your paper longer (or shorter) as long as you consider the content complete.

For example:

  1. short papers: 4 pages of content plus unlimited pages of references;

  2. long papers: 8 pages of content plus unlimited pages of references.



Please submit your paper to:

https://www.softconf.com/coling2022/Cr_AI_Conv_AI/


Deadline:

2022-Sep-30 (anywhere on Earth) 23:59

[JST: 2022-Oct-1, 20:59]


The second CAI+CAI workshop will be co-located with COLING 2022.


The first CAI+CAI workshop can be found HERE.


For paper submission and any workshop-related issues, please contact us: cainlp2021 at gmail.com, xianchaow at nvidia.com, or wuxianchao at gmail.com



Schedule:

2022.May.01 First Call for Papers

2022.Sep.30 (anywhere on Earth; extended from Aug. 30) Deadline for long/short papers

2022.Oct.7 Result Notification

2022.Oct.10 Schedule Open.



We welcome your submissions to this workshop in, but not limited to, the following fields:

  1. Conversational AI, chatbots, dialog systems

    • single-turn/multi-turn dialog

    • question answering

    • dialog management

    • task-oriented conversational AI

    • multi-modal conversational AI (speech, text, and video)

  2. Creative AI for NLP, image, speech, and related fields

    • music generation

    • AI painting

    • video parsing/understanding

    • multi-modal creative AI

    • text generation (haiku, novels, narratives, etc.)

  3. All types of scenarios combining creative AI and conversational AI

    • interactive AI creation

    • deep learning algorithms/frameworks for CAI+CAI

    • information retrieval (example HERE) with conversations

    • information retrieval over AI-created content

  4. Any other AI-related topic that you would like to share

Accepted Papers

We are delighted to announce that the following papers have been accepted:

BibTeX, PDF, and related information can be found at:

https://aclanthology.org/events/coling-2022/#2022-cai-1


1. Prompting for a conversation: How to control a dialog model?
Josef Valvoda, Yimai Fang and David Vandyke


2. Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio
Allen Roush, Sanjay Basu, Akshay Moorthy and Dmitry Dubovoy


3. An Emotion-based Korean Multimodal Empathetic Dialogue System
Minyoung Jung, Yeongbeom Lim, San Kim, Jin Yea Jang, Saim Shin and Ki-Hoon Lee


4. BETOLD: A Task-Oriented Dialog Dataset for Breakdown Detection
Silvia Terragni, Bruna Guedes, Andre Manso, Modestas Filipavicius, Nghia Khau and Roland Mathis


5. Insurance Question Answering via Single-turn Dialogue Modeling
Seon-Ok Na, Young-Min Kim and Seung-Hwan Cho


6. Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling
Zhuo Gong, Daisuke Saito, Sheng Li, Hisashi Kawai and Nobuaki Minematsu


7. Semantic Content Prediction for Generating Interviewing Dialogues to Elicit Users' Food Preferences
Jie Zeng, Tatsuya Sakato and Yukiko Nakano


8. Creative Painting with Latent Diffusion Models
Xianchao Wu


9. Learning to Evaluate Humor in Memes Based on the Incongruity Theory
Kohtaro Tanaka, Hiroaki Yamane, Yusuke Mori, Yusuke Mukuta and Tatsuya Harada

Schedule

16th Oct (Sunday, UTC+9 Seoul/Tokyo Time Zone)

Morning [half-day workshop, virtual only; Zoom URL to be shared with authors/presenters of the papers]


9:00 – 12:45

9:00 – 9:05 Opening

Prompting for a conversation: How to control a dialog model?

Josef Valvoda, Yimai Fang and David Vandyke.

9:05 – 9:25 (15-minute presentation + 5-minute QA)

Learning to Evaluate Humor in Memes Based on the Incongruity Theory

Kohtaro Tanaka, Hiroaki Yamane, Yusuke Mori, Yusuke Mukuta and Tatsuya Harada

9:25 – 9:45 (15-minute presentation + 5-minute QA)

An Emotion-based Korean Multimodal Empathetic Dialogue System

Minyoung Jung, Yeongbeom Lim, San Kim, Jin Yea Jang, Saim Shin and Ki-Hoon Lee

9:45 – 10:05 (15-minute presentation + 5-minute QA)



BETOLD: A Task-Oriented Dialog Dataset for Breakdown Detection

Silvia Terragni, Bruna Guedes, Andre Manso, Modestas Filipavicius, Nghia Khau and Roland Mathis

10:05 – 10:25 (15-minute presentation + 5-minute QA)



10:25 – 11:00 coffee break and online free communication [35 minutes]



Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling

Zhuo Gong, Daisuke Saito, Sheng Li, Hisashi Kawai and Nobuaki Minematsu

11:00 – 11:20 (15-minute presentation + 5-minute QA)

Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio

Allen Roush, Sanjay Basu, Akshay Moorthy and Dmitry Dubovoy

11:20 – 11:40 (15-minute presentation + 5-minute QA)

Insurance Question Answering via Single-turn Dialogue Modeling

Seon-Ok Na, Young-Min Kim and Seung-Hwan Cho

11:40 – 12:00 (15-minute presentation + 5-minute QA)

Semantic Content Prediction for Generating Interviewing Dialogues to Elicit Users' Food Preferences

Jie Zeng, Tatsuya Sakato and Yukiko Nakano

12:00 – 12:20 (15-minute presentation + 5-minute QA)

Creative Painting with Latent Diffusion Models

Xianchao Wu

12:20 – 12:40 (15-minute presentation + 5-minute QA)

12:40 – 12:45 Ending



Organizers

Xianchao Wu (He, NVIDIA)

Gang Niu (He, RIKEN)

Lin Gu (He, RIKEN)

Peiying Ruan (She, NVIDIA)

Haitao Yu (He, University of Tsukuba)

Xuemeng (Maggie) Zhang (She, NVIDIA)

Yi Dong (He, NVIDIA)

Hao Gong (He, NVIDIA)


PC Members

Yi Zhao (She, Kwai)

Yulan Yan (She, Databricks)

Sheng Li (He, NICT)

Chunmeng Ma (He, Fujitsu)


For paper submission and any workshop-related issues, please contact us:

cainlp2021 at gmail.com


Note: Please directly submit your PDF file to cainlp2021 at gmail.com!

The recommended format template (same as COLING 2022) is here:

https://coling2022.org/Cpapers

https://coling2022.org/Submission


Style Files and Formatting


The *ACL template MUST be used for your submission(s); otherwise, your submission(s) will be rejected.

  • Both long and short papers must follow the paper formatting guidelines of *ACL conferences, using the supplied style files.

  • Style files are directly available (LaTeX, Word).

  • The Overleaf template is also available here

  • It is highly recommended that appendices (material that can be read but is not critical) be included in the same manuscript file, after the references of the main paper. Under this standard guideline, both the main text and appendices appear in a single manuscript file rather than being maintained separately. However, authors may place an appendix in a separate supplementary file when it is too long to include in the main PDF file.

  • Please do not modify these style files, or use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.

Virtual Join Guidance


Step 1: [check the screen shot below as well]

https://coling2022.org/index

click “Underline” and then log in (you will need the email address used during registration; a password reset is frequently required)

Step 2: [check the screen shot below as well]

https://www.underline.io/events/360/reception

you will be able to see the schedule and related information.

Step 3: find our workshop: [check the screen shot below as well]

find the date "16 Oct",

then find our workshop (Workshop 10) and click it.


Step 4:

https://www.underline.io/events/360/sessions?eventSessionId=13196

click the "Join Zoom Room" button


Step 5:

When you click the "Join Zoom Room" button, you will enter:

https://us06web.zoom.us/j/[you will know~~~]

[We are not sure whether this URL will change, so please enter the Zoom room by clicking the "Join Zoom Room" button on our workshop day, 16th Oct, 9:00-12:45 UTC+9.]

--

In addition, our COLING proceedings have been successfully ingested into the ACL Anthology and are now available at:

https://aclanthology.org/events/coling-2022/

And for our workshop:

https://aclanthology.org/events/coling-2022/#2022-cai-1


Thank you very much again for preparing all the materials in a timely manner despite the urgent request!


Since the papers are now published in the ACL Anthology, making corrections is more difficult.

If a further revision is required, the Anthology directors tell us that authors must file a regular correction request on the ACL GitHub, following this guideline:

https://aclanthology.org/info/corrections/


We sincerely hope that you enjoy the conference and our workshop.

