石井亮,Xutong Ren,Michal Muszynski,Louis-Philippe Morency,"マルチモーダル特徴量を用いたターン管理の意欲と実際のターン交替の同時予測",信学技報, vol. 121, no. 143, HCS2021-22, pp. 28-33, 2021.
大串旭, 大西俊輝, 田原陽平, 石井亮, 深山篤, 中村高雄, 宮田章裕: 言語特徴に着目した褒め方の上手さの推定モデルの検討. 情報処理学会グループウェアとネットワークサービスワークショップ2021論文集, Vol.2021, pp.1–8 (2021).
二瓶 芙巳雄,中野 有紀子,東中 竜一郎,石井 亮,"対象物のイメージに基づく図像的ジェスチャの形状推定",情報科学技術フォーラム講演論文集 (FIT),19巻,第3分冊,pp.39-42,2020.[FIT論文賞]
山内 愛里沙,大西 俊輝,武藤 佑太,石井 亮,青野 裕司,宮田 章裕 ,“音声および視線・表情・頭部運動に基づく上手い褒め方の評価システムの検討”,マルチメディア、分散、協調とモバイル(DICOMO2020)シンポジウム,2020.[DICOMO優秀論文賞]
最優秀インタラクティブ発表賞,電子情報通信学会 HCGシンポジウム2024,“ダンス構成を条件指定可能な音楽に合わせたダンス生成技術”,2024
優秀論文賞,情報処理学会 マルチメディア, 分散, 協調とモバイル(DICOMO)シンポジウム2024,“話者の個人性を考慮して推定した対話相手からの印象変化に基づく発話選択手法の性能評価”,2024
優秀事業貢献賞,日本電信電話株式会社 人間情報研究所 所長表彰,“身体モーション生成技術の実用化によるNTTデジタルヒューマン事業への貢献”,2024
優秀論文賞,情報処理学会 マルチメディア, 分散, 協調とモバイル(DICOMO)シンポジウム2023,“マルチモーダル情報に基づく多様な相槌の予測の検討”,2023
令和5年度科学技術分野の文部科学大臣表彰 若手科学者賞,“石井亮:言語/非言語コミュニケーション統合モデルに関する研究”,2023
特別賞,日本電信電話株式会社 サービスイノベーション総合研究所 所長表彰,“ヒトデジタルツインの実現に向けた音声・モーション生成技術のプロモーション活動”,2023
ベストペーパー賞,情報処理学会 グループウェアとネットワークサービスワークショップ (GNWS) 2022,“対面・遠隔対話からの称賛行為検出の基礎検討”,2022
優秀論文賞,情報処理学会 マルチメディア, 分散, 協調とモバイル (DICOMO) シンポジウム2022,“マルチモーダル情報に基づく褒める行為の判定の基礎検討”,2022
HCS研究会賞,電子情報通信学会 ヒューマンコミュニケーション基礎 (HCS) 研究会,“身体動作と個人識別との関係の分析:個性を持ったエージェントの実現に向けて”,2022
優秀インタラクティブ賞,日本データベース学会 DEIMフォーラム2022,“テキストの文脈生成に基づくユーザ埋め込み表現学習”,2022
ヒューマンコミュニケーション賞 (HC賞),電子情報通信学会 ヒューマンコミュニケーショングループ (HCG), “マルチモーダル特徴量を用いたターン管理の意欲と実際のターン交替の同時予測”,2021
ベストペーパー賞,情報処理学会 グループウェアとネットワークサービス (GN) ワークショップ 2021,“言語特徴に着目した褒め方の上手さの推定モデルの検討”,2021
特別賞,日本電信電話株式会社 メディアインテリジェンス研究所 所長表彰,“情報処理学会論文誌 特選論文の受賞および指導学生の国内表彰”,2021
特選論文,情報処理学会 論文誌ジャーナル,"話者継続・交替時における対話行為と視線行動に基づく共感スキルの推定",2021
FIT論文賞,電子情報通信学会/情報処理学会 情報科学技術フォーラム (FIT)、"対象物のイメージに基づく図像的ジェスチャの形状推定",2020
優秀論文賞,情報処理学会 マルチメディア, 分散, 協調とモバイル (DICOMO) シンポジウム2020,“音声および視線・表情・頭部運動に基づく上手い褒め方の評価システムの検討”,2020
Collaborate賞,日本電信電話株式会社 メディアインテリジェンス研究所 所長表彰,“なりきりAI 展示・対話キャラクタ活用”,2019
報道賞,日本電信電話株式会社 メディアインテリジェンス研究所 所長表彰,“アンドロイドtotto のメディア掲載”,2019
特集テーマセッション賞,電子情報通信学会 HCGシンポジウム2018,“話者継続・交替時の視線行動と対話行為に基づく共感スキルの推定”,2018
特別賞,日本電信電話株式会社 サービスイノベーション総合研究所 所長表彰,“アンドロイドtotto へのマルチモーダル音声対話技術の投入”,2018
Collaborate賞,日本電信電話株式会社 メディアインテリジェンス研究所 所長表彰,“夏季実習において17 名の学生を受け入れ環境を整備”,2018
野口賞(第一位),情報処理学会 マルチメディア, 分散, 協調とモバイルシンポジウム(DICOMO2018) 仙台応用情報学研究振興財団,“発話言語に基づく身体モーションの自動生成”,2018
優秀インタラクティブセッション賞,電子情報通信学会 HCGシンポジウム2015,“複数人対話における視線と呼吸動作に基づく次話者予測”,2015
優秀インタラクティブセッション賞,電子情報通信学会 HCGシンポジウム2015,“集団的一人称視点映像解析に基づく複数人対話の自動視線分析”,2015
特集テーマセッション賞,電子情報通信学会 HCGシンポジウム2015,“集団的一人称視点映像解析に基づく複数人対話の自動視線分析”,2015
ヒューマンコミュニケーション賞 (HC賞),電子情報通信学会 ヒューマンコミュニケーショングループ (HCG), “複数人対話での話者交替に関する呼吸動作の分析 ~次話者と発話開始タイミングの予測モデルの構築に向けて~”,2014
Outstanding Paper Award, The 16th ACM International Conference on Multimodal Interaction (ICMI 2014), "Analysis of Respiration for Prediction of Who Will Be Next Speaker and When in Multi-Party Meetings", 2014
FIT奨励賞,電子情報通信学会/情報処理学会 第13回情報科学技術フォーラム (FIT), “MIOSS:3次元空間を重畳する鏡インタフェース”,2014
研究開発奨励賞,日本電信電話株式会社 サイバースペース研究所 所長表彰,“運動視差映像表現技術の研究”,2011
優秀展示賞(副社長表彰),日本電信電話株式会社 NTT R&D フォーラム2011,“MoPaCo:運動視差映像コミュニケーションシステム”,2011
Toshiki Onishi, Asahi Ogushi, Ryo Ishii, and Akihiro Miyata, "Detecting Praising Behaviors Based on Multimodal Information", The IEICE Transactions on Information and Systems, Vol. XX, pp. XX–XX, 2025.
Toshiki Onishi, Asahi Ogushi, Shunichi Kinoshita, Ryo Ishii, Atsushi Fukayama, and Akihiro Miyata, "Detecting Praising Behavior Based on Multimodal Information in Remote Dialogue", Journal of Information Processing, Vol.33, pp. 31–39, 2025.
Asahi Ogushi, Toshiki Onishi, Ryo Ishii, Atsushi Fukayama, and Akihiro Miyata, "Predicting Praising Skills Using Multimodal Information in Remote Dialogue", Transactions of the Virtual Reality Society of Japan, Vol.29, No.3, pp.127–138, 2024.
Koya Ito, Yoko Ishii, Ryo Ishii, Shin-ichiro Eitoku, Kazuhiro Otsuka, "Exploring Multimodal Nonverbal Functional Features for Predicting the Subjective Impressions of Interlocutors", IEEE Access, vol. 12, pp. 96769-96782, 2024.
高山 千尋, 後藤 充裕, 永徳 真一郎, 石井 亮, 能登 肇, 小澤 史朗, 中村 高雄, “身体動作と個人識別との関係の分析―個性を持ったエージェントの実現に向けて”, 情報処理学会論文誌, Vol. 65, No. 1, pp. 177-185, 2024.
Yukiko I. Nakano, Fumio Nihei, Ryo Ishii, Ryuichiro Higashinaka, "Selecting Iconic Gesture Forms Based on Typical Entity Images", Journal of Information Processing vol. 32, pp. 196-205, 2024.
Takato Hayashi, Candy Olivia Mawalim, Ryo Ishii, Akira Morikawa, Atsushi Fukayama, Takao Nakamura, Shogo Okada, "A Ranking Model for Evaluation of Conversation Partners Based on Rapport Levels", IEEE Access, 2023.
大土 隼平, 三好 一輝, 石井 陽子, 石井 亮, 永徳 真一郎, 大塚 和弘, "頭部運動機能特徴に基づく対話者の主観的印象の予測・説明モデルの構築", 人工知能学会論文誌, 38 巻 3 号 , H-M74_1-13, 2023.
Atsushi Ito, Yukiko I. Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, "Estimating and Visualizing Persuasiveness of Participants in Group Discussions", Journal of Information Processing, Vol. 31, pp. 34-44, 2023.
Ryo Ishii, Xutong Ren, Michal Muszynski, Louis-Philippe Morency, "Trimodal prediction of speaking and listening willingness to help improve turn-changing modeling", Frontiers in Psychology, 13:774547, 2022.
Toshiki Onishi, Arisa Yamauchi, Asahi Ogushi, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata, "Modeling Japanese Praising Behavior by Analyzing Audio and Visual Behaviors", Frontiers in Computer Science, 4, 2022.
大西 俊輝, 山内 愛里沙, 大串 旭, 石井 亮, 青野 裕司, 宮田 章裕, "褒める行為における頭部・顔部の振舞いの分析", 情報処理学会論文誌, Vol. 62, No. 9, pp.1620-1628, 2021.
中野 有紀子, 大山 真央, 二瓶 芙巳雄, 東中 竜一郎, 石井 亮, "性格特性を表現するエージェントジェスチャの生成", ヒューマンインタフェース学会論文誌, Vol. 23, No. 2, pp. 153-164, 2021.
石井 亮, 大塚 和弘, 熊野 史朗, 東中 竜一郎, 青野 裕司, "話者継続・交替時における対話行為と視線行動に基づく共感スキルの推定", 情報処理学会論文誌, Vol.62, No.1, pp. 100-114, 2021. [特選論文]
Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono, "Methods of Efficiently Constructing Text-dialogue-agent System using Existing Anime Character", Journal of Information Processing, Vol.62, No.1, 2021.
Ryo Ishii, Kazuhiro Otsuka, Ryuichiro Higashinaka, Junji Tomita, "Prediction of Who Will Be Next Speaker and When Using Mouth-Opening Pattern in Multi-Party Conversation", Multimodal Technologies and Interaction, 3(4), 70, 2019.
松元 崇裕, 後藤 充裕, 石井 亮, 渡部 智樹, 山田 智広, 今井 倫太, “複数ロボットとの位置関係がユーザの対話負荷に与える影響”, 情報処理学会論文誌, vol. 60, No.2, pp. 1-14, 2019.
石井 亮, 熊野 史朗, 大塚 和弘, “話者継続・交替時における参与役割に応じた視線行動に基づく共感スキルの推定”, ヒューマンインタフェース学会論文誌, Vol.20, No.4, pp.447-456, 2018.
Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, Junji Yamato, “Collective First-Person Vision for Automatic Gaze Analysis in Multiparty Conversations”, IEEE Transactions on Multimedia, Vol. 19, No. 1, pp. 107–122, 2017.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Using Respiration of Who Will be the Next Speaker and When in Multiparty Meetings", The ACM Transactions on Interactive Intelligent Systems (TiiS), Article No. 20, 2016.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Prediction of Who Will be Next Speaker and When using Gaze Behavior in Multi-Party Meetings", The ACM Transactions on Interactive Intelligent Systems (TiiS), Vol. 6 Issue 1, Article No. 4, 2016.
石井 亮, 大塚 和弘, 熊野 史朗, 大和 淳司, "複数人対話における頭部運動に基づく次話者の予測", 情報処理学会論文誌, Vol.56, No.4, pp.1116-1127, 2016.
石井 亮,小澤 史朗,小島 明,大塚 和弘,林 佑樹,中野 有紀子,"3次元空間を重畳する鏡インタフェースMIOSSの提案と評価",ヒューマンインタフェース学会論文誌,Vol.17,No.3,2015.
石井 亮,大塚 和弘,熊野 史朗,松田 昌史,大和 淳司,"複数人対話における注視遷移パターンに基づく次話者と発話開始タイミングの予測",電子情報通信学会論文誌,Vol.J97-A,No.6,pp.453-468,2014.
石井 亮,小澤 史朗,川村 春美,小島 明,中野 有紀子,"窓越しインタフェースMoPaCoによる指示作業への効果検証",電子情報通信学会論文誌,Vol. J96-D,No. 12,pp. 3044-3054,2013.
Ryo Ishii, Yukiko I. Nakano, Toyoaki Nishida, “Gaze Awareness in Conversational Agents: Estimating User's Conversational Engagement using Eye-gaze”, The ACM Transactions on Interactive Intelligent Systems (TiiS), Special issue on interaction with smart objects, Special section on eye gaze and conversation, Vol. 3 Issue 2, 2013.
石井 亮,小澤 史朗,川村 春美,小島 明,中野 有紀子,"映像コミュニケーションにおける窓越しインタフェースMoPaCoによるテレプレゼンスの増強",電子情報通信学会論文誌,Vol. J96-D,No. 1,pp. 110-119,2013.
石井 亮,大古 亮太,中野 有紀子,西田 豊明,"視線と頭部動作に基づくユーザの会話参加態度の推定",情報処理学会論文誌,Vol. 52,No. 12,pp. 3625-3636,2011.
石井 亮,中野 有紀子,"ユーザの視線行動に基づく会話参加態度の推定 -会話エージェントにおける適応的な会話制御に向けて",情報処理学会論文誌,Vol. 49,No. 1,pp. 3835-3846,2008.
石井 亮,宮島 俊光,藤田 欣也,"アバタ音声チャットシステムにおける会話促進のための注視制御",ヒューマンインタフェース学会論文誌,Vol. 10,No. 1,2008.
Sadahiro Yoshikawa, Ryo Ishii, Atsushi Fukayama, Shogo Okada, "Is Corpus Truth for Human Perception?: Quality Assessment of Voice Response Timing in Conversational Corpus through Timing Replacement", Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2024.
Ryo Ishii, Shinichiro Eitoku, Shohei Matsuo, Motohiro Makiguchi, Ayami Hoshi, Louis-philippe Morency, "Let's Dance Together! AI Dancers Can Dance to Your Favorite Music and Style", Companion Proceedings of the 26th International Conference on Multimodal Interaction (ICMI), pp.88-90, 2024.
Kalin Stefanov, Yukiko I. Nakano, Chisa Kobayashi, Ibuki Hoshina, Tatsuya Sakato, Fumio Nihei, Chihiro Takayama, Ryo Ishii, Masatsugu Tsujii, "Participation Role-Driven Engagement Estimation of ASD Individuals in Neurodiverse Group Discussions", Proceedings of the 26th International Conference on Multimodal Interaction (ICMI), pp.556-564, 2024.
Gaoussou Youssouf Kebe, Mehmet Deniz Birlikci, Auriane Boudin, Ryo Ishii, Jeffrey M. Girard and Louis-Philippe Morency, "GeSTICS: A Multimodal Corpus for Studying Gesture Synthesis in Two-party Interactions with Contextualized Speech", Proc. the 24th ACM International Conference on Intelligent Virtual Agents (IVA), 2024.
Toshiki Onishi, Asahi Ogushi, Ryo Ishii, Atsushi Fukayama, Akihiro Miyata, "Prediction of Praising Skills Based on Multimodal Information", Proc. 12th International Conference on Affective Computing and Intelligent Interaction (ACII), pp.xx–xx, 2024.
Kenta Hama, Atsushi Otsuka, Ryo Ishii, "Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model", International Conference on Human-Computer Interaction (HCII), pp. 338-346, 2024.
Takato Hayashi, Ryusei Kimura, Ryo Ishii, Fumio Nihei, Atsushi Fukayama, Shogo Okada, "Rapport Prediction Using Pairwise Learning in Dyadic Conversations Among Strangers and Among Friends", International Conference on Human-Computer Interaction (HCII), pp. 17-28, 2024.
Shumpei Otsuchi, Koya Ito, Yoko Ishii, Ryo Ishii, Shinichirou Eitoku, Kazuhiro Otsuka, "Identifying Interlocutors' Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features", ACM International Conference on Multimodal Interaction (ICMI), pp. 336-344, 2023.
Chaitanya Ahuja, Pratik Joshi, Ryo Ishii, Louis-Philippe Morency, "Continual Learning for Personalized Co-Speech Gesture Generation", Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 20893-20903, 2023.
Toshiki Onishi, Naoki Azuma, Shunichi Kinoshita, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, and Akihiro Miyata, "Prediction of Various Backchannel Utterances Based on Multimodal Information", Proc. the 23rd ACM International Conference on Intelligent Virtual Agents (IVA), Article No. 47, pp. 1-4, 2023.
Ryo Ishii, Akira Morikawa, Shinichiro Eitoku, Atsushi Fukayama, Takao Nakamura, "How far ahead can model predict nonverbal behavior from speech and text?", Proc. the 23rd ACM International Conference on Intelligent Virtual Agents (IVA), Article No. 39, pp. 1-3, 2023.
Shunichi Kinoshita, Toshiki Onishi, Naoki Azuma, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, and Akihiro Miyata, "A Study of Prediction of Listener’s Comprehension Based on Multimodal Information", Proc. the 23rd ACM International Conference on Intelligent Virtual Agents (IVA), Article No. 30, pp. 1-4, 2023.
Ryo Ishii, Fumio Nihei, Yoko Ishii, Atsushi Otsuka, Kazuya Matsuo, Narichika Nomoto, Atsushi Fukayama, Takao Nakamura, "Prediction of Love-Like Scores After Speed Dating Based on Pre-obtainable Personal Characteristic Information", IFIP Conference on Human-Computer Interaction, pp. 551-556, 2023.
Atsushi Otsuka, Kenta Hama, Narichika Nomoto, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, "Learning User Embeddings with Generating Context of Posted Social Network Service Texts", International Conference on Human-Computer Interaction, pp.106-115, 2023.
Fumio Nihei, Ryo Ishii, Yukiko I Nakano, Atsushi Fukayama, Takao Nakamura, "Whether Contribution of Features Differ Between Video-Mediated and In-Person Meetings in Important Utterance Estimation", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.1-5, 2023.
Akira Morikawa, Ryo Ishii, Hajime Noto, Atsushi Fukayama, Takao Nakamura, "Determining most suitable listener backchannel type for speaker's utterance", ACM International Conference on Intelligent Virtual Agents (IVA), 2022.
Toshiki Onishi, Asahi Ogushi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata, "A Comparison of Praising Skills in Face-to-Face and Remote Dialogues", Language Resources and Evaluation Conference (LREC), pp. 5805-5812, 2022.
Atsushi Ito, Yukiko I Nakano, Fumio Nihei, Tatsuya Sakato, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, "Predicting Persuasiveness of Participants in Multiparty Conversations", ACM International Conference on Intelligent User Interfaces (IUI), pp. 85-88, 2022.
Fumio Nihei, Ryo Ishii, Yukiko I Nakano, Kyosuke Nishida, Ryo Masumura, Atsushi Fukayama, Takao Nakamura, "Dialogue Acts Aided Important Utterance Detection Based on Multiparty and Multimodal Information", INTERSPEECH, pp. 1086-1090, 2022.
Asahi Ogushi, Toshiki Onishi, Yohei Tahara, Ryo Ishii, Atsushi Fukayama, Takao Nakamura, Akihiro Miyata, "Analysis of Praising Skills Focusing on Utterance Contents", INTERSPEECH, pp. 2743-2747, 2022.
Yoshimasa Masuda, Ryo Ishii, Donald Shepard, Rashimi Jain, Osamu Nakamura, Tetsuya Toma, "Vision Paper for Enabling HCI-AI Digital Healthcare Platform Using AIDAF in Open Healthcare Platform 2030", Innovation in Medicine and Healthcare. Smart Innovation, Systems and Technologies, vol 308. Springer, 2022.
Chihiro Takayama, Mitsuhiro Goto, Shinichirou Eitoku, Ryo Ishii, Hajime Noto, Shiro Ozawa, Takao Nakamura, "How People Distinguish Individuals from their Movements: Toward the Realization of Personalized Agents", International Conference on Human-Agent Interaction (HAI), pp. 66–74, 2021.
Ryo Ishii*, Xutong Ren*, Michal Muszynski*, Louis-Philippe Morency, "Multimodal and Multitask Approach to Listener's Backchannel Prediction: Can Prediction of Turn-changing and Turn-management Willingness Improve Backchannel Modeling?", ACM International Conference on Intelligent Virtual Agents (IVA), pp. 131-138, 2021. (* denotes joint first-authors)
Paul Pu Liang*, Terrance Liu*, Anna Cai, Michal Muszynski, Ryo Ishii, Nick Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency, "Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data", Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP) (Volume 1: Long Papers), pp. 4170-4187, 2021. (* denotes joint first-authors)
Ryo Ishii, Shiro Kumano, Ryuichiro Higashinaka, Shiro Ozawa, Tetsuya Kinebuchi, "Estimation of Empathy Skill Level and Personal Traits Using Gaze Behavior and Dialogue Act During Turn-Changing", Proceedings of International Conference on Human-Computer Interaction (HCII), pp. 44-57, 2021.
Terrance Liu, Paul Pu Liang, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nick Allen, Louis-Philippe Morency, "Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study", In Proceedings of the 34th Conference on Neural Information Processing Systems: Machine Learning for Mobile Health workshop, 2020.
Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, Louis-Philippe Morency, "No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures", EMNLP: Findings, pp. 1884-1895, 2020.
Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata, "Analyzing Nonverbal Behaviors along with Praising", Proceedings of ACM International Conference on Multimodal Interaction (ICMI), pp. 609-613, 2020.
Ryo Ishii, Chaitanya Ahuja, Yukiko Nakano, Louis-Philippe Morency, "Impact of Personality on Nonverbal Behavior Generation", Proceedings of ACM International Conference on Intelligent Virtual Agents (IVA), No. 29, pp.1-8, 2020.
Ryo Ishii*, Xutong Ren*, Michal Muszynski, Louis-Philippe Morency, "Can Prediction of Turn-management Willingness Improve Turn-changing Modeling?", Proceedings of ACM International Conference on Intelligent Virtual Agents (IVA), No. 28, pp.1-8, 2020. (* denotes joint first-authors)
Ryo Ishii, Ryuichiro Higashinaka, Koh Mitsuda, Taichi Katayama, Masahiro Mizukami, Junji Tomita, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Yushi Aono, "Methods of Efficiently Constructing Text-Dialogue-Agent System Using Existing Anime Character", Proceedings of International Conference on Human-Computer Interaction (HCII), pp. 328-347, 2020.
Ryo Masumura, Mana Ihori, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Takanobu Oba, Ryuichiro Higashinaka, "Improving speech-based end-of-turn detection via cross-modal representation learning with punctuated text data", Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 1062-1069, 2019.
Fumio Nihei, Yukiko Nakano, Ryuichiro Higashinaka, Ryo Ishii, "Determining Iconic Gesture Forms based on Entity Image Representation", Proceedings of ACM International Conference on Multimodal Interaction (ICMI), pp. 419-425, 2019.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita, "Estimating Interpersonal Reactivity Scores Using Gaze Behavior and Dialogue Act During Turn-Changing", Proceedings of International Conference on Human-Computer Interaction (HCII), pp. 45-53, 2019.
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita, "Automatic Head-Nod Generation Using Utterance Text Considering Personality Traits", 10th International Workshop on Spoken Dialogue Systems (IWSDS), pp. 299-306, 2019.
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita, “Automatic Generation System of Virtual Agent's Motion using Natural Language”, Proceedings of ACM International Conference on Intelligent Virtual Agents (IVA), pp. 357–358, 2018.
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita, “Generating Body Motions using Spoken Language in Dialogue”, Proceedings of ACM International Conference on Intelligent Virtual Agents (IVA), pp. 87–92, 2018.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Ryuichiro Higashinaka, Junji Tomita, “Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level”, Proceedings of ACM International Conference on Multimodal Interaction (ICMI), pp. 31–39, 2018.
Ryo Ishii, Taichi Katayama, Ryuichiro Higashinaka, Junji Tomita, “Automatic Generation of Head Nods using Utterance Texts”, Proceedings of IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1143–1149, 2018.
Ryo Ishii, Ryuichiro Higashinaka, Kyosuke Nishida, Taichi Katayama, Nozomi Kobayashi, Junji Tomita, “Automatically Generating Head Nods with Linguistic Information”, Proceedings of International Conference on Human-Computer Interaction (HCII), pp. 383–391, 2018.
Takahiro Matsumoto, Mitsuhiro Goto, Ryo Ishii, Tomoki Watanabe, Tomohiro Yamada, Michita Imai, “Where Should Robots Talk?: Spatial Arrangement Study from a Participant Workload Perspective”, Proceedings of ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 270–278, 2018.
Ryo Ishii, Ryuichiro Higashinaka, Junji Tomita, “Predicting Nods by using Dialogue Acts in Dialogue”, Proceedings of Eleventh International Conference on Language Resources and Evaluation (LREC), pp. 2940-2944, 2018.
Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono, “Neural Dialogue Context Online End-of-Turn Detection”, Proceedings of Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL), pp. 224–228, 2018.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka, “Analyzing gaze behavior during turn-taking for estimating empathy skill level”, Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI), pp. 365–373, 2017.
Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka, “Comparing empathy perceived by interlocutors in multiparty conversation and external observers”, Proceedings of Affective Computing and Intelligent Interaction (ACII), pp. 50–57, 2017.
Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka, “Computational model of idiosyncratic perception of others”, Proceedings of Affective Computing and Intelligent Interaction (ACII), pp. 42–49, 2017.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka, “Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings”, Proceedings of the 5th International Conference on Human Agent Interaction (HAI), pp. 181–187, 2017.
Ryo Masumura, Taichi Asami, Hirokazu Masataki, Ryo Ishii, Ryuichiro Higashinaka, “Online end-of-turn detection from speech based on stacked time-asynchronous sequential networks”, Proceedings of INTERSPEECH, pp. 1661–1665, 2017.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka, “Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings”, Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI), pp. 209–216, 2016.
Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka, "Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker Based in Multi-Party Meetings", International Conference on Multimodal Interaction (ICMI), pp. 99–106, 2015.
Ryo Ishii, Shiro Ozawa, Akira Kojima, Kazuhiro Otsuka, Yuki Hayashi, Yukiko I. Nakano, "Design and Evaluation of Mirror Interface MIOSS to Overlay Remote 3D Spaces", IFIP International Conference on Human-Computer Interaction (INTERACT), pp. 319–326, 2015.
Shiro Kumano, Kazuhiro Otsuka, Ryo Ishii, and Junji Yamato, "Automatic Gaze Analysis in Multiparty Conversations based on Collective First-Person Vision", International Workshop on Emotion Representation, Analysis and Synthesis in Continuous Time and Space (EmoSPACE), Vol. 5, pp. 1–8, 2015.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Predicting Next Speaker Based on Head Movement in Multi-Party Meetings", International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2319–2323, 2015.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Analysis of Timing Structure of Eye Contact in Turn-changing ", Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality (Gaze-In), pp. 15–20, 2014.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings", International Conference on Multimodal Interaction (ICMI), pp. 18–25, 2014.
Ryo Ishii, Shiro Ozawa, Harumi Kawamura, Akira Kojima, Yukiko I. Nakano, Kazuhiro Otsuka, "Evaluation of Window Interface in Remote Collaboration involving Pointing Gesture", International Conference on Advances in Computer-Human Interactions (ACHI), 2014.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Analysis and Modeling of Next Speaking Start Timing in Multi-party Meetings", International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 694–698, 2014.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato, "Predicting Next Speaker and Timing from Gaze Transition Patterns in Multi-Party Meetings", International Conference on Multimodal Interaction (ICMI), pp. 79–86, 2013.
Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato, "MM+Space: n x 4 Degree-of-Freedom Kinetic Display for Recreating Multiparty Conversation Spaces", International Conference on Multimodal Interaction (ICMI), pp. 389–396, 2013.
Shiro Kumano, Kazuhiro Otsuka, Masafumi Matsuda, Ryo Ishii, Junji Yamato, "Using A Probabilistic Topic Model to Link Observers' Perception Tendency to Personality", Proceedings of HUMAINE Assoc. Conf. on Affective Computing and Intelligent Interaction (ACII), 2013.
Ryota Ooko, Ryo Ishii, Yukiko I. Nakano, “Estimating a User’s Conversational Engagement Based on Head Pose Information”, Proceedings of the 11th International Conference on Intelligent Virtual Agents (IVA), pp. 262–268, 2011.
Ryo Ishii, Shiro Ozawa, Harumi Kawamura, Akira Kojima, “MoPaCo: High telepresence video communication system using motion parallax with monocular camera”, Proceedings of IEEE International Workshop on Human-Computer Interaction: Real-time vision aspects of natural user interfaces, pp. 463–464, 2011.
Ryo Ishii, Shiro Ozawa, Takafumi Mukouchi, Norihiko Matsuura, “MoPaCo: Pseudo 3D Video Communication System”, Proceedings of International Conference on Human-Computer Interaction (HCII), vol. 12, pp. 131–140, 2011.
Ryo Ishii, Yuta Shinohara, Yukiko I. Nakano, Toyoaki Nishida, “Combining multiple types of eye-gaze information to predict user's conversational engagement”, Proceedings of Workshop on Eye Gaze on Intelligent Human Machine Interaction (Gaze-in), 2011.
Ryo Ishii, Hajime Noto, Hideaki Takada, Munekazu Date, Norihiko Matsuura, “Precise Distance Expression Using Motion Parallax in Video Communication”, Proceedings of International Conference on 3D Systems and Applications (3DSA), 2010.
Ryo Ishii, Yukiko I. Nakano, “An Empirical Study of Eye-gaze Behaviors: Towards the Estimation of Conversational Engagement in Human-Agent Communication”, Proceedings of the workshop on Eye gaze in intelligent human machine interaction (Gaze-in), pp. 33–40, 2010.
Yukiko I. Nakano, Ryo Ishii, “Estimating User's Engagement from Eye-gaze Behaviors in Human-Agent Conversations”, Proceedings of Intelligent User Interfaces (IUI), pp. 139–148, 2010.
Ryo Ishii, Yukiko I. Nakano, “Estimating User's Conversational Engagement Based on Gaze Behaviors”, Proceedings of International Workshop on Intelligent Virtual Agents (IVA), pp. 200–207, 2008.
Ryo Ishii, Toshimitsu Miyajima, Kinya Fujita, Yukiko I. Nakano, "Avatar’s gaze control to facilitate conversational turn-taking in virtual-space multi-user voice chat system", Proceedings of International Workshop on Intelligent Virtual Agents (IVA), pp. 458-458, 2006.
Ryo Ishii, Ryota Ooko, Yukiko I. Nakano, Toyoaki Nishida, “Eye Gaze in Intelligent Human Computer Interaction”, Springer, pp. 85–110, 2013.
大塚 淳史,野本 済央,石井 亮,深山 篤,“デジタルツインコンピューティング: 1. 人のデジタルツインの実現に向けた研究開発の取り組み”,情報処理学会誌,Vol. 64, No. 11, pp. e1-e6, 2023.
石井 亮, "明日のトップランナー NTT 人間情報研究所 石井亮 特別研究員 音声・言語・身体動作を複合的に扱い対話の仕組みを解明.「マルチモーダルインタラクション」 の研究", 2021.
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Junji Yamato, "Prediction of "Who Will Be Next Speaker and When" in Multi-Party Meetings", NTT Technical Review, Vol. 13, No. 7, 2015.
Frontiers, Research Topic: Multimodal Social Signal Processing and Application, Guest Editor, (2021-present)
The ACM International Conference on Multimodal Interaction (ICMI), Senior Program Committee Member, (2021)
The ACM International Conference on Multimodal Interaction (ICMI), Program Committee Member, (2019, 2020)
The ACM International Conference on Multimodal Interaction (ICMI), Late-breaking Program Committee Member, (2020)
The ACM International Conference on Multimodal Interaction (ICMI), Demo Chair, (2016)
The ACM International Conference on Intelligent Virtual Agents (IVA), Committee Member, (2021)
The ACM International Conference on Intelligent Virtual Agents (IVA), Program Committee Member, (2018)
The International Conference on Human-Computer Interaction (HCII), Session Organizer (Data driven social Machine Learning), (2019-2020)
ACHI Industry/Research Advisory Committee, (2017-present)
ACHI Industry/Research Technical Program Committee, (2016-present)
一般社団法人 電子情報通信学会 論文誌査読委員 (2016年–現在)
一般社団法人 電子情報通信学会 論文誌(ヒューマンコミュニケーション特集)編集委員(2018年–2021年)
一般社団法人 電子情報通信学会 ヒューマンコミュニケーション基礎研究会(HCS研究会)幹事補佐(2016、2021年-現在)
一般社団法人 電子情報通信学会 ヒューマンコミュニケーション基礎研究会(HCS研究会) 専門委員 (2015年、2017年-2021年)
一般社団法人 電子情報通信学会 ヴァーバル・ノンヴァーバル・コミュニケーション研究会(VNV研究会) 専門委員 (2014年-現在)
一般社団法人 人工知能学会 代議員(2021年-現在)
一般社団法人 人工知能学会 全国大会 オーガナイズドセッション(社会的信号処理とAI)オーガナイザー(2017年-2019年)
一般社団法人 人工知能学会 論文誌(対話システム特集号)編集委員(2018年)
一般社団法人 情報処理学会シンポジウム インタラクション, プログラム委員(2017年–現在)
一般社団法人 情報処理学会 マルチメディア、分散、協調とモバイル(DICOMO)シンポジウム 実行委員(2019年)
一般社団法人 情報処理学会 学会誌編集委員(2018年–2020年)
一般財団法人 光産業技術振興協会 光ロードマップ策定委員(2021年-現在)
特定非営利活動法人 ヒューマンインタフェース学会 論文誌(ヒューマンコラボレーション特集号)副編集委員長(2018年)
特定非営利活動法人 ヒューマンインタフェース学会 ヒューマンインタフェースシンポジウム 実行委員(2016年)
The ACM International Conference on Intelligent Virtual Agents (IVA), (2018)
The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), (2018)
The ACM Conference on Human Factors in Computing Systems (CHI), (2017)
The ACM International Conference on Multimodal Interaction (ICMI), (2016, 2019, 2020, 2021)
The International Conference on Advances in Computer-Human Interactions (ACHI), (2016-present)
Others: journal and conference papers of the Institute of Electronics, Information and Communication Engineers (IEICE), the Information Processing Society of Japan (IPSJ), the Japanese Society for Artificial Intelligence (JSAI), the Human Interface Society, and other society journals.