Hikaru Yanagida, Yusuke Ijima, and Naohiro Tawara, ``One's own recorded voice is more intelligible than the voices of others in the presence of competing speech,'' Acoustical Science and Technology, Mar. 2025. (https://doi.org/10.1250/ast.e24.109)
Mizuki Nagano, Yusuke Ijima, and Sadao Hiroya, ``The influence of semantic primitives in an emotion-mediated willingness to buy model from advertising speech,'' Acoustical Science and Technology, vol. 46, no. 1, pp. 87-95, Jan. 2025. (https://doi.org/10.1250/ast.e24.14)
Takanori Ashihara, Marc Delcroix, Yusuke Ijima, and Makio Kashino, ``Unveiling the linguistic capabilities of a self-supervised speech model through cross-lingual benchmark and layer-wise similarity analysis,'' IEEE Access, July 2024. (https://ieeexplore.ieee.org/document/10597571)
Kenichi Fujita, Atsushi Ando, and Yusuke Ijima, ``Speech Rhythm-Based Speaker Embeddings Extraction from Phonemes and Phoneme Duration for Multi-Speaker Speech Synthesis,'' IEICE Trans. on Information and Systems, vol. 107, no. 1, pp. 93-104, Jan. 2024. (https://www.jstage.jst.go.jp/article/transinf/E107.D/1/E107.D_2023EDP7039/_article)
Yukinori Honma, Hiroki Kanagawa, Nozomi Kobayashi, Yusuke Ijima, and Kuniko Saito, ``Expressive text-to-speech synthesis leveraging text dialogue with speaking-style type labels,'' Transactions of the Japanese Society for Artificial Intelligence, vol. 38, no. 3, F-MA7_1-12, May 2023. (https://www.jstage.jst.go.jp/article/tjsai/38/3/38_38-3_F-MA7/_article/-char/ja/)
Mizuki Nagano, Yusuke Ijima, and Sadao Hiroya, ``Perceived Emotional States Mediate Willingness to Buy from Advertising Speech,'' Frontiers in Psychology, Dec. 2022. (https://doi.org/10.3389/fpsyg.2022.1014921)
Yuki Saito, Taiki Nakamura, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi, ``Non-parallel and many-to-many voice conversion using variational autoencoders integrating speech recognition and speaker verification,'' Acoustical Science and Technology, Vol. 42, No. 1, pp. 1-11, Jan. 2021. (https://www.jstage.jst.go.jp/article/ast/42/1/42_E1968/_article/-char/ja)
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``Model architectures to extrapolate emotional expressions in DNN-based text-to-speech,'' Speech Communication, Elsevier, Vol. 126, pp. 35-43, Jan. 2021. (https://www.sciencedirect.com/science/article/abs/pii/S0167639320302958)
Nobukatsu Hojo, Yusuke Ijima, and Hiroaki Sugiyama, ``Sentence-final tone label estimation using dialogue-act information for speech synthesis in spoken dialogue systems,'' Transactions of the Japanese Society for Artificial Intelligence, vol. 35, no. 4, A-J5_1-11, July 2020. (https://www.jstage.jst.go.jp/article/tjsai/35/4/35_A-JA5/_article/-char/ja)
Nobukatsu Hojo, Yusuke Ijima, Hiroaki Sugiyama, Noboru Miyazaki, Takahito Kawanishi, and Kunio Kashino, ``DNN-based speech synthesis capable of expressing dialogue-act information and its evaluation with respect to illocutionary-act naturalness,'' Transactions of the Japanese Society for Artificial Intelligence, vol. 35, no. 2, A-J81_1-17, Mar. 2020. (https://www.jstage.jst.go.jp/article/tjsai/35/2/35_A-J81/_article/-char/ja)
Kyosuke Nishida, Yusuke Ijima, and Shuhei Tarashima, ``Frontiers of end-to-end deep learning'' (review article), Journal of IEICE, vol. 101, no. 9, pp. 920-925, Aug. 2018.
Nobukatsu Hojo, Yusuke Ijima, and Hideyuki Mizuno, ``DNN-Based Speech Synthesis Using Speaker Codes,'' IEICE Trans. on Information and Systems, vol.E101-D, 2, pp.462-472, Feb. 2018. (https://www.jstage.jst.go.jp/article/transinf/E101.D/2/E101.D_2017EDP7165/_article/-char/ja)
Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno, Sumitaka Sakauchi, ``Statistical model training technique based on speaker clustering approach for HMM-based speech synthesis,'' Speech Communication, Elsevier, Vol. 71, pp. 50-61, July 2015.
Hiroko Muto, Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno, and Sumitaka Sakauchi, ``Analysis and evaluation of factors in pause positions affecting speech naturalness for natural pause insertion into synthesized speech,'' IPSJ Journal, vol. 56, no. 3, pp. 993-1002, Mar. 2015.
Yusuke Ijima and Hideyuki Mizuno, ``Similar speaker selection technique based on distance metric learning using highly correlated acoustic features with perceptual voice quality similarity,'' IEICE Trans. on Information and Systems, vol.E98-D, 1, pp.157-165, Jan. 2015. (https://www.jstage.jst.go.jp/article/transinf/E98.D/1/E98.D_2014EDP7183/_article/-char/ja)
Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, and Osamu Yoshioka, ``Prosodic Variation Enhancement Using Unsupervised Context Labeling for HMM-based Expressive Speech Synthesis,'' Speech Communication, Elsevier, Vol. 57, pp. 144-154, Feb. 2014. (https://www.sciencedirect.com/science/article/abs/pii/S0167639313001350)
Yusuke Ijima, Takashi Nose, Makoto Tachibana, Takao Kobayashi, ``A rapid model adaptation technique for emotional speech recognition with style estimation based on multiple-regression HMM,'' IEICE Trans. on Information and Systems, vol.E93-D, 1, pp.107-115, Jan. 2010. (https://www.jstage.jst.go.jp/article/transinf/E93.D/1/E93.D_1_107/_article/-char/ja)
Hiroki Kanagawa and Yusuke Ijima, ``Knowledge Distillation from Self-Supervised Representation Learning Model with Discrete Speech Units for Any-to-Any Streaming Voice Conversion,'' Proc. INTERSPEECH 2024, pp. 4393--4397, Sept. 2024.
Hiroki Kanagawa, Takafumi Moriya, and Yusuke Ijima, ``Pre-training Neural Transducer-based Streaming Voice Conversion for Faster Convergence and Alignment-free Training,'' Proc. INTERSPEECH 2024, pp. 2088--2092, Sept. 2024.
Kenichi Fujita, Takanori Ashihara, Marc Delcroix, and Yusuke Ijima, ``Lightweight Zero-shot Text-to-Speech with Mixture of Adapters,'' Proc. INTERSPEECH 2024, pp.692--696, Sept. 2024.
Hikaru Yanagida, Yusuke Ijima, and Naohiro Tawara, ``One's own recorded voice aids speech intelligibility in the presence of a competing speech,'' 33rd International Congress of Psychology (ICP 2024), July 2024.
Kenichi Fujita, Hiroshi Sato, Takanori Ashihara, Hiroki Kanagawa, Marc Delcroix, Takafumi Moriya, and Yusuke Ijima, ``Noise-robust zero-shot text-to-speech synthesis conditioned on self-supervised speech-representation model with adapters,'' Proc. IEEE ICASSP 2024, pp.11471--11475, Apr. 2024.
Takanori Ashihara, Marc Delcroix, Takafumi Moriya, Kohei Matsuura, Taichi Asami, and Yusuke Ijima, ``What Do Self-Supervised Speech and Speaker Models Learn? New Findings From a Cross Model Layer-Wise Analysis,'' Proc. IEEE ICASSP 2024, pp.10166--10170, Apr. 2024.
Kazuki Yamauchi, Yusuke Ijima, and Yuki Saito, ``StyleCap: Automatic Speaking-Style Captioning from Speech Based on Speech and Language Self-supervised Learning Models,'' Proc. IEEE ICASSP 2024, pp.11261--11265, Apr. 2024.
Hikaru Yanagida, Yusuke Ijima, and Naohiro Tawara, ``Influence of Personal Traits on Impressions of One's Own Voice,'' Proc. INTERSPEECH 2023, pp. 5212--5216, Aug. 2023.
Mizuki Nagano, Yusuke Ijima, and Sadao Hiroya, ``A stimulus-organism-response model of willingness to buy from advertising speech using voice quality,'' Proc. INTERSPEECH 2023, pp. 5202--5206, Aug. 2023.
Takanori Ashihara, Takafumi Moriya, Kohei Matsuura, Tomohiro Tanaka, Yusuke Ijima, Taichi Asami, Marc Delcroix, and Yukinori Honma, ``SpeechGLUE: How Well Can Self-Supervised Speech Models Capture Linguistic Knowledge?,'' Proc. INTERSPEECH 2023, pp. 2888--2892, Aug. 2023.
Hiroki Kanagawa, Takafumi Moriya, and Yusuke Ijima, ``VC-T: Streaming Voice Conversion Based on Neural Transducer,'' Proc. INTERSPEECH 2023, pp. 2088--2092, Aug. 2023.
Kenichi Fujita, Takanori Ashihara, Hiroki Kanagawa, Takafumi Moriya, and Yusuke Ijima, ``Zero-Shot Text-to-Speech Synthesis Conditioned Using Self-Supervised Speech Representation Model,'' Proc. IEEE ICASSP 2023 Workshop on Self-supervision in Audio, Speech and Beyond, June 2023. (Best Paper Award)
Hiroki Kanagawa and Yusuke Ijima, ``Enhancement of Text-Predicting Style Token with Generative Adversarial Network for Expressive Speech Synthesis,'' Proc. ICASSP 2023, June 2023.
Hiroki Kanagawa and Yusuke Ijima, ``SIMD-Size Aware Weight Regularization for Fast Neural Vocoding on CPU,'' Proc. 2022 IEEE Spoken Language Technology Workshop (SLT 2022), Jan. 2023.
Kenichi Fujita, Yusuke Ijima, and Hiroaki Sugiyama, ``Direct speech-reply generation from text-dialogue context,'' Proc. APSIPA Annual Summit and Conference 2022, Nov. 2022.
Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, Yuki Saito, Yusuke Ijima, Ryo Masumura, and Hiroshi Saruwatari, ``Predicting VQVAE-based Character Acting Style from Quotation-Annotated Text for Audiobook Speech Synthesis,'' Proc. INTERSPEECH 2022, pp. 4551--4555, Sept. 2022.
Hiroki Kanagawa, Yusuke Ijima, and Hiroyuki Toda, ``Joint Modeling of Multi-Sample and Subband Signals for Fast Neural Vocoding on CPU,'' Proc. INTERSPEECH 2022, pp. 1626--1630, Sept. 2022.
Hiroki Kanagawa and Yusuke Ijima, ``Multi-Sample Subband WaveRNN via Multivariate Gaussian,'' Proc. ICASSP 2022, pp. 8427--8431, May 2022.
Naohiro Tawara, Atsunori Ogawa, Yuki Kitagishi, Hosana Kamiyama, and Yusuke Ijima, ``Robust Speech-Age Estimation Using Local Maximum Mean Discrepancy Under Mismatched Recording Conditions,'' Proc. 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 114--121, Dec. 2021.
Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, Naoko Tanji, Yusuke Ijima, Ryo Masumura, and Hiroshi Saruwatari, ``Audiobook Speech Synthesis Conditioned by Cross-Sentence Context-Aware Word Embeddings,'' Proc. 11th ISCA Speech Synthesis Workshop (SSW 11), pp. 211--215, Sept. 2021.
Kenichi Fujita, Atsushi Ando, and Yusuke Ijima, ``Phoneme Duration Modeling Using Speech Rhythm-Based Speaker Embeddings for Multi-Speaker Speech Synthesis,'' Proc. INTERSPEECH 2021, pp. 3141-3145, Sept. 2021.
Naoto Kakegawa, Sunao Hara, Masanobu Abe, and Yusuke Ijima, ``Phonetic and prosodic information estimation from texts for genuine Japanese end-to-end text-to-speech,'' Proc. INTERSPEECH 2021, pp. 3606--3610, Sept. 2021.
Mizuki Nagano, Yusuke Ijima, and Sadao Hiroya, ``Impact of Emotional State on Estimation of Willingness to Buy from Advertising Speech,'' Proc. INTERSPEECH 2021, pp. 2486--2490, Sept. 2021.
Atsushi Ando, Ryo Masumura, Hiroshi Sato, Takafumi Moriya, Takanori Ashihara, Yusuke Ijima, and Tomoki Toda, ``Speech Emotion Recognition Based on Listener Adaptive Models,'' Proc. ICASSP 2021, pp. 6274--6278, June 2021.
Takafumi Moriya, Takanori Ashihara, Tomohiro Tanaka, Tsubasa Ochiai, Hiroshi Sato, Atsushi Ando, Yusuke Ijima, Ryo Masumura, and Yusuke Shinohara, ``Simpleflat: A Simple Whole-Network Pre-Training Approach for RNN Transducer-Based End-to-End Speech Recognition,'' Proc. ICASSP 2021, pp. 5664--5668, June 2021.
Hiroki Kanagawa and Yusuke Ijima, ``Lightweight LPCNet-based Neural Vocoder with Tensor Decomposition,'' Proc. Interspeech 2020, pp. 205-209, Oct. 2020.
Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, and Hiroshi Saruwatari, ``Investigating Effective Additional Contextual Factors in DNN-based Spontaneous Speech Synthesis,'' Proc. Interspeech 2020, pp. 3201-3205, Oct. 2020.
Nobukatsu Hojo, Yusuke Ijima, Hiroaki Sugiyama, Noboru Miyazaki, Takahito Kawanishi, and Kunio Kashino, ``DNN-based Speech Synthesis considering Dialogue-Act Information and its Evaluation with Respect to Illocutionary Act Naturalness,'' Proc. Speech Prosody 2020, Tokyo, Japan, May 2020.
Takuya Ozuru, Yusuke Ijima, Daisuke Saito and Nobuaki Minematsu, ``Are you professional?: Analysis of prosodic features between a newscaster and amateur speakers through partial substitution by DNN-TTS,'' Proc. Speech Prosody 2020, Tokyo, Japan, May 2020.
Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, and Hiroshi Saruwatari, ``DNN-based Speech Synthesis Using Abundant Tags of Spontaneous Speech Corpus,'' Proc. LREC 2020, pp. 6438-6443, May 2020.
Ryo Masumura, Yusuke Ijima, Satoshi Kobashikawa, Takanobu Oba, and Yushi Aono, ``Can We Simulate Generative Process of Acoustic Modeling Data? Towards Data Restoration for Acoustic Modeling,'' Proc. APSIPA Annual Summit and Conference 2019, pp. 655--661, Lanzhou, China, Nov. 2019.
Taiki Nakamura, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, and Hiroshi Saruwatari, ``V2S attack: building DNN-based voice conversion from automatic speaker verification,'' Proc. 10th ISCA Speech Synthesis Workshop, pp. 161--165, Vienna, Austria, Sept. 2019.
Hiroki Kanagawa and Yusuke Ijima, ``Multi-Speaker Modeling for DNN-based Speech Synthesis Incorporating Generative Adversarial Networks,'' Proc. 10th ISCA Speech Synthesis Workshop, pp. 40--44, Vienna, Austria, Sept. 2019.
Ryo Masumura, Hiroshi Sato, Tomohiro Tanaka, Takafumi Moriya, Yusuke Ijima, and Takanobu Oba, ``End-to-End Automatic Speech Recognition with a Reconstruction Criterion Using Speech-to-Text and Text-to-Speech Encoder-Decoders,'' Proc. Interspeech 2019, pp. 1606-1610, Graz, Austria, Sept. 2019.
Ryo Masumura, Yusuke Ijima, Taichi Asami, Hirokazu Masataki, and Ryuichiro Higashinaka, ``Neural ConfNet Classification: Fully Neural Network based Spoken Utterance Classification Using Word Confusion Networks,'' Proc. ICASSP 2018, Calgary, Canada, Apr. 2018.
Atsushi Ando, Satoshi Kobashikawa, Hosana Kamiyama, Ryo Masumura, Yusuke Ijima, and Yushi Aono, ``Soft-Target Training with Ambiguous Emotional Utterances for DNN-based Speech Emotion Classification,'' Proc. ICASSP 2018, Calgary, Canada, Apr. 2018.
Yuki Saito, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi, ``Non-parallel Voice Conversion Using Variational Autoencoders Conditioned by Phonetic Posteriorgrams and d-vectors,'' Proc. ICASSP 2018, Calgary, Canada, Apr. 2018. (2018 C&C Young Researcher Paper Award)
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``An investigation to transplant emotional expressions in DNN-based TTS synthesis,'' APSIPA Annual Summit and Conference 2017, TP-P4.9, 6 pages, Kuala Lumpur, Malaysia, Dec. 2017.
Ryuichiro Higashinaka, Kazuki Sakai, Hiroaki Sugiyama, Hiromi Narimatsu, Tsunehiro Arimoto, Takaaki Fukutomi, Kiyoaki Matsui, Yusuke Ijima, Hiroaki Ito, Shoko Araki, Yuichiro Yoshikawa, Hiroshi Ishiguro, and Yoshihiro Matsuo, ``Argumentative dialogue system based on argumentation structures,'' SEMDIAL 2017, p.154, Aug. 2017.
Yusuke Ijima, Nobukatsu Hojo, Ryo Masumura, and Taichi Asami, ``Prosody Aware Word-level Encoder Based on BLSTM-RNNs for DNN-based Speech Synthesis,'' Proc. Interspeech 2017, pp.764-768, Stockholm, Sweden, Aug. 2017.
Nobukatsu Hojo, Yasuhito Ohsugi, Yusuke Ijima, and Hirokazu Kameoka, ``Generative Model of Voice F0 Contours for Statistical Phrase/Accent Command Estimation,'' Proc. Interspeech 2017, pp.1074-1078, Stockholm, Sweden, Aug. 2017.
Takuhiro Kaneko, Hirokazu Kameoka, Nobukatsu Hojo, Yusuke Ijima, Kaoru Hiramatsu, and Kunio Kashino, ``Generative adversarial network-based postfilter for statistical parametric speech synthesis,'' Proc. ICASSP 2017, pp. 4910-4914, New Orleans, U.S.A., March 2017.
Yusuke Ijima, Taichi Asami, and Hideyuki Mizuno, ``Objective Evaluation Using Association Between Dimensions Within Spectral Features for Statistical Parametric Speech Synthesis,'' Proc. Interspeech 2016, pp. 337-341, San Francisco, U.S.A., Sept. 2016.
Nobukatsu Hojo, Yusuke Ijima, and Hideyuki Mizuno, ``An Investigation of DNN-Based Speech Synthesis Using Speaker Codes,'' Proc. Interspeech 2016, pp. 2278-2282, San Francisco, U.S.A., Sept. 2016.
Tadashi Inai, Sunao Hara, Masanobu Abe, Yusuke Ijima, Noboru Miyazaki, and Hideyuki Mizuno, ``Sub-band text-to-speech combining sample-based spectrum with statistically generated spectrum,'' Proc. Interspeech 2015, Dresden, Germany, Sept. 2015.
Hiroko Muto, Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno, ``Pause insertion prediction using evaluation model of perceptual pause insertion naturalness,'' Proc. Speech Prosody 2014, Dublin, Ireland, May 2014.
Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno, ``Statistical model training technique for speech synthesis based on speaker class,'' Proc. SSW8, Barcelona, Spain, Aug. 2013.
Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka, ``HMM-based expressive speech synthesis based on phrase-level F0 context labeling,'' Proc. ICASSP 2013, pp.7859-7863, Vancouver, Canada, May 2013.
Yusuke Ijima, Mitsuaki Isogai, Hideyuki Mizuno, ``Similar speaker selection technique based on distance metric learning with perceptual voice quality similarity,'' Proc. INTERSPEECH 2012, Portland, U.S.A., Sept. 2012.
Yusuke Ijima, Mitsuaki Isogai, Hideyuki Mizuno, ``Correlation analysis of acoustic features with perceptual voice quality similarity for similar speaker selection,'' Proc. INTERSPEECH 2011, pp.2237-2240, Florence, Italy, Aug. 2011.
Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka, ``HMM-based emphatic speech synthesis using unsupervised context labeling,'' Proc. INTERSPEECH 2011, pp.1849-1852, Florence, Italy, Aug. 2011.
Yusuke Ijima, Takeshi Matsubara, Takashi Nose, Takao Kobayashi, ``Speaking style adaptation for spontaneous speech recognition using multiple-regression HMM,'' Proc. INTERSPEECH 2009, pp.552-555, Brighton, U.K., Sept. 2009.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi, ``Emotional speech recognition based on style estimation and adaptation with multiple-regression HMM,'' Proc. ICASSP 2009, Taipei, Taiwan, April 2009.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``An on-line adaptation technique for emotional speech recognition using style estimation with multiple-regression HMM,'' Proc. INTERSPEECH 2008, pp.1297-1300, Brisbane, Australia, Sept. 2008.
Naoto Kakegawa, Sunao Hara, Masanobu Abe, and Yusuke Ijima, ``Transformer-based estimation of phonetic and prosodic symbol sequences from Japanese text,'' Proc. ASJ Autumn Meeting, 3-2-17, pp. 829-832, Sept. 2020.
Takuya Ozuru, Yusuke Ijima, Daisuke Saito, and Nobuaki Minematsu, ``[Poster] Collection and analysis of large-scale subjective evaluation data toward quantitative assessment of oral reading skill,'' IEICE Technical Report, SP2019-84, pp. 95-100, Mar. 2020.
Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, and Hiroshi Saruwatari, ``[Poster] Effects of additional contexts in DNN-based spontaneous speech synthesis,'' IEICE Technical Report, SP2019-61, pp. 65-70, Mar. 2020.
Taiki Nakamura, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, and Hiroshi Saruwatari, ``Speaker V2S attack: voice conversion built from automatic speaker verification and evaluation of its voice-spoofing capability,'' Usable Security Workshop 2019, Oct. 2019.
Takuya Ozuru, Yusuke Ijima, Daisuke Saito, and Nobuaki Minematsu, ``Analysis of fundamental frequency between a newscaster and amateur speakers using DNN-based speech synthesis,'' Proc. ASJ Autumn Meeting, 1-P-31, Sept. 2019.
Takuya Ozuru, Yusuke Ijima, Daisuke Saito, and Nobuaki Minematsu, ``[Poster] Analysis of prosodic features between a newscaster and amateur speakers using DNN-based speech synthesis,'' IEICE Technical Report, SP2019-11, pp. 13-18, Aug. 2019.
Taiki Nakamura, Yuki Saito, Kyosuke Nishida, Yusuke Ijima, and Shinnosuke Takamichi, ``Evaluation of training data amount and d-vector dimensionality in non-parallel many-to-many VAE-based voice conversion using phonetic posteriorgrams and d-vectors,'' Proc. ASJ Spring Meeting, 2-P-30, Mar. 2019.
Katsuki Inoue, Sunao Hara, Masanobu Abe, and Yusuke Ijima, ``A study of emotion transplantation using a small amount of target emotional speech in DNN-based speech synthesis,'' Proc. ASJ Spring Meeting, 1-P-20, Mar. 2019.
Hiroki Kanagawa and Yusuke Ijima, ``A study of multi-speaker modeling with adversarial training for DNN-based speech synthesis,'' IEICE Technical Report, SP2018-32, pp. 1-6, Oct. 2018.
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``Evaluation of emotion transplantation methods in DNN-based speech synthesis,'' Proc. ASJ Autumn Meeting, 1-4-9, Sept. 2018.
Ryo Masumura, Yusuke Ijima, Satoshi Kobashikawa, and Yushi Aono, ``A study of generative models of acoustic-model training data using neural language models and neural speech synthesis,'' Proc. ASJ Autumn Meeting, 1-2-1, Sept. 2018.
Takanobu Oba, Takashi Yoshikawa, Takaaki Fukutomi, Kiyoaki Matsui, and Yusuke Ijima, ``DOCOMO AI Agent Open Partner Initiative: Project SEBASTIEN,'' IEICE Technical Report, SP2018-17, pp. 7-8, July 2018.
Ryo Masumura, Yusuke Ijima, Taichi Asami, Hirokazu Masataki, and Ryuichiro Higashinaka, ``Continuous representations of confusion networks for neural utterance-intent estimation robust to speech recognition errors,'' Annual Conference of JSAI, 3G2-04, 2018.
Yuki Saito, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi, ``Non-parallel many-to-many voice conversion with a variational autoencoder using phonetic posteriorgrams and d-vectors,'' IEICE Technical Report, SP2017-88, pp. 21-26, Mar. 2018. (IEICE SP Research Encouragement Award)
Hafiyan Prafianto, Yusuke Ijima, Takashi Nose, and Akinori Ito, ``Use of VTLP and speaker codes in DNN-based voice conversion from arbitrary speakers,'' Proc. ASJ Spring Meeting, 2-Q-34, Mar. 2018.
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``A study of duration models for emotion transplantation in DNN-based speech synthesis,'' Proc. ASJ Spring Meeting, 1-Q-30, Mar. 2018.
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``A study of model architectures for handling speaker and emotion information in DNN-based speech synthesis,'' Proc. ASJ Autumn Meeting, 1-R-44, Sept. 2017.
Yusuke Ijima, Nobukatsu Hojo, Ryo Masumura, and Taichi Asami, ``Evaluation of a prosody-aware word-level encoder for DNN-based speech synthesis,'' Proc. ASJ Autumn Meeting, 1-R-43, Sept. 2017. (43rd Awaya Prize Young Researcher Award, Acoustical Society of Japan)
Ryo Masumura, Yusuke Ijima, Taichi Asami, Hirokazu Masataki, and Ryuichiro Higashinaka, ``A study of deep utterance-intent estimation using confusion networks,'' Proc. ASJ Autumn Meeting, 1-10-11, Sept. 2017.
Nobukatsu Hojo and Yusuke Ijima, ``Minority-class data augmentation for multi-speaker DNN-based speech synthesis,'' Proc. ASJ Autumn Meeting, 1-8-6, Sept. 2017.
Katsuki Inoue, Sunao Hara, Masanobu Abe, Nobukatsu Hojo, and Yusuke Ijima, ``A study of model architectures for emotion transplantation in DNN-based speech synthesis,'' IEICE Technical Report, SP2017-5, pp. 23-28, June 2017.
Nobukatsu Hojo, Yasuhito Ohsugi, Yusuke Ijima, and Hirokazu Kameoka, ``DNN-SPACE: a probabilistic model of the speech F0 contour generation process using text information,'' Proc. ASJ Spring Meeting, 1-6-3, Mar. 2017.
Yusuke Ijima, Nobukatsu Hojo, and Taichi Asami, ``A study of prosody-aware word embeddings trained on large-scale speech data for DNN-based speech synthesis,'' Proc. ASJ Spring Meeting, 2-6-5, Mar. 2017.
Nobukatsu Hojo and Yusuke Ijima, ``A study of speaker adaptation methods for DNN-based speech synthesis using speaker codes,'' IEICE Technical Report, SP2016-103, pp. 147-152, Mar. 2017.
Yusuke Ijima, Nobukatsu Hojo, Ryo Masumura, and Taichi Asami, ``A study of prosodic word embeddings for DNN-based speech synthesis,'' IEICE Technical Report, SP2016-104, pp. 153-158, Mar. 2017.
Takuhiro Kaneko, Nobukatsu Hojo, Yusuke Ijima, Kaoru Hiramatsu, and Kunio Kashino, ``Post-filtering based on adversarial training for statistical parametric speech synthesis,'' IEICE Technical Report, vol. 116, no. 378, pp. 89-94, Dec. 2016. (IEICE SP Research Encouragement Award)
Nobukatsu Hojo, Yusuke Ijima, and Hideyuki Mizuno, ``Use of speaker codes for multi-speaker modeling in DNN-based speech synthesis,'' IEICE Technical Report, vol. 116, no. 165, pp. 13-18, July 2016.
Nobukatsu Hojo, Yusuke Ijima, and Hiroaki Sugiyama, ``A study of speech synthesis capable of expressing dialogue-act information,'' Annual Conference of JSAI, 2O4-OS-23a-4, June 2016.
Yusuke Ijima, Taichi Asami, and Hideyuki Mizuno, ``Performance evaluation of an objective evaluation method for synthesized speech using inter-dimensional associations of spectral features,'' Proc. ASJ Spring Meeting, 1-R-27, Mar. 2016.
Nobukatsu Hojo, Yusuke Ijima, and Yushi Aono, ``Evaluation of DNN-based speech synthesis using speaker codes,'' Proc. ASJ Spring Meeting, 1-2-3, Mar. 2016.
Yusuke Ijima, Taichi Asami, and Hideyuki Mizuno, ``Objective evaluation of synthesized speech using inter-dimensional associations of spectral features,'' IEICE Technical Report, vol. 115, no. 392, pp. 27-32, Jan. 2016. (IEICE SP Research Encouragement Award)
Tadashi Inai, Sunao Hara, Masanobu Abe, Yusuke Ijima, Noboru Miyazaki, and Hideyuki Mizuno, ``A study of quality improvement for HMM-based synthesized speech by introducing unit spectra and HMM-generated spectra in the high band,'' Proc. ASJ Spring Meeting, 2-Q-36, pp. 383-384, Mar. 2015.
Nobukatsu Hojo, Yusuke Ijima, and Noboru Miyazaki, ``A study of DNN-based speech synthesis using speaker codes,'' Proc. ASJ Autumn Meeting, 2-1-11, Sept. 2015.
Yusuke Ijima, Noboru Miyazaki, and Hideyuki Mizuno, ``Evaluation of a model training method using speaker-class contexts,'' Proc. ASJ Spring Meeting, 1-R5-12, Mar. 2014.
Takuma Inoue, Sunao Hara, Masanobu Abe, Yusuke Ijima, and Hideyuki Mizuno, ``Evaluation of an HMM-based speech synthesis method using the high band of speech waveforms,'' Proc. ASJ Spring Meeting, 3-6-13, Mar. 2014.
Yusuke Ijima, Noboru Miyazaki, and Hideyuki Mizuno, ``A study of a model training method using speaker-class contexts,'' Proc. ASJ Autumn Meeting, 1-P-17a, Sept. 2013.
Takuma Inoue, Sunao Hara, Masanobu Abe, Yusuke Ijima, and Hideyuki Mizuno, ``Quality improvement of an HMM-based speech synthesis method using the high band of speech waveforms,'' Proc. ASJ Autumn Meeting, 1-P-21a, Sept. 2013.
Hiroko Muto, Yusuke Ijima, Noboru Miyazaki, and Hideyuki Mizuno, ``Analysis of the relationship between pause position/length and ease of understanding,'' Proc. ASJ Autumn Meeting, 1-7-5, Sept. 2013.
Yusuke Ijima and Hideyuki Mizuno, ``A study of improving synthesized-speech quality according to database size in statistical speech synthesis,'' Proc. ASJ Spring Meeting, 3-P-10b, pp. 477-478, Mar. 2013.
Yusuke Ijima and Hideyuki Mizuno, ``A study of a speech synthesis method based on similar speaker selection,'' Proc. ASJ Spring Meeting, 3-P-8d, pp. 475-476, Mar. 2013.
Takuma Inoue, Sunao Hara, Masanobu Abe, Yusuke Ijima, and Hideyuki Mizuno, ``Evaluation of a method combining HMM-based and waveform-based speech synthesis,'' Proc. ASJ Spring Meeting, 1-7-15, pp. 297-298, Mar. 2013.
Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, and Osamu Yoshioka, ``A study of multi-class local prosodic contexts for diverse prosody generation,'' IEICE Technical Report, vol. 112, no. 422, SP2012-112, pp. 85-90, Jan. 2013.
Hosana Kamiyama, Yusuke Ijima, Mitsuaki Isogai, and Hideyuki Mizuno, ``Analysis of the relationship between acoustic features and intelligibility of noise-superimposed speech,'' IEICE Technical Report, vol. 112, no. 81, SP2012-46, pp. 69-74, June 2012.
Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, and Osamu Yoshioka, ``A study of automatic local prosodic context labeling for emphatic speech synthesis,'' IEICE Technical Report, vol. 112, no. 81, SP2012-33, pp. 1-6, June 2012.
Yusuke Ijima, Mitsuaki Isogai, and Hideyuki Mizuno, ``A study of similar speaker selection based on distance metric learning,'' Proc. ASJ Spring Meeting, 2-11-2, pp. 337-338, Mar. 2012.
Hosana Kamiyama, Yusuke Ijima, Mitsuaki Isogai, and Hideyuki Mizuno, ``Correlation analysis between voice clarity in noise and acoustic features,'' Proc. ASJ Spring Meeting, 1-10-5, pp. 513-514, Mar. 2012.
Yusuke Ijima, Mitsuaki Isogai, and Hideyuki Mizuno, ``Correlation analysis between perceived voice-quality similarity and acoustic features,'' Proc. ASJ Autumn Meeting, 3-Q-13, pp. 383-384, Sept. 2011.
Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, and Osamu Yoshioka, ``A study of automatic emphasis context labeling for diverse speech synthesis,'' Proc. ASJ Autumn Meeting, 3-8-4, pp. 335-336, Sept. 2011.
Masato Sekine, Katsuhiko Ogawa, Narichika Nomoto, Yusuke Ijima, and Osamu Yoshioka, ``A study of synthesized-voice design for natural dialogue,'' Interaction 2011, pp. 355-358, Mar. 2011. (Interactive Presentation Award)
Yu Maeno, Takashi Nose, Takao Kobayashi, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, and Osamu Yoshioka, ``A study of prosodic contexts for HMM-based speech synthesis with diverse speaking styles,'' Proc. ASJ Spring Meeting, 1-Q-28, pp. 385-386, Mar. 2011.
Yusuke Ijima, Mitsuaki Isogai, and Hideyuki Mizuno, ``Analysis of acoustic features affecting voice-quality similarity judgments,'' Proc. ASJ Spring Meeting, 1-7-8, pp. 273-274, Mar. 2010.
Takashi Nose, Takeshi Matsubara, Yusuke Ijima, and Takao Kobayashi, ``Speaking-style classification of spontaneous speech based on multiple-regression HMM,'' IEICE Technical Report, vol. 109, no. 139, SP2009-46, pp. 31-36, July 2009.
Takeshi Matsubara, Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``A study of speaking-style classification of spontaneous speech using style estimation,'' Proc. ASJ Spring Meeting, 1-P-28, Mar. 2009.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``Evaluation of acoustic-model training methods for speech recognition using style estimation,'' Proc. ASJ Spring Meeting, 1-P-27, Mar. 2009.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``An acoustic-model training method for speech recognition using style estimation based on multiple-regression HMM,'' IEICE Technical Report, vol. 108, no. 338, SP2008-85, pp. 37-42, Dec. 2008.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``Evaluation of an online acoustic-model adaptation technique based on style estimation,'' Proc. ASJ Autumn Meeting, 2-P-10, pp. 131-132, Sept. 2008.
Yusuke Ijima, Makoto Tachibana, Takashi Nose, and Takao Kobayashi, ``An online acoustic-model adaptation technique based on style estimation,'' IEICE Technical Report, vol. 108, no. 142, SP2008-48, pp. 31-36, July 2008.
Yusuke Ijima, Naomitsu Ikeda, Satoshi Sakata, Yuichi Ueda, and Akira Watanabe, ``Effect of centroid correction of formant trajectories in isolated-word speech recognition,'' Proc. ASJ Spring Meeting, 1-P-3, pp. 129-130, Mar. 2007.
Yusuke Ijima, Naomitsu Ikeda, Satoshi Sakata, Yuichi Ueda, and Akira Watanabe, ``Construction of a speech recognition system with speaker normalization,'' Proc. Joint Conference of Electrical and Electronics Engineers in Kyushu, 10-2P-07, p. 417, Sept. 2005.
Yusuke Ijima, ``Speech processing research and its commercialization at NTT Media Intelligence Laboratories,'' Acoustical Engineering Research Meeting, Research Institute of Electrical Communication, Tohoku University, June 2017.