Research Project on Stress-Free, Real-Time, and Full-Band Voice Conversion Based on Perceptual Models (MIC SCOPE program, from FY2018)

Abstract

Voice conversion is a technology that enables vocal expression beyond the physical constraints of the body. This research project pursues the following two goals.

Goal 1: Real-time, full-band voice conversion

We aim at voice conversion that runs fast enough not to disrupt communication and achieves high audio quality covering a wide range of the human audible frequency band.

Goal 2: Free control of converted voices

Looking toward a future in which users control converted voices as freely as their own, we study methods for quantifying the stress that conversion errors cause users, and voice conversion techniques that reduce that stress.

Members

Assistant Professor, The University of Tokyo (Principal investigator)

Lecturer, The University of Tokyo (Co-investigator)

Gaku Kotani, student, The University of Tokyo (Research assistant)

Hitoshi Suda, student, The University of Tokyo (Research assistant)

Student, The University of Tokyo (Research assistant)

Yota Ueda, student, The University of Tokyo (Research assistant)

Our Research

Mask-shaped real-time voice conversion device

Source-filter nonnegative matrix factorization

Context-constrained generative model for F0 contours

Voice conversion based on a time-variant full-covariance matrix

Speaker embedding based on subjective scoring

We learn speaker representations such that the objective distance computed by a DNN matches the perceptual distance between speakers as judged by humans.

Poster: saito19ssw_embedding_poster.pdf
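
The following is a minimal sketch of this idea, not the published implementation: dummy per-speaker features and a hypothetical `perceptual_dist` matrix of human dissimilarity ratings stand in for real data, and an embedding network is trained so that its pairwise distances match the perceptual ones.

```python
# Minimal sketch (not the published implementation): learn speaker embeddings
# whose pairwise distances match human-rated perceptual distances.
import torch
import torch.nn as nn

n_speakers, feat_dim, emb_dim = 8, 40, 16
feats = torch.randn(n_speakers, feat_dim)             # dummy per-speaker acoustic features
perceptual_dist = torch.rand(n_speakers, n_speakers)  # hypothetical human dissimilarity ratings
perceptual_dist = (perceptual_dist + perceptual_dist.T) / 2  # symmetrize the ratings
perceptual_dist.fill_diagonal_(0.0)                   # a speaker is identical to itself

embed = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, emb_dim))
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

for step in range(1000):
    z = embed(feats)                # (n_speakers, emb_dim)
    model_dist = torch.cdist(z, z)  # objective pairwise distances in embedding space
    # Fit the DNN's objective distances to the humans' perceptual distances.
    loss = ((model_dist - perceptual_dist) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```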

Adversarial speech synthesis using auditory-sensitive features

We introduce adversarial training using auditory-sensitive features into speech synthesis, improving the quality of synthetic speech.
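
A minimal sketch of this kind of training, under assumptions not in the paper: the auditory-sensitive frontend is approximated by a crude fixed weighting (`auditory_weight`) applied to amplitude spectra before the discriminator, and both networks are toy MLPs on dummy data.

```python
# Minimal sketch (assumptions, not the paper's implementation): adversarial
# training where the discriminator judges auditory-weighted amplitude spectra.
import torch
import torch.nn as nn

n_bins = 257
# Crude stand-in for an auditory sensitivity curve (e.g., loudness weighting).
auditory_weight = torch.linspace(1.0, 0.2, n_bins)

G = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU(), nn.Linear(256, n_bins))
D = nn.Sequential(nn.Linear(n_bins, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def perceive(spec):
    # The discriminator sees spectra through the auditory weighting.
    return spec * auditory_weight

for step in range(100):
    natural = torch.rand(32, n_bins)  # dummy natural amplitude spectra
    generated = G(natural + 0.3 * torch.randn_like(natural))
    # Discriminator: separate natural from generated in the auditory domain.
    d_loss = (bce(D(perceive(natural)), torch.ones(32, 1))
              + bce(D(perceive(generated.detach())), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator, plus a reconstruction term.
    g_loss = (bce(D(perceive(generated)), torch.ones(32, 1))
              + nn.functional.mse_loss(generated, natural))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```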

Phase modeling based on directional-statistics DNN

We introduce a deep generative model incorporating asymmetric and symmetric periodic distributions to accurately predict phase spectrograms in speech waveform generation.
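
A minimal sketch under a simplifying assumption: a plain von Mises output layer with fixed concentration `kappa` stands in for the paper's asymmetric and symmetric periodic distributions. The network predicts the mean phase direction per frequency bin and is trained by the von Mises negative log-likelihood, which (up to a constant) maximizes cos(phase - mean).

```python
# Minimal sketch (plain von Mises with fixed concentration `kappa`; the paper's
# asymmetric/symmetric periodic distributions are richer than this).
import math
import torch
import torch.nn as nn

n_bins, kappa = 257, 2.0
# Predict a (sin, cos) pair per bin so the mean direction stays periodic-safe.
net = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU(), nn.Linear(256, 2 * n_bins))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    amp = torch.rand(16, n_bins)                        # dummy amplitude spectra (input)
    phase = (torch.rand(16, n_bins) * 2 - 1) * math.pi  # dummy target phases in (-pi, pi]
    out = net(amp).view(16, n_bins, 2)
    mu = torch.atan2(out[..., 0], out[..., 1])          # predicted mean phase direction
    # von Mises negative log-likelihood, dropping the constant normalizer:
    loss = (-kappa * torch.cos(phase - mu)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```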

Publications

Journal Papers

  1. Hitoshi Suda, Gaku Kotani, Daisuke Saito, "INmfCA Algorithm for Training of Nonparallel Voice Conversion Systems Based on Non-negative Matrix Factorization," IEICE Transactions on Information and Systems, Vol.E***-*, No.**, pp.***--***, ***. 2022.

  2. Yuki Saito, Shinnosuke Takamichi, and Hiroshi Saruwatari, "Perceptual-similarity-aware deep speaker representation learning for multi-speaker generative modeling," IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. **, No. **, pp. **--**, ***. 2021.

  3. Shinnosuke Takamichi, Ryosuke Sonobe, Kentaro Mitsui, Yuki Saito, Tomoki Koriyama, Naoko Tanji, Hiroshi Saruwatari, "JSUT and JVS: free Japanese voice corpora for accelerating speech synthesis research," Acoustical Science and Technology, Vol.41, No.5, pp.761--768, Sep. 2020.

  4. Yuki Saito, Shinnosuke Takamichi, and Hiroshi Saruwatari, "Vocoder-free text-to-speech synthesis incorporating generative adversarial networks using low-/multi-frequency STFT amplitude spectra," Computer Speech & Language, Vol.***, No.***, pp.***-***, ***. 2019.

International Conference Papers

  1. Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari, "Low-Latency Incremental Text-to-Speech Synthesis with Distilled Context Prediction Network," Proc. ASRU, pp. xxxx-xxxx, Dec. 2021.

  2. Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Alexandre Verhulst, Masahiko Inami, "Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation," Proc. ACM ICMI, Oct. 2021.

  3. Yota Ueda, Kazuki Fujii, Yuki Saito, Shinnosuke Takamichi, Yukino Baba, Hiroshi Saruwatari, "HumanACGAN: conditional generative adversarial network with human-based auxiliary classifier and its evaluation in phoneme perception," Proc. ICASSP, pp. xxxx--xxxx, Toronto, Canada, Jun. 2021.

  4. Hitoshi Suda, Gaku Kotani, Daisuke Saito, "Nonparallel Training of Exemplar-based Voice Conversion System Using INCA-based Alignment Technique," Proc. Interspeech, pp. ****-****, Shanghai, China, Oct. 2020.

  5. Takaaki Saeki, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Real-time, full-band, online DNN-based voice conversion using a single CPU," Proc. Interspeech, pp. ****-****, Shanghai, China, Oct. 2020. (Show & Tell session)

  6. Takaaki Saeki, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Lifter training and sub-band modeling for computationally efficient and high-quality voice conversion using spectral differentials," Proc. ICASSP, pp. ****-****, Barcelona, Spain, May 2020.

  7. Kazuki Fujii, Yuki Saito, Shinnosuke Takamichi, Yukino Baba, Hiroshi Saruwatari, "HumanGAN: generative adversarial network with human-based discriminator and its evaluation in speech perception modeling," Proc. ICASSP, pp. ****-****, Barcelona, Spain, May 2020. (preprint)

  8. Gaku Kotani, Hitoshi Suda, Daisuke Saito and Nobuaki Minematsu, "Experimental investigation on the efficacy of Affine-DTW in the quality of voice conversion," Proc. APSIPA ASC, Lanzhou, China, Nov. 2019.

  9. Shunsuke Goto, Daisuke Saito, Nobuaki Minematsu, "DNN-based Statistical Parametric Speech Synthesis Incorporating Non-negative Matrix Factorization," Proc. APSIPA ASC, Lanzhou, China, Nov. 2019.

  10. Riku Arakawa, Shinnosuke Takamichi and Hiroshi Saruwatari, "TransVoice: Real-Time Voice Conversion for Augmenting Near-Field Speech Communication," Proc. UIST (poster), New Orleans, U.S.A., Oct. 2019.

  11. Shunsuke Goto, Yuma Shirahata, Gaku Kotani, Hitoshi Suda, Daisuke Saito, Nobuaki Minematsu, "The UTokyo speech synthesis system for Blizzard Challenge 2019," Blizzard Challenge Workshop, Vienna, Austria, Sep. 2019.

  12. Hitoshi Suda, Daisuke Saito, and Nobuaki Minematsu, "Voice Conversion without Explicit Separation of Source and Filter Components Based on Non-negative Matrix Factorization," Proc. The 10th ISCA SSW, Vienna, Austria, Sep. 2019.

  13. Yuma Shirahata, Daisuke Saito, and Nobuaki Minematsu, "Generative Modeling of F0 Contours Leveraged by Phrase Structure and Its Application to Statistical Focus Control," Proc. The 10th ISCA SSW, Vienna, Austria, Sep. 2019.

  14. Gaku Kotani and Daisuke Saito, "Voice conversion based on full-covariance mixture density networks for time-variant linear transformations," Proc. The 10th ISCA SSW, Vienna, Austria, Sep. 2019.

  15. Riku Arakawa, Shinnosuke Takamichi and Hiroshi Saruwatari, "Implementation of DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device," Proc. The 10th ISCA SSW, Vienna, Austria, Sep. 2019.

  16. Yuki Saito, Shinnosuke Takamichi and Hiroshi Saruwatari, "DNN-based Speaker Embedding Using Subjective Inter-speaker Similarity for Multi-speaker Modeling in Speech Synthesis," Proc. The 10th ISCA SSW, Vienna, Austria, Sep. 2019.

Domestic Conference Papers

  1. Takaaki Saeki, Shinnosuke Takamichi, Tomohiko Nakamura, Naoko Tanji, Hiroshi Saruwatari, "Self-supervised neural speech restoration based on source-filter-channel decomposition," IPSJ SIG Technical Report, xxx, Mar. 2022.

  2. Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari, "End-to-end incremental text-to-speech synthesis with a mechanism for generating unobserved sentences using a large language model," IEICE Technical Report, SP-xxxx-xxx, vol. xxx, no. xxx, pp. xx--xx, Mar. 2020.

  3. Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Active learning for DNN-based speaker embedding considering subjective inter-speaker similarity," IPSJ SIG Technical Report, xxx, Mar. 2021.

  4. Masaki Kurata, Shinnosuke Takamichi, Takaaki Saeki, Riku Arakawa, Yuki Saito, Keita Higuchi, Hiroshi Saruwatari, "A method for acquiring character identity through real-time DNN-based voice conversion feedback," IPSJ SIG Technical Report, xxx, Mar. 2021.

  5. Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "DNN-based speaker embedding using graph embedding of subjective inter-speaker similarity," Proc. 2020 Autumn Meeting of the Acoustical Society of Japan, *-**-*, 2020. (slide)

  6. Takaaki Saeki, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Implementation and evaluation of real-time full-band DNN-based voice conversion using sub-band filtering," Proc. 2020 Autumn Meeting of the Acoustical Society of Japan, *-**-*, 2020.

  7. Yuma Shirahata, Daisuke Saito, Nobuaki Minematsu, "Modeling of F0 contours considering phrase information and its application to statistical prosody conversion for focus control," Proc. 2019 Autumn Meeting of the Acoustical Society of Japan, *-**-*, 2019.

  8. Gaku Kotani, Hitoshi Suda, Daisuke Saito, Nobuaki Minematsu, "Experimental evaluation of Affine-DTW toward quality improvement of parallel-data voice conversion," Proc. 2019 Autumn Meeting of the Acoustical Society of Japan, *-**-*, 2019.

  9. Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Experimental evaluation of many-speaker DNN-based speech synthesis using DNN speaker embeddings based on subjective inter-speaker similarity," Proc. 2019 Autumn Meeting of the Acoustical Society of Japan, *-**-*, 2019.

  10. Takaaki Saeki, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, "Filter estimation for reducing the computational cost of DNN-based voice conversion using spectral differentials," Proc. 2019 Autumn Meeting of the Acoustical Society of Japan, *-*-*, 2019.

  11. Shinnosuke Takamichi, Hiroshi Saruwatari, "Phase reconstruction using the mode approximation of a DNN-modeled sine-skewed von Mises distribution," IPSJ SIG Technical Report, 2018-SLP-126, no. 9, pp. 1--6, Feb. 2019.

Talks

  1. Shinnosuke Takamichi, "Looking back on participating in international competitions on speech synthesis and voice conversion," FIT2019 organized session "Champions of Competitions," Sep. 2019. (Invited talk)

  2. Shinnosuke Takamichi, "Research trends in statistical voice conversion," GREE #VRSionUp!6, Jul. 2019. (Invited talk)

  3. Shinnosuke Takamichi, "Group-delay modelling based on deep neural network with sine-skewed generalized cardioid distribution," International Conference on Soft Computing & Machine Learning (SCML), Apr. 2019. (Invited talk)

Media Coverage

  1. "VRSionUp! #6へ行って、「声のVR」をフィジカルとサイエンスの両面からガッツリ学んできました!" VRonWEBMEDIA, Jul. 2019.

Others

  1. Shinnosuke Takamichi, Kentaro Mitsui, Yuki Saito, Tomoki Koriyama, Naoko Tanji, and Hiroshi Saruwatari, "JVS corpus: free Japanese multi-speaker voice corpus," arXiv preprint, 1908.06248, Aug. 2019.

  2. Lecture: "Copyright Law for Speech Researchers," Jul. 2019.

  3. JVS (Japanese Versatile Speech) corpus, Aug. 2019.