Mike Schuster

Mike Schuster studied electrical engineering at the University of Duisburg in Germany, graduating in 1993; he then studied Japanese in Kyoto, studied fiber optics at Tokyo University, and received his PhD from the Nara Institute of Science and Technology. With this background in machine learning and speech, he worked at the Advanced Telecommunications Research Laboratories in Kyoto, Nuance in the US, and NTT in Japan, concentrating on machine learning and speech recognition. He joined Google's speech group in 2006 and has taken part in many speech- and language-related projects; he was one of the developers of the original Japanese and Korean speech recognition models. He now works in Google Brain, which builds large-scale neural network and machine learning infrastructure. In 2016 he contributed to the development of Google Neural Machine Translation, which reduced translation errors by up to 60% compared to the previous translation system.

Mike Schuster

Dr. Mike Schuster graduated in Electrical Engineering from the Gerhard-Mercator University in Duisburg, Germany, in 1993. After receiving a scholarship, he spent a year in Japan studying Japanese in Kyoto and fiber optics in the Kikuchi laboratory at Tokyo University. After earning his PhD at the Nara Institute of Science and Technology, his career in machine learning and speech took him to the Advanced Telecommunications Research Laboratories in Kyoto, Nuance in the US, and NTT in Japan, where he worked on general machine learning and on speech recognition research and development. Dr. Schuster joined the Google speech group at the beginning of 2006, seeing speech products develop from scratch, through toy demos, to serving millions of users in many languages over the next eight years; he was the main developer of the original Japanese and Korean speech recognition models. He is now part of the Google Brain group, which builds large-scale neural network and machine learning infrastructure for Google, where he has worked on infrastructure with the TensorFlow toolkit as well as on research, mostly on speech and translation with various types of recurrent neural networks. In 2016 he led the development of the new Google Neural Machine Translation system, which reduced translation errors by more than 60% compared to the previous system.


The move to Neural Machine Translation at Google

Machine learning, and in particular neural networks, has made great advances in the last few years in products used by millions of people, most notably in speech recognition, image recognition, and most recently in neural machine translation. Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Google has recently moved to a neural machine translation system, reducing translation errors by more than 60% compared to Google's previous system. The new system launched in late 2016 for selected languages for all Google Translate users and has significantly improved translation quality worldwide, especially for Asian languages. This talk explains how the new system was developed and which problems had to be overcome to launch it at Google scale. Over the past few months, Google has launched many more languages and made significant improvements to many parts of the production system, which will also be covered in the talk.
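As an illustration of the end-to-end idea only (not the production GNMT system; all names and sizes below are hypothetical choices for this sketch), the snippet builds a minimal encoder-decoder recurrent model in TensorFlow's Keras API: one LSTM encodes the source sentence into a state, and a second LSTM, initialized with that state, predicts the target sentence token by token.

    import tensorflow as tf

    # Hypothetical vocabulary and layer sizes, chosen only for illustration.
    src_vocab, tgt_vocab, emb_dim, hidden = 8000, 8000, 256, 512

    # Encoder: embed the source token ids and run an LSTM, keeping its final state.
    enc_in = tf.keras.Input(shape=(None,), dtype="int32", name="source_ids")
    enc_emb = tf.keras.layers.Embedding(src_vocab, emb_dim)(enc_in)
    _, state_h, state_c = tf.keras.layers.LSTM(hidden, return_state=True)(enc_emb)

    # Decoder: embed the target token ids (shifted right for teacher forcing) and
    # run an LSTM initialized with the encoder state, predicting the next token.
    dec_in = tf.keras.Input(shape=(None,), dtype="int32", name="target_ids")
    dec_emb = tf.keras.layers.Embedding(tgt_vocab, emb_dim)(dec_in)
    dec_seq = tf.keras.layers.LSTM(hidden, return_sequences=True)(
        dec_emb, initial_state=[state_h, state_c])
    logits = tf.keras.layers.Dense(tgt_vocab)(dec_seq)

    # Train end-to-end: the whole pipeline is optimized against the translation loss.
    model = tf.keras.Model([enc_in, dec_in], logits)
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    model.summary()

The production system discussed in the talk goes well beyond this sketch: Google's neural machine translation system uses deep stacked LSTM encoders and decoders with attention, residual connections, and sub-word (wordpiece) tokenization, trained on very large parallel corpora.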

