Yang Liu, Masahiro Kaneko, Chenhui Chu. On the Alignment of Large Language Models with Global Human Opinion. arXiv. [arXiv] [Code]
Masahiro Kaneko, Timothy Baldwin. A Little Leak Will Sink a Great Ship: Survey of Transparency for Large Language Models from Start to Finish. The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP). [arXiv]
Masahiro Kaneko, Alham Fikri Aji, Timothy Baldwin. Balanced Multi-Factor In-Context Learning for Multilingual Large Language Models. The 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP). [arXiv]
Rem Hida, Masahiro Kaneko, Naoaki Okazaki. Social Bias Evaluation for Large Language Models Requires Prompt Variations. The 2025 Conference on Empirical Methods in Natural Language Processing (Findings: EMNLP). [arXiv]
Masahiro Kaneko, Youmi Ma, Yuki Wata, Naoaki Okazaki. Sampling-based Pseudo-Likelihood for Membership Inference Attacks. The 63rd Annual Meeting of the Association for Computational Linguistics (Findings: ACL). [arXiv] [Code]
Ayana Niwa, Masahiro Kaneko, Kentaro Inui. Rectifying Belief Space via Unlearning to Harness LLMs’ Reasoning. The 63rd Annual Meeting of the Association for Computational Linguistics (Findings: ACL). [arXiv]
Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin. Eagle 🦅: Ethical Dataset Given from Real Interactions. The 34th International Joint Conference on Artificial Intelligence (IJCAI). [arXiv] [data]
Yuxia Wang, Rui Xing, Jonibek Mansurov, Giovanni Puccetti, Zhuohan Xie, Minh Ngoc Ta, Jiahui Geng, Jinyan Su, Mervat Abassy, Saad El Dine Ahmed, Kareem Elozeiri, Nurkhan Laiyk, Maiya Goloburda, Tarek Mahmoud, Raj Vardhan Tomar, Alexander Aziz, Ryuto Koike, Masahiro Kaneko, Artem Shelmanov, Ekaterina Artemova, Vladislav Mikhailov, Akim Tsvigun, Alham Fikri Aji, Nizar Habash, Iryna Gurevych, Preslav Nakov. Is Human-like Text Liked by Human? Multilingual Human Detection and Preference Against AI. arXiv. [arXiv]
Panatchakorn Anantaprayoon, Masahiro Kaneko, Naoaki Okazaki. Intent-Aware Self-Correction for Mitigating Social Biases in Large Language Models. arXiv. [arXiv]
Ryuto Koike, Masahiro Kaneko, Ayana Niwa, Preslav Nakov, Naoaki Okazaki. ExaGPT: Example-Based Machine-Generated Text Detection for Improving Interpretability. arXiv. [arXiv]
Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin. The Gaps between Fine Tuning and In-context Learning in Bias Evaluation and Debiasing. The 31st International Conference on Computational Linguistics (COLING). [arXiv] [paper]
Yuxia Wang, Artem Shelmanov, Jonibek Mansurov, Akim Tsvigun, Vladislav Mikhailov, Rui Xing, Zhuohan Xie, Jiahui Geng, Giovanni Puccetti, Ekaterina Artemova, Jinyan Su, Minh Ngoc Ta, Mervat Abassy, Kareem Elozeiri, Saad El Dine Ahmed, Maiya Goloburda, Tarek Mahmoud, Raj Vardhan Tomar, Alexander Aziz, Nurkhan Laiyk, Osama Mohammed Afzal, Ryuto Koike, Masahiro Kaneko, Alham Fikri Aji, Nizar Habash, Iryna Gurevych, Preslav Nakov. GenAI Content Detection Task 1: English and Multilingual Machine-generated Text Detection: AI vs. Human. The 1st Workshop on GenAI Content Detection (GenAIC). [Website] [arXiv]
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki. Gender Bias in Meta-Embeddings. Findings of the Association for Computational Linguistics: The 2022 Conference on Empirical Methods in Natural Language Processing (Findings: EMNLP). [arXiv] [paper]
Hiroyuki Deguchi, Kenji Imamura, Masahiro Kaneko, Yuto Nishida, Yusuke Sakai, Justin Vasselli, Huy Hien Vu, Taro Watanabe. NAIST-NICT-TIT WMT22 General MT Task Submission. Proceedings of the Seventh Conference on Machine Translation (WMT). [paper]
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki. Debiasing isn't enough! - On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks. The 29th International Conference on Computational Linguistics (COLING). (Long paper, Acceptance rate: 33.4%) [arXiv] [paper]
Koki Maeda, Masahiro Kaneko, Naoaki Okazaki. IMPARA: Impact-based Metric for GEC using Parallel Data. The 29th International Conference on Computational Linguistics (COLING). (Long paper, Acceptance rate: 33.4%) [paper]
Masahiro Kaneko, Aizhan Imankulova, Danushka Bollegala, Naoaki Okazaki. Gender Bias in Masked Language Models for Multiple Languages. In Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). (Long paper, Acceptance rate: 26%) [arXiv] [paper] [code]
Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki. ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization. In Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop (NAACL SRW). [arXiv]
Tosho Hirasawa, Masahiro Kaneko, Aizhan Imankulova, Mamoru Komachi. Pre-trained Word Embedding and Language Model Improve Multimodal Machine Translation: A Case Study in Multi30K. IEEE Access, 2022. [paper]
Yujin Takahashi, Masahiro Kaneko, Masato Mita, Mamoru Komachi. Proficiency Matters Quality Estimation in Grammatical Error Correction. Proceedings of the 13th Language Resources and Evaluation Conference (LREC). [arXiv]
Masahiro Kaneko, Sho Takase, Ayana Niwa, Naoaki Okazaki. Interpretability for Language Learners Using Example-Based Grammatical Error Correction. In Proceedings of the 60th Annual Conference of the Association for Computational Linguistics (ACL). (Long paper, Acceptance rate: 20.75%) [arXiv] [paper] [code]
Yi Zhou, Masahiro Kaneko, Danushka Bollegala. Sense Embeddings are also Biased -- Evaluating Social Biases in Static and Contextualised Sense Embeddings. In Proceedings of the 60th Annual Conference of the Association for Computational Linguistics (ACL). (Long paper, Acceptance rate: 20.75%) [arXiv] [paper] [code]
Masahiro Kaneko and Danushka Bollegala. Unmasking the Mask -- Evaluating Social Biases in Masked Language Models. Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI). (Acceptance rate: 15%) [arXiv] [paper] [code]
Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki. Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention. arXiv. [arXiv]
Raj Dabre, Aizhan Imankulova, Masahiro Kaneko. Studying The Impact Of Document-level Context On Simultaneous Neural Machine Translation. Proceedings of the 18th Biennial Machine Translation Summit (MT Summit). [paper]
Aomi Koyama, Kengo Hotate, Masahiro Kaneko and Mamoru Komachi. Comparison of Grammatical Error Correction Using Back-Translation Models. 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop (NAACL SRW) (Acceptance rate: 44%) [arXiv] [paper]
Seiichiro Kondo, Kengo Hotate, Tosho Hirasawa, Masahiro Kaneko and Mamoru Komachi. Sentence Concatenation Approach to Data Augmentation for Neural Machine Translation. 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop (NAACL SRW) (Acceptance rate: 44%) [paper]
Masahiro Kaneko and Danushka Bollegala. Debiasing Pre-trained Contextualised Embeddings. The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL). (Long paper, Acceptance rate: 27%) [arXiv] [paper] [code] [poster]
Masahiro Kaneko and Danushka Bollegala. Dictionary-based Debiasing of Pre-trained Word Embeddings. The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL). (Long paper, Acceptance rate: 27%) [arXiv] [paper] [code] [poster]
Raj Dabre, Aizhan Imankulova, Masahiro Kaneko and Abhisek Chakrabarty. Simultaneous Multi-Pivot Neural Machine Translation. arXiv. [arXiv]
Masahiro Kaneko and Danushka Bollegala. Autoencoding Improves Pre-trained Word Embeddings. The 28th International Conference on Computational Linguistics (COLING). (Short paper, Acceptance rate: 26.2%) [arXiv] [paper] [bib] [slide]
Ikumi Yamashita, Satoru Katsumata, Masahiro Kaneko, Aizhan Imankulova and Mamoru Komachi. Cross-lingual Transfer Learning for Grammatical Error Correction. The 28th International Conference on Computational Linguistics (COLING). (Long paper, Acceptance rate: 35.3%) [paper]
Kengo Hotate, Masahiro Kaneko and Mamoru Komachi. Generating Diverse Corrections with Local Beam Search for Grammatical Error Correction. The 28th International Conference on Computational Linguistics (COLING). (Short paper, Acceptance rate: 26.2%) [paper]
Ryoma Yoshimura, Masahiro Kaneko, Tomoyuki Kajiwara and Mamoru Komachi. SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction. The 28th International Conference on Computational Linguistics (COLING). (Short paper, Acceptance rate: 26.2%) [paper]
Aizhan Imankulova, Masahiro Kaneko, Tosho Hirasawa and Mamoru Komachi. Towards Multimodal Simultaneous Neural Machine Translation. The Fifth Conference in Machine Translation (WMT). (Acceptance rate: 32.7%) [arXiv] [paper] [code]
Masato Mita, Shun Kiyono, Masahiro Kaneko, Jun Suzuki and Kentaro Inui. A Self-Refinement Strategy for Noise Reduction in Grammatical Error Correction. Findings of the Association for Computational Linguistics: The 2020 Conference on Empirical Methods in Natural Language Processing (Findings: EMNLP). [paper]
Zizheng Zhang, Tosho Hirasawa, Wei Houjing, Masahiro Kaneko and Mamoru Komachi. Translation of New Named Entities from English to Chinese. In Proceedings of the 7th Workshop on Asian Translation (WAT). [paper]
Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki and Kentaro Inui. Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction. In Proceedings of the 58th Annual Conference of the Association for Computational Linguistics (ACL). (Short paper, Acceptance rate: 17.6%) [arXiv] [paper] [bib] [slide] [code]
Hiroto Tamura, Tosho Hirasawa, Masahiro Kaneko and Mamoru Komachi. TMU Japanese-English Multimodal Machine Translation System for WAT 2020. In Proceedings of the 7th Workshop on Asian Translation (WAT): Japanese-English Multimodal Machine Translation track.
Masahiro Kaneko, Aizhan Imankulova, Tosho Hirasawa and Mamoru Komachi. English-to-Japanese Diverse Translation by Combining Forward and Backward Outputs. The 4th Workshop on Neural Generation and Translation (WNGT): Simultaneous Translation And Paraphrase for Language Education (STAPLE) English-to-Japanese track [paper] [bib]
Masahiro Kaneko and Danushka Bollegala. Gender-preserving Debiasing for Pre-trained Word Embeddings. In Proceedings of the 57th Annual Conference of the Association for Computational Linguistics (ACL). (Long paper, Acceptance rate: 25.7%) [arXiv] [paper] [slide] [bib] [code]
Kengo Hotate, Masahiro Kaneko, Satoru Katsumata and Mamoru Komachi. Controlling Grammatical Error Correction Using Word Edit Rate. In Proceedings of the 57th Annual Conference of the Association for Computational Linguistics: Student Research Workshop (ACL SRW). [paper] [bib]
Mio Arai, Masahiro Kaneko and Mamoru Komachi. Grammatical-Error-Aware Incorrect Example Retrieval System for Learners of Japanese as a Second Language. In Proceedings of the 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA). [paper]
Masahiro Kaneko and Mamoru Komachi. Multi-Head Multi-Layer Attention to Deep Language Representations for Grammatical Error Detection. In 20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing). [arXiv] [poster]
Masato Mita, Tomoya Mizumoto, Masahiro Kaneko, Ryo Nagata and Kentaro Inui. Cross-Corpora Evaluation and Analysis of Grammatical Error Correction Models — Is Single-Corpus Evaluation Enough? 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). (Short paper, Acceptance rate: 21.3%) [paper]
Masahiro Kaneko, Mamoru Komachi. Multi-Head Multi-Layer Attention to Deep Language Representations for Grammatical Error Detection. Computación y Sistemas. Vol. 23, No. 3, pp. 883-891. September, 2019. [paper]
Aizhan Imankulova, Masahiro Kaneko and Mamoru Komachi. Japanese-Russian TMU Neural Machine Translation System using Multilingual Model for WAT 2019. The 6th Workshop on Asian Translation (WAT): News Commentary task.
Masahiro Kaneko, Kengo Hotate, Satoru Katsumata and Mamoru Komachi. TMU Transformer System Using BERT for Re-ranking at BEA 2019 Grammatical Error Correction on Restricted Track. In Proceedings of the 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA): Shared Task on Grammatical Error Correction. [paper] [poster] [bib]