Manuscripts (ongoing)
To be updated soon.
International Conferences (refereed)
E. Fouché, J. Komiyama, and K. Böhm. Scaling Multi-Armed Bandit Algorithms. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2019), August 2019.
J. Komiyama, A. Takeda, J. Honda, and H. Shimao. Nonconvex Optimization for Regression with Fairness Constraints. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), June 2018.
J. Komiyama, J. Honda, and A. Takeda. Position-based Multiple-play Multi-armed Bandit Problem with Unknown Position Bias. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), 5005-5015, Long Beach, United States, December 2017.
J. Komiyama, M. Ishihata, H. Arimura, T. Nishibayashi, and S. Minato. Statistical Emerging Pattern Mining with Multiple Testing Correction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2017), 897-906, Research Track, Halifax, Canada, August 2017.
J. Komiyama, J. Honda, and H. Nakagawa. Copeland Dueling Bandit Problem: Regret Lower Bound, Optimal Algorithm, and Computationally Efficient Algorithm. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), 1235-1244, New York City, United States, June 2016 (acceptance ratio: 24.3% (322/1327)).
J. Komiyama, J. Honda, and H. Nakagawa. Regret Lower Bound and Optimal Algorithm in Finite Stochastic Partial Monitoring. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS 2015), Montreal, Canada, December 2015.
J. Komiyama, J. Honda, H. Kashima, and H. Nakagawa. Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem. In Proceedings of the 28th Annual Conference on Learning Theory (COLT 2015), 1141-1154, Paris, France, July 2015 (acceptance ratio: 39% (70/176)).
J. Komiyama, J. Honda, and H. Nakagawa. Optimal Regret Analysis of Thompson Sampling in Stochastic Multi-armed Bandit Problem with Multiple Plays. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), 1152-1161, Lille, France, July 2015 (acceptance ratio: 26% (270/1037)).
J. Komiyama and T. Qin. Time-Decaying Bandits for Non-stationary Systems. In Proceedings of the 10th Conference on Web and Internet Economics (WINE 2014), 460-466, Beijing, China, December 2014 (acceptance ratio: 44% (46/107)).
J. Komiyama, H. Oiwa, and H. Nakagawa. Robust Distributed Training of Linear Classifiers Based on Divergence Minimization Principle. In Proceedings of the 7th European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2014), 1-17, Nancy, France, September 2014 (acceptance ratio: 24% (115/483)).
J. Komiyama, I. Sato, and H. Nakagawa. Multi-armed Bandit Problem with Lock-up Periods. In Proceedings of the 5th Asian Conference on Machine Learning (ACML 2013), 116-132, Canberra, Australia, November 2013 (acceptance ratio: 31% (32/102)).
Journals (refereed)
R. Watanabe, J. Komiyama, A. Nakamura, and M. Kudo. UCB-SC: A Fast Variant of KL-UCB-SC for Budgeted Multi-Armed Bandit Problem. IEICE Transactions, 101-A(3), 662-667, 2018.
R. Watanabe, J. Komiyama, A. Nakamura, and M. Kudo. KL-UCB-Based Policy for Budgeted Multi-Armed Bandits with Stochastic Action Costs. IEICE Transactions, 100-A(11), 2470-2486, 2017.
J. Komiyama, I. Sato, and H. Nakagawa. Multi-armed Bandit Problem with Lock-up Periods. Transactions on Mathematical Modeling and its Applications, Vol. 6, No. 3, 11-22, December 2013.
My publications are also listed on DBLP.