Proceedings (Refereed)
[10] Shion Takeno, Yoshito Okura, Yu Inatsu, Tatsuya Aoyama, Tomonari Tanaka, Satoshi Akahane, Hiroyuki Hanada, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi.
Distributionally Robust Active Learning for Gaussian Process Regression.
Proceedings of the 42nd International Conference on Machine Learning (ICML2025, acceptance rate 26.9% (=3260/12107)), PMLR, vol. XX, pp. XX, 2025, to appear.
[9] Shogo Iwazaki, Tomohiko Tanabe, Mitsuru Irie, Shion Takeno, Kota Matsui, Yu Inatsu.
No-Regret Bayesian Optimization with Stochastic Observation Failures.
The 28th International Conference on Artificial Intelligence and Statistics (AISTATS2025, acceptance rate 31.3% (=583/1861)), PMLR 258, 415-423, 2025.
[8] S. Takeno, Y. Inatsu, M. Karasuyama, I. Takeuchi.
Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds.
Proceedings of the 41st International Conference on Machine Learning (ICML2024, acceptance rate 27.5% (=2609/9473)), PMLR 235, 47510-47534, 2024.
[7] Y. Inatsu, S. Takeno, H. Hanada, K. Iwata, I. Takeuchi.
Bounding Box-based Multi-objective Bayesian Optimization of Risk Measures under Input Uncertainty.
The 27th International Conference on Artificial Intelligence and Statistics (AISTATS2024, acceptance rate 27.6% (=546/1980)), PMLR 238, 4564-4572, 2024.
[6] S. Iwazaki, T. Tanabe, M. Irie, S. Takeno, Y. Inatsu.
Risk Seeking Bayesian Optimization under Uncertainty for Obtaining Extremum.
The 27th International Conference on Artificial Intelligence and Statistics (AISTATS2024, acceptance rate 27.6% (=546/1980)), PMLR 238, 1252-1260, 2024.
[5] S. Takeno, Y. Inatsu, M. Karasuyama.
Randomized Gaussian Process Upper Confidence Bound with Tight Bayesian Regret Bounds.
Fortieth International Conference on Machine Learning (ICML2023, acceptance rate 27.9% (=1827/6538)), PMLR 202, 33490-33515, 2023.
[4] Y. Inatsu, S. Takeno, M. Karasuyama, I. Takeuchi.
Bayesian Optimization for Distributionally Robust Chance-constrained Problem.
Thirty-ninth International Conference on Machine Learning (ICML2022, acceptance rate 21.9% (=1235/5630)), PMLR 162, 9602-9621.
[3] Y. Inatsu, S. Iwazaki and I. Takeuchi (2021).
Active Learning for Distributionally Robust Level-Set Estimation.
Thirty-eighth International Conference on Machine Learning (ICML2021, acceptance rate 21.5% (=1184/5513)), PMLR 139, 4574-4584.
[2] S. Iwazaki, Y. Inatsu and I. Takeuchi (2021).
Mean-Variance Analysis in Bayesian Optimization under Uncertainty.
The 24th International Conference on Artificial Intelligence and Statistics (AISTATS, acceptance rate 29.8% (=455/1527)), PMLR 130, 973-981.
[1] K. Tanizaki, N. Hashimoto, Y. Inatsu, H. Hontani and I. Takeuchi (2020).
Computing Valid P-Values for Image Segmentation by Selective Inference.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR, acceptance rate 22.1% (=1470/6656)), 9553-9562.
Journal (Refereed)
[17] Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi (2025).
Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization.
Transactions on Machine Learning Research (TMLR).
[16] Yu Inatsu (2025).
Bayesian Optimization of Robustness Measures under Input Uncertainty: A Randomized Gaussian Process Upper Confidence Bound Approach.
Transactions on Machine Learning Research.
[15] Shion Takeno, Yu Inatsu and Masayuki Karasuyama (2025).
Regret Analysis for Randomized Gaussian Process Upper Confidence Bound.
Journal of Artificial Intelligence Research, Vol. 84, Article 18, 18:1-33.
[14] Tomonari Tanaka, Hiroyuki Hanada, Hanting Yang, Tatsuya Aoyama, Yu Inatsu, Satoshi Akahane, Yoshito Okura, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi (2025).
Distributionally Robust Coreset Selection under Covariate Shift.
Transactions on Machine Learning Research.
[13] Yu Inatsu, Shion Takeno, Kentaro Kutsukake and Ichiro Takeuchi (2024).
Active Learning for Level Set Estimation Using Randomized Straddle Algorithms.
Transactions on Machine Learning Research.
[12] S. Kusakawa, S. Takeno, Y. Inatsu, K. Kutsukake, S. Iwazaki, T. Nakano, T. Ujihara, M. Karasuyama, I. Takeuchi (2022).
Bayesian Optimization for Cascade-type Multi-stage Processes.
Neural Computation (IF in 2021 = 3.278), 34 (12), 2408–2431.
[11] T. Tsukurimichi, Y. Inatsu, V. N. L. Duy and I. Takeuchi (2022).
Conditional Selective Inference for Robust Regression and Outlier Detection using Piecewise-Linear Homotopy Continuation.
Annals of the Institute of Statistical Mathematics (IF in 2021 = 1.180), 74, 1197–1228.
[10] S. Iwazaki, Y. Inatsu and I. Takeuchi (2021).
Bayesian Quadrature Optimization for Probability Threshold Robustness Measure.
Neural Computation, 33(12), 3413-3466.
[9] K. Inoue, M. Karasuyama, R. Nakamura, M. Konno, D. Yamada, K. Mannen, T. Nagata, Y. Inatsu, H. Yawo, K. Yura, O. Béjà, H. Kandori, I. Takeuchi (2021).
Exploration of natural red-shifted rhodopsins using a machine learning-based Bayesian experimental design.
Communications Biology, 4, 362.
[8] S. Iwazaki, Y. Inatsu and I. Takeuchi (2020).
Bayesian Experimental Design for Finding Reliable Level Set under Input Uncertainty.
IEEE Access, 8, 203982-203993.
[7] Y. Inatsu, M. Karasuyama, K. Inoue and I. Takeuchi (2020).
Active learning for level set estimation under input uncertainty and its extensions.
Neural Computation, 32(12), 2486-2531.
[6] Y. Inatsu, M. Karasuyama, K. Inoue, H. Kandori and I. Takeuchi (2020).
Active Learning of Bayesian Linear Models with High-Dimensional Binary Features by Parameter Confidence-Region Estimation.
Neural Computation, 32(10), 1998-2031.
[5] Y. Inatsu, D. Sugita, K. Toyoura and I. Takeuchi (2020).
Active learning for enumerating local minima based on Gaussian process derivatives.
Neural Computation, 32(10), 2032-2068.
[4] T. Sato and Y. Inatsu (2020).
A Cp type criterion for model selection in the GEE method when both scale and correlation parameters are unknown.
Hiroshima Mathematical Journal, 50(1), 85-115.
[3] Y. Inatsu and S. Imori (2018).
Model Selection Criterion Based on the Prediction Mean Squared Error in Generalized Estimating Equations.
Hiroshima Mathematical Journal, 48(3), 307-334.
[2] Y. Inatsu (2017).
An unbiased Cp type criterion for ANOVA model with a tree order restriction.
Hiroshima Mathematical Journal, 47(2), 181-216.
[1] Y. Inatsu and H. Wakaki (2016).
Asymptotic expansions of the null distribution of the LR test statistic for random-effects covariance structure in a parallel profile model.
Journal of the Japan Statistical Society, 46(1), 51-79.
Journal (Not Refereed)
[1] 稲津 佑 (2016).
AIC criterion for ANOVA models with order restrictions (in Japanese).
RIMS Kôkyûroku (Kyoto University), No. 1999, 47-71.
Proceedings (Not Refereed)
[19] 竹野 思温, 稲津 佑, 烏山 昌幸 (2022).
Regret analysis of randomized GP-UCB algorithms (in Japanese).
IEICE Technical Report, 122 (325), 38-45 (finalist, FY2022 IBISML Research Award).
[18] 稲津 佑, 竹内一郎 (2022).
Multi-objective Bayesian optimization for identifying distributionally robust Pareto fronts (in Japanese).
IEICE Technical Report, 122 (325), 112-119.
[17] 佐藤瑞起, 大森夢拓, 稲津 佑, 竹内一郎 (2022).
More powerful selective inference for k-means clustering and its application to single-cell analysis (in Japanese).
IEICE Technical Report, 121 (321), 54-60.
[16] 杉山諒太, 戸田博己, Vo Nguyen Le Duy, 稲津 佑, 竹内一郎 (2021).
Selective inference for change-point detection in multi-dimensional sequential data (in Japanese).
IEICE Technical Report, 121 (304), PRMU2021-28, 25-30.
[15] 稲津 佑, 竹野思温, 烏山昌幸, 竹内一郎 (2021).
Active learning for distributionally robust chance-constrained optimization problems (in Japanese).
IEICE Technical Report, 121 (80), 47-54 (finalist, FY2021 IBISML Research Award).
[14] 杉山諒太, 戸田博己, Vo Nguyen Le Duy, 稲津 佑, 竹内一郎 (2021).
Selective inference for change-point detection in multi-dimensional sequential data (in Japanese).
IEICE Technical Report, 120 (395), 63-70 (FY2020 IBISML Research Award).
[13] 大森夢拓, 稲津 佑, 竹内一郎 (2021).
Selective inference for convex clustering using parametric programming (in Japanese).
IEICE Technical Report, 120 (395), 9-15.
[12] H. Toda, Y. Inatsu and I. Takeuchi (2020).
Post-selection Inference for Spatio-temporal Trajectory Segmentation.
Proceedings of the workshop "Recent Progress in Spatial and/or Spatio-temporal Data Analysis".
[11] 稲津 佑,竹内 一郎 (2019).
Active learning for level set estimation when inputs have cost-dependent randomness (in Japanese).
Proceedings of the workshop "Mathematics and Developments of Statistics and Machine Learning".
[10] 稲津 佑,竹内 一郎 (2019).
Active learning for level set estimation under cost-based input uncertainty (in Japanese).
Proceedings of the 2019 Japanese Joint Statistical Meeting, p. 214.
[9] 稲津 佑, 椙田大輔, 豊浦和明, 竹内一郎 (2018).
Active learning for identifying local minima based on Gaussian process derivatives (in Japanese).
IEICE Technical Report, 118 (284), 373-380.
[8] 稲津 佑, 烏山 昌幸, 井上 圭一, 神取 秀樹, 竹内 一郎 (2018).
Active learning for identifying regression coefficients in high-dimensional Bayesian linear models via confidence-region estimation (in Japanese).
Proceedings of the 2018 Japanese Joint Statistical Meeting, p. 324.
[7] 稲津 佑, 竹内一郎 (2017).
Selection bias correction for active learning based on selective inference (in Japanese).
IEICE Technical Report, 117 (293), 289-296.
[6] 稲津 佑 (2016).
AIC and Cp criteria for ANOVA models with order-restricted parameters (in Japanese).
Proceedings of the 2016 Japanese Joint Statistical Meeting.
[5] 稲津 佑 (2015).
High-dimensional asymptotic expansions of the likelihood ratio statistic for the random-effects covariance structure in a parallel profile model (in Japanese).
Proceedings of the workshop "Theory and Methodology for Large-Scale Complex Data: Frontier Trends".
[4] 稲津 佑 (2015).
AIC criterion for models with restrictions on both the mean and the variance (in Japanese).
Proceedings of the 2015 Japanese Joint Statistical Meeting.
[3] 稲津 佑 (2014).
Asymptotic expansion of the likelihood ratio test statistic for the covariance structure in a random-effects parallel profile model (in Japanese).
Proceedings of the 2014 Japanese Joint Statistical Meeting.
[2] 稲津 佑 (2013).
Model selection for generalized estimating equations with nuisance parameters.
Proceedings of the 2013 Japanese Joint Statistical Meeting.
[1] 稲津 佑, 伊森 晋平 (2012).
Selection criteria for models based on generalized estimating equations (in Japanese).
Proceedings of the 2012 Japanese Joint Statistical Meeting.
Preprints
[3] Keiichiro Seno, Kota Matsui, Shogo Iwazaki, Yu Inatsu, Shion Takeno, Shigeyuki Matsui.
Dose-finding design based on level set estimation in phase I cancer clinical trials.
arXiv preprint, arXiv:2504.09157.
[2] Y. Inatsu.
Bayesian Optimization of Robustness Measures Using Randomized GP-UCB-based Algorithms under Input Uncertainty.
arXiv preprint, arXiv:2504.03172.
[1] R. Sugiyama, H. Toda, V. N. L. Duy, Y. Inatsu and I. Takeuchi.
Valid and Exact Statistical Inference for Multi-dimensional Multiple Change-Points by Selective Inference.
arXiv preprint, arXiv:2110.08989.
Book
[3] 少ないデータによるAI・機械学習の進め方と精度向上、説明可能なAIの開発
(Contributed chapter: Chapter 4, Section 10, "Improving the Efficiency of Experimental Processes Based on Bayesian Optimization", pp. 215-224)
Publisher: 株式会社技術情報協会
ISBN: 978-4-86798-048-4
Published: October 31, 2024
[2] マテリアルズインフォマティクス・量子コンピュータおよび自然言語処理と自律型実験システムを活用した次世代材料開発
(Contributed chapter: Chapter 2, Section 4, "Bayesian Optimization for Robustness Measures", pp. 77-86)
Publisher: 株式会社AndTech
ISBN: 978-4-9091118-68-4
Published: February 16, 2024
[1] 実験の自動化・自律化によるR&Dの効率化と運用方法
(Contributed chapter: Chapter 8, Section 2, "Improving the Efficiency of Experimental Processes Using Bayesian Optimization", pp. 415-424)
Publisher: 株式会社技術情報協会
ISBN: 978-4-86104-994-1
Published: December 27, 2023
Technical Report
[1] Y. Inatsu (2016).
Akaike Information Criterion for ANOVA Model with a Simple Order Restriction.
TR 16-13, Statistical Research Group, Hiroshima University, Hiroshima.