Preprint
H. Chang and J. Kim. Identifiability of the minimum-trace directed acyclic graph and hill climbing algorithms without strict local optima under weakly increasing error variances, 2025. [arXiv]
Refereed articles
6. H. Chang and Q. Zhou. Dimension-free Relaxation Times of Informed MCMC Samplers on Discrete Spaces. Bernoulli, 2025. [arXiv]
5. H. Chang, J. Cai and Q. Zhou. Order-based structure learning without score equivalence. Biometrika, 2024. [link] [arXiv]
4. Q. Zhou and H. Chang. Complexity analysis of Bayesian learning of high-dimensional DAG models and their equivalence classes. Annals of Statistics, 2023. [link] [arXiv]
3. H. Chang, CJ. Lee, ZT. Luo, H. Sang and Q. Zhou. Rapidly mixing multiple-try Metropolis algorithms for model selection problems. NeurIPS, 2022. Selected for oral presentation (201 of 10,411 submissions, ~2% acceptance rate). [link] [arXiv]
2. H. Jeong, H. Chang and EA. Valdez. A non-convex regularization approach for stable estimation of loss development factors. Scandinavian Actuarial Journal, 2021. [link]
1. Y. Kim, Y. Kwon, H. Chang and MC. Paik. Lipschitz continuous autoencoders in application to anomaly detection. AISTATS, 2020. [link]