Jinho Choi | AI Research Scientist | Ph.D. Candidate
I’m a Ph.D. candidate at KAIST specializing in AI safety, focusing on autonomous agents, mechanistic interpretability, and AI control.
Previously, I conducted research and published papers on topics like model fairness, image segmentation, and human-computer interaction.
With over five years of experience as a research engineer, I have transformed cutting-edge AI research into impactful real-world applications, including conversational agents with 500K+ users, video job interview assessment models adopted by 150+ companies, and advanced 3D biomedical imaging analysis systems.
Interpretability
Sparse autoencoders reveal selective remapping of visual concepts [PDF] [Demo]
Hyesu Lim, Jinho Choi, Jaegul Choo, and Steffen Schneider
arXiv (under review) 2024
Developed a sparse autoencoder (SAE) applied to the CLIP vision transformer to identify 49K candidate concepts across different input regions.
Analyzed image-, class-, and task-level representations, highlighting the relationship between SAE concepts and model predictions.
Demonstrated that prompt-learning-based adaptation methods adjust concept mappings for downstream tasks.
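Below is a minimal sketch of the core idea, with illustrative dimensions and names (not the released implementation): a sparse autoencoder trained on ViT token activations with an L1 sparsity penalty, whose hidden units serve as candidate concepts.

```python
# Minimal SAE sketch over ViT token activations (illustrative shapes only).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, n_concepts=49152):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_model)

    def forward(self, x):
        # x: (num_tokens, d_model) activations taken from a CLIP ViT layer
        codes = torch.relu(self.encoder(x))   # sparse concept activations
        recon = self.decoder(codes)           # reconstructed activations
        return recon, codes

sae = SparseAutoencoder()
acts = torch.randn(16, 768)                   # placeholder activations
recon, codes = sae(acts)
# Reconstruction loss plus an L1 penalty that keeps concept codes sparse.
loss = nn.functional.mse_loss(recon, acts) + 1e-3 * codes.abs().mean()
```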
Autonomous agents
ZUICY: Conversation with YouTube creators [link]
Genesis Lab Inc, Feb. 2024 ~ Nov. 2024
Developed conversational agents mimicking YouTube creators using Azure OpenAI and LangChain, deployed via REST API.
Improved conversation quality through prompt chaining, retrieval-augmented generation, and memory recall.
Built an automated pipeline for efficient creation of agent profiles and memory, leading to 50+ agents and 500K+ downloads on the Play Store.
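A hedged sketch of the prompt-chaining and retrieval pattern used here, with a hypothetical deployment name and a toy in-memory retrieval step standing in for the production vector store:

```python
# Sketch of a creator-persona reply: retrieve memories, then condition the
# model on the profile plus retrieved context (placeholder credentials).
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",
    api_version="2024-02-01",
    azure_endpoint="https://<resource>.openai.azure.com",
)

def answer(user_msg, profile, memories):
    # Step 1: recall creator memories relevant to the user message
    # (a vector store handles this in the real pipeline).
    context = "\n".join(m for m in memories if any(w in m for w in user_msg.split()))
    # Step 2: generate the reply conditioned on profile + retrieved memory.
    resp = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name (assumed)
        messages=[
            {"role": "system", "content": f"You are the YouTube creator described here:\n{profile}"},
            {"role": "system", "content": f"Relevant memories:\n{context}"},
            {"role": "user", "content": user_msg},
        ],
    )
    return resp.choices[0].message.content
```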
Fairness & talking face generation
viewinterHR 2.0: Automated job interview assessment platform [link]
Genesis Lab Inc, Sep. 2022 ~ Dec. 2023
Built a bias diagnosis pipeline to extract attributes such as gender and lighting conditions from data, analyzing their impact on model outcomes (see the sketch below).
Deployed the pipeline in job interview assessment models used by 150+ companies to improve trustworthiness.
Developed a talking face generation method with precise lip sync and high video clarity. Applied it to create an AI interviewer for a job interview platform.
Collaborated with HR experts to ensure the AI interviewer accurately reflected real interview dynamics.
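The bias diagnosis above rests on comparing group-level selection rates. A toy statistical parity check, with hypothetical attributes and data rather than the internal pipeline:

```python
# Statistical parity difference per protected attribute (toy data).
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "lighting": ["dark", "bright", "bright", "dark", "bright", "bright"],
    "passed": [1, 1, 0, 1, 1, 0],
})

def statistical_parity_difference(df, attribute, outcome="passed"):
    rates = df.groupby(attribute)[outcome].mean()  # selection rate per group
    return rates.max() - rates.min()               # 0 means parity

for attr in ["gender", "lighting"]:
    print(attr, statistical_parity_difference(df, attr))
```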
Autonomous agents
Multi-agent-based decision-making system
Genesis Lab Inc, Aug. 2023 ~ Feb. 2024
Designed and implemented a multi-agent system prototype to support decision-making by querying relational databases and performing data-driven reasoning.
Developed a prompt chaining mechanism for SQL query generation and reasoning, with final answers refined through agent discussions.
Enhanced user interaction by clarifying ambiguous queries and dividing complex ones into sub-queries.
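A minimal sketch of the chained text-to-SQL flow, with a stand-in `llm` call (hypothetical); in the real system multiple agents critique the draft answer before it is returned:

```python
# Prompt-chained SQL generation and reasoning over a SQLite database.
import sqlite3

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM call")

def answer_question(question: str, schema: str, db_path: str) -> str:
    # Chain 1: clarify and decompose the question against the schema.
    sub_questions = llm(f"Schema:\n{schema}\nSplit into sub-questions:\n{question}")
    # Chain 2: generate SQL for the sub-questions.
    sql = llm(f"Schema:\n{schema}\nWrite one SQLite query answering:\n{sub_questions}")
    rows = sqlite3.connect(db_path).execute(sql).fetchall()
    # Chain 3: reason over the rows; agent discussion would refine this draft.
    return llm(f"Question: {question}\nRows: {rows}\nAnswer with reasoning:")
```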
Fairness
Fairness-aware Multimodal Learning in Automatic Video Interview Assessment [PDF]
Changwoo Kim, Jinho Choi, Jongyeon Yoon, Daehun Yoo, Woojin Lee
IEEE Access 2023
Developed a bias mitigation algorithm that reduced group disparities in both feature space and output distribution.
Achieved a 35% reduction in gender bias in job interview assessment models without compromising accuracy.
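A conceptual sketch of penalizing group disparities in both feature space and output distribution; this is a generic formulation for illustration, not the exact algorithm from the paper:

```python
# Group-disparity penalty added to the task loss during training (toy data).
import torch

def disparity_penalty(features, scores, group):
    # group: boolean mask for one protected group (e.g., gender == "F")
    feat_gap = (features[group].mean(0) - features[~group].mean(0)).norm()
    out_gap = (scores[group].mean() - scores[~group].mean()).abs()
    return feat_gap + out_gap

feats = torch.randn(32, 128, requires_grad=True)     # toy multimodal features
scores = feats.mean(dim=1)                           # toy assessment scores
group = torch.rand(32) > 0.5
loss_fair = disparity_penalty(feats, scores, group)  # added to the task loss
```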
Human Computer Interaction | Image Segmentation
Slice and Conquer: A Planar-to-3D Framework for Efficient Interactive Segmentation of Volumetric Images [PDF]
Wonwoo Cho, Dongmin Choi, Hyesu Lim, Jinho Choi, Saemee Choi, Hyun-seok Min, Sungbin Lim, Jaegul Choo
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024
Developed a human-in-the-loop segmentation model for 3D biomedical imaging, generating 3D masks from labeled 2D slices and recommending corrections for uncertain 2D slices.
Improved annotation speed and accuracy by 9.5% over competitors.
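A sketch of the slice-recommendation idea: rank the 2D slices of a predicted 3D probability volume by uncertainty and ask the annotator to correct the worst ones. Entropy is used here as an assumed uncertainty measure; the paper's exact criterion may differ.

```python
# Recommend the most uncertain axial slices for human correction (toy volume).
import numpy as np

probs = np.random.rand(64, 256, 256)   # per-voxel foreground probability, D x H x W
eps = 1e-8
entropy = -(probs * np.log(probs + eps) + (1 - probs) * np.log(1 - probs + eps))
slice_uncertainty = entropy.mean(axis=(1, 2))        # one score per axial slice
to_review = np.argsort(slice_uncertainty)[::-1][:5]  # most uncertain slices first
print("Recommend re-labeling slices:", to_review)
```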
Image Segmentation
Label-free three-dimensional analyses of live cells with deep-learning-based segmentation [PDF]
Jinho Choi, Hye-Jin Kim, Gyuhyeon Sim, Sumin Lee, Wei Sun Park, Jun Hyung Park, Ha-Young Kang, Moosung Lee, Won Do Heo, Jaegul Choo, Hyunseok Min, YongKeun Park
arXiv 2021
Proposed and experimentally demonstrated three-dimensional segmentation of subcellular organelles in unlabelled live cells, exploiting a 3D U-Net-based architecture.
Presented high-precision three-dimensional segmentation of the cell membrane, nucleus, nucleoli, and lipid droplets across various cell types.
Performed time-lapse analyses of the dynamics of activated immune cells using label-free segmentation.
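A minimal 3D U-Net-style building block for volumetric inputs, with illustrative channel sizes and without skip connections; it is not the trained architecture from the paper.

```python
# Tiny 3D encoder/decoder over a volumetric patch (illustrative only).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
    )

encoder = nn.Sequential(conv_block(1, 16), nn.MaxPool3d(2), conv_block(16, 32))
decoder = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(32, 16), nn.Conv3d(16, 5, 1))

x = torch.randn(1, 1, 32, 64, 64)   # tomogram patch: (batch, channel, D, H, W)
logits = decoder(encoder(x))        # per-voxel scores for 5 assumed organelle classes
```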
Image Segmentation
3D cell instance segmentation via point proposals using cellular components [PDF]
Jinho Choi, Junwoo Park, Hyeon-seok Min, Hyungjoo Cho, Sungbin Lim, Jaegul Choo
Analysis of Biomolecules, Cells, and Tissues XIX, 2021
Developed 3D cell instance segmentation models, achieving state-of-the-art accuracy and outperforming competitors by 26%.
Led dataset construction in collaboration with biology experts. Deployed the models in commercial cell analysis software using Libtorch.
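As a rough illustration of turning predicted cell-center points plus a foreground mask into instances, here is a seeded-watershed sketch; it is a generic stand-in, not the paper's exact point-proposal method.

```python
# Seeded watershed: center-point proposals become instance labels (toy data).
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

foreground = np.zeros((32, 64, 64), dtype=bool)   # toy foreground mask with two cells
foreground[8:24, 10:30, 10:30] = True
foreground[8:24, 35:55, 35:55] = True

seeds = np.zeros_like(foreground, dtype=int)      # predicted cell-center points
seeds[16, 20, 20] = 1
seeds[16, 45, 45] = 2

distance = ndimage.distance_transform_edt(foreground)
instances = watershed(-distance, markers=seeds, mask=foreground)  # one label per cell
print(np.unique(instances))                        # [0 1 2]
```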
Human Computer Interaction
Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization [PDF]
Jinho Choi, Sanghun Jung, Deok Gun Park, Jaegul Choo, Niklas Elmqvist
Computer Graphics Forum (EuroVis), 2019
Developed a deep neural network pipeline to extract key elements from web-based visualizations and created a Google Chrome extension that integrates with screen readers to assist visually impaired users.
Achieved an extraction rate of approximately 88%, outperforming competitors with rates of 50–75%.
The paper has been cited over 150 times.
Human Computer Interaction | Visual analytics
TopicOnTiles: Tile-Based Spatio-Temporal Event Analytics on Social Media [PDF]
Minsuk Choi, Sungbok Shin, Jinho Choi, Scott Langevin, Christopher Bethune, Philippe Horne, Nathan Kronenfeld, Ramakrishnan Kannan, Barry Drake, Haesun Park, Jaegul Choo
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI), 2018
Improved a recently proposed topic modeling method that can extract spatio-temporally exclusive topics corresponding to a particular region and a time point.
Utilized a tile-based map interface to efficiently handle large-scale data in parallel, highlighting anomalous tiles with a novel glyph visualization that encodes the degree of anomaly computed by the exclusive topic modeling process.
Visual analytics
STExNMF: Spatio-Temporally Exclusive Topic Discovery for Anomalous Event Detection [PDF]
Sungbok Shin, Minsuk Choi, Jinho Choi, Scott Langevin, Christopher Bethune, Philippe Horne, Nathan Kronenfeld, Ramakrishnan Kannan, Barry Drake, Haesun Park, Jaegul Choo
IEEE International Conference on Data Mining (ICDM), 2017
Presented a tile-based spatiotemporally exclusive topic modeling approach called STExNMF, based on a novel nonnegative matrix factorization (NMF) technique.
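An illustrative per-tile NMF topic extraction with scikit-learn; the exclusivity re-weighting across neighboring tiles and time steps is the paper's contribution and is only hinted at in the comments.

```python
# Per-tile topic extraction with plain NMF (toy tweets for one map tile).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

tile_docs = ["flood warning river rising", "concert downtown tonight",
             "river flood road closed", "music festival crowd downtown"]
X = TfidfVectorizer().fit_transform(tile_docs)
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)   # document-topic weights for this tile
H = nmf.components_        # topic-term weights
# STExNMF additionally down-weights terms that also dominate topics in
# neighboring tiles and time steps, keeping spatio-temporally exclusive topics.
```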
Implemented large audio language models for speech understanding tasks, including automatic speech recognition and emotion recognition. Fine-tuned models to enhance performance in low-resource languages like Korean.
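A minimal stand-in for the ASR side of this work, using an open speech model via Hugging Face Transformers; the model name and audio are placeholders, and fine-tuning on Korean speech/transcript pairs would follow the same preprocessing.

```python
# Transcribe placeholder 16 kHz audio with an open ASR model.
import numpy as np
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
audio = np.zeros(16000, dtype=np.float32)   # 1 s of silence as placeholder audio
print(asr(audio)["text"])                   # transcription string
```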
Designed and implemented a multi-agent system prototype to support decision-making by querying relational databases and performing data-driven reasoning. Enhanced the accuracy of final outputs by refining answers through agent discussions, clarifying ambiguous queries, and breaking down complex queries into manageable sub-queries.
Developed a bias diagnosis pipeline to extract attributes like gender and light conditions from data, analyzing their impact on model outcomes. Designed statistical parity-based metrics to quantify bias. Deployed the pipeline in job interview assessment models used by 150+ companies, contributing to AI Trustworthiness certification by the Ministry.
Led the development of a bias mitigation algorithm that reduced group disparities in both feature space and output distribution. Achieved a 35% reduction in gender bias in job interview assessment models without compromising accuracy. Scaled the algorithm to cover multiple attributes, improving fairness in scenarios with imbalanced data distributions.
Developed a talking face generation method with precise lip-sync and high video clarity, leveraging seamless chroma key compositing to enhance realism. Applied it to create an AI interviewer for a job interview platform, enabling natural, responsive interactions. Collaborated closely with HR experts to ensure the AI interviewer accurately reflected real interview dynamics.
Led the development of a human-in-the-loop segmentation model for 3D biomedical imaging, generating 3D masks from labeled 2D slices and recommending corrections for uncertain slices. Increased annotation speed and accuracy by 9.5% over competitors. Deployed the model as the in-house annotation system, significantly improving the efficiency of the annotation process.
Developed 3D cell segmentation models for four subcellular organelles and cell instances, achieving state-of-the-art accuracy and outperforming competitors by 26%. Collaborated with biology experts to construct datasets and refine model specifications. Integrated and deployed the models into commercial cell analysis software using Libtorch.
Ph.D. in Artificial Intelligence, Korea Advanced Institute of Science and Technology (Advisor: Jaegul Choo), Aug. 2022 - Present
M.S. in Computer Science and Engineering, Korea University (Advisor: Jaegul Choo), Mar. 2017 - Aug. 2019
B.S. in Computer Science and Engineering, Korea University, Mar. 2013 - Feb. 2017