Giseung Park

Hello! I'm Giseung. [CV]

I recently completed my Ph.D. at the Korea Advanced Institute of Science and Technology (KAIST) under the guidance of Prof. Youngchul Sung.

As a researcher, I am eager to collaborate with experts from diverse backgrounds to solve challenging and impactful problems. Driven by a passion for groundbreaking research, I aim to contribute to projects that have a significant impact: tackling important problems with novel methods and making research widely accessible!

Contact: gs.park [at] kaist [dot] ac [dot] kr 

Research Interests

Reinforcement Learning (RL)

Ideal RL typically requires (i) states with the Markov property and (ii) a well-designed scalar reward. However, many real-world scenarios do not meet these requirements. My research focuses on partially observable RL and multi-objective RL to address situations where one or both of these conditions are not guaranteed.
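
To make these two requirements concrete, the textbook objective they underpin is the expected discounted return of a policy in a Markov decision process (a standard-notation sketch, not tied to any particular paper of mine):

\[
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right]
\]

Partially observable RL replaces the Markovian state s_t with an observation o_t that reveals only part of the state, and multi-objective RL replaces the scalar reward r with a reward vector that admits no single canonical scalarization.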

I believe that advancing the practical application of RL to real-world problems requires a strong focus on multi-modal RL and multi-agent system control. My current research interests specifically include developing robust and data-efficient multi-modal RL, handling non-stationarity in multi-agent systems, and exploring RL applications.

News

(Aug. 2024) Our lab has joined a research project called AI Hub Project (funded by the Korean government), where we are leading the multi-modal RL-based decision-making branch to develop general-purpose foundation models for robotics. I’m excited to collaborate with numerous esteemed research groups on this project! [News link with details of the entire project in Korean.]

(Jul. 2024) I presented a poster at ICML in Vienna, Austria 🇦🇹. (Check out my photo above! 😊)

(Jun. 2024) I passed my Ph.D. dissertation evaluation! I sincerely thank the committee members (Profs. Leshem, Kim, Park, and Ahn, together with my advisor, Prof. Sung) for their kind and helpful feedback.

(May 2024) Our paper has been accepted to ICML 2024! I'm very proud of this achievement, which reflects the dedicated effort of our Korea-Israel international research project.


Selected Publications

Adaptive Multi-Model Fusion Learning for Sparse-Reward Reinforcement Learning

Giseung Park, Whiyoung Jung, Seungyul Han, Sungho Choi, Youngchul Sung

Under revision

The Max-Min Formulation of Multi-Objective Reinforcement Learning: From Theory to a Model-Free Algorithm

Giseung Park, Woohyeon Byeon, Seongmin Kim, Elad Havakuk, Amir Leshem, Youngchul Sung

International Conference on Machine Learning (ICML), 2024 (Acceptance rate: 27.5%)

TL;DR: An explicit formulation of max-min multi-objective reinforcement learning (MORL) and a practical model-free algorithm for max-min MORL.
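
As a rough sketch, the max-min criterion optimizes the worst-case objective among the K reward components (standard notation; the paper gives the exact formulation and algorithm):

\[
\max_{\pi} \; \min_{1 \le i \le K} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r_i(s_t, a_t) \right]
\]

rather than maximizing a fixed weighted sum of the objectives.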

Blockwise Sequential Model Learning for Partially Observable Reinforcement Learning (Oral, Top 4.6%)

Giseung Park, Sungho Choi, Youngchul Sung

AAAI Conference on Artificial Intelligence (AAAI), 2022 (Acceptance rate: 15.0%)

TL;DR: A new architecture based on sequential block inputs in POMDPs, learned through a novel direct gradient estimation using self-normalized importance sampling.
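
For reference, a self-normalized importance sampling estimator has the generic textbook form (not the paper's exact estimator):

\[
\widehat{\mathbb{E}}_{p}\big[f(x)\big] \;=\; \frac{\sum_{n=1}^{N} w_n\, f(x_n)}{\sum_{n=1}^{N} w_n},
\qquad w_n = \frac{p(x_n)}{q(x_n)}, \quad x_n \sim q,
\]

which normalizes the importance weights so the estimate stays well behaved even when the target density is known only up to a constant.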

Population-Guided Parallel Policy Search for Reinforcement Learning

Whiyoung Jung, Giseung Park, Youngchul Sung

International Conference on Learning Representations (ICLR), 2020 (Acceptance rate: 26.5%)

TL;DR: Multiple learners share a common replay buffer and collaboratively search for a good policy with guidance from the best policy information.
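
One generic way to express such guidance (my own sketch, not necessarily the paper's exact objective) is to augment each learner's objective with a proximity term toward the current best policy:

\[
\max_{\theta_i} \; J(\pi_{\theta_i}) \;-\; \beta\, D\big(\pi_{\theta_i}, \pi_{\text{best}}\big),
\]

where J is the usual RL return, D is a policy distance, and β controls how strongly each learner is pulled toward the best policy found so far.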