I am a postdoctoral researcher working at the Visual Intelligence Center and the Machine Learning Group at UiT – The Arctic University of Norway.
I develop deep learning algorithms for learning from limited labeled data and for multimodal learning. In particular, my research focuses on effectively leveraging geometric concepts in feature space, such as proxies, prototypes, and Voronoi cells, to improve probabilistic classification models.
Common prototype-based medical image few-shot segmentation (FSS) methods model foreground and background classes using class-specific prototypes. However, given the high variability of the background, a more promising direction is to focus solely on foreground modeling, treating the background as an anomaly—an approach introduced by ADNet. Yet, ADNet faces three key limitations: dependence on a single prototype per class, a focus on binary classification, and fixed thresholds that fail to adapt to patient and organ variability. To address these shortcomings, we propose the Tied Prototype Model (TPM), a principled reformulation of ADNet with tied prototype locations for foreground and background distributions. Building on its probabilistic foundation, TPM naturally extends to multiple prototypes and multi-class segmentation while effectively separating non-typical background features. Notably, both extensions lead to improved segmentation accuracy. Finally, we leverage naturally occurring class priors to define an ideal target for adaptive thresholds, boosting segmentation performance. Taken together, TPM provides a fresh perspective on prototype-based FSS for medical image segmentation. The code can be found at https://github.com/hjk92g/TPM-FSS.
Hyeongji Kim, Stine Hansen, Michael Kampffmeyer
Camera-ready version (MICCAI 2025 - Oral)
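As an illustrative sketch of the tied-prototype idea above, the following models foreground and background features as isotropic Gaussians that share the prototype as their mean but differ in scale, so the foreground posterior depends only on the distance to the prototype. The function name, the Gaussian densities, the scales `sigma_fg`/`sigma_bg`, and the class prior are hypothetical illustration choices, not the paper's exact parameterization.

```python
import numpy as np

def tied_prototype_fg_prob(feat, proto, sigma_fg=1.0, sigma_bg=2.0, prior_fg=0.5):
    """Foreground posterior under a tied-prototype model (illustrative sketch).

    Foreground and background are modeled as isotropic Gaussians with the
    SAME mean (the prototype) but different scales, so a feature far from
    the prototype is assigned to the broader background distribution.
    """
    d = feat.shape[-1]
    sq = np.sum((feat - proto) ** 2, axis=-1)  # squared distance to prototype
    # class log-scores (log density + log prior), dropping shared constants
    log_fg = -0.5 * sq / sigma_fg**2 - d * np.log(sigma_fg) + np.log(prior_fg)
    log_bg = -0.5 * sq / sigma_bg**2 - d * np.log(sigma_bg) + np.log(1.0 - prior_fg)
    return 1.0 / (1.0 + np.exp(log_bg - log_fg))  # sigmoid of the log-odds
```

Because both densities peak at the same location, thresholding this posterior reduces to thresholding the distance to the prototype, with the prior shifting the effective threshold.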
In this paper, we propose ProxyDR, a novel metric learning method for hyperspherical embeddings. Through the adoption of a distance ratio-based formulation, ProxyDR resolves the fundamental shortcomings of the conventional squared distance softmax formulation. Notably, our proposed method addresses the near-uniform positioning of class representatives that obstructs effective learning of semantic relationships among classes—a phenomenon demonstrated by our theoretical and experimental analyses. Moreover, by employing proxies as class representatives, our method can be effortlessly incorporated into established classification frameworks. We rigorously evaluate ProxyDR against conventional methods using diverse datasets, including CIFAR100 and NABirds, demonstrating superiority in capturing hierarchical structures while maintaining conventional classification accuracy.
Hyeongji Kim, Changkyu Choi, Michael Kampffmeyer, Terje Berge, Pekka Parviainen, and Ketil Malde
Workshop paper (ECCV 2024 workshop - Beyond Euclidean)
Previous studies on robustness have argued that there is a tradeoff between accuracy and adversarial accuracy, and that this tradeoff can be inevitable even when generalization is neglected. We argue that the tradeoff is inherent to the commonly used definition of adversarial accuracy, which relies on an adversary that can construct adversarial points within epsilon-balls around data points. As epsilon grows large, such an adversary may use real data points from other classes as adversarial examples. We propose a Voronoi-epsilon adversary that is constrained both by Voronoi cells and by epsilon-balls, balancing these two notions of perturbation. As a result, adversarial accuracy based on this adversary avoids a tradeoff between accuracy and adversarial accuracy on training data even when epsilon is large. Finally, we show that a nearest neighbor classifier is the maximally robust classifier against the proposed adversary on the training data.
Hyeongji Kim, Pekka Parviainen, and Ketil Malde
Proceedings (NLDL 2023)
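A minimal membership check for the Voronoi-epsilon constraint described above might look as follows. The function name and the strict-inequality treatment of Voronoi cell boundaries are assumptions made for illustration.

```python
import numpy as np

def in_voronoi_eps_region(x_adv, x, train_pts, eps):
    """Check whether x_adv is a legal perturbation for a Voronoi-epsilon
    adversary anchored at training point x (illustrative sketch).

    The point must lie (1) inside the epsilon-ball around x and
    (2) inside x's Voronoi cell, i.e. strictly closer to x than to any
    other training point.
    """
    d_x = np.linalg.norm(x_adv - x)
    if d_x > eps:  # outside the epsilon-ball
        return False
    d_all = np.linalg.norm(train_pts - x_adv, axis=1)
    others = np.linalg.norm(train_pts - x, axis=1) > 0  # exclude x itself
    return bool(np.all(d_x < d_all[others]))
```

Under this constraint a nearest neighbor classifier is trivially robust on the training data: every legal perturbation stays inside the anchor point's Voronoi cell and therefore keeps the anchor's label, which matches the abstract's final claim.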
In metric learning, the goal is to learn an embedding so that data points with the same class are close to each other and data points with different classes are far apart. We propose a distance-ratio-based (DR) formulation for metric learning. Like the softmax-based formulation, it models p(y=c|x′), the probability that a query point x′ belongs to class c. The DR formulation has two useful properties. First, the corresponding loss is unaffected by scale changes of the embedding. Second, it outputs the optimal (maximum or minimum) classification confidence scores at the representative points of the classes. To demonstrate the effectiveness of our formulation, we conduct few-shot classification experiments with softmax-based and DR formulations on the CUB and mini-ImageNet datasets. The results show that the DR formulation generally enables faster and more stable metric learning than the softmax-based formulation, achieving improved or comparable generalization performance.
Hyeongji Kim, Pekka Parviainen, and Ketil Malde
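As a sketch of the distance-ratio idea above, the following computes class probabilities inversely proportional to a power of the distance to each class representative. The power and the tie-handling at zero distance are illustrative assumptions rather than the paper's exact choices; note that scaling the embedding leaves the probabilities unchanged, and a query placed exactly on a representative receives confidence 1, in line with the two properties described in the abstract.

```python
import numpy as np

def dr_probabilities(query, reps, power=2.0, eps=1e-12):
    """Distance-ratio class probabilities: p(y=c|x') ∝ d(x', r_c)^(-power).

    Illustrative sketch. `reps` holds one representative point per class,
    one per row; `query` is a single embedded point.
    """
    d = np.linalg.norm(reps - query, axis=1)  # distances to representatives
    if np.any(d < eps):                       # query sits on a representative:
        p = (d < eps).astype(float)           # full confidence for that class
        return p / p.sum()
    inv = d ** (-power)
    return inv / inv.sum()                    # normalize into probabilities
```

Because only ratios of distances enter the result, multiplying the whole embedding by a constant leaves `dr_probabilities` unchanged, which is the scale-invariance property the abstract highlights.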