Face privacy is essential because facial recognition (FR) technology can covertly identify individuals from images or video streams, raising serious privacy concerns. As FR technology advances, the risk grows for individuals who are unknowingly recorded in public spaces, where security vulnerabilities or data breaches can expose sensitive information. International regulations, such as the GDPR, highlight the need to protect personal data, including facial images, from misuse and unauthorized access. Protecting face privacy therefore involves measures such as anonymization, in which identifiable facial features are obscured or removed, safeguarding individual identities while complying with privacy laws.
Black Hole-Driven Identity Absorbing in Diffusion Models
Recent advances in diffusion models have positioned them as powerful generative frameworks for high-resolution image synthesis across diverse domains. The emerging “h-space” within these models, defined by the bottleneck activations of the denoiser, offers promising pathways for semantic image editing similar to GAN latent spaces. However, as demand grows for content erasure and concept removal, privacy concerns highlight the need for identity disentanglement in the latent space of diffusion models. The high-dimensional latent space poses challenges for identity removal, as traversing it with random or orthogonal directions often leads to semantically unvalidated regions, resulting in unrealistic outputs. To address these issues, we propose Black Hole-Driven Identity Absorption (BIA), a novel approach for identity erasure within the latent space of diffusion models. BIA uses a “black hole” metaphor, where the latent region representing a specified identity acts as an attractor, drawing in nearby latent points of surrounding identities to “wrap” the black hole. Instead of relying on random traversals for optimization, BIA employs an identity absorption mechanism that attracts and wraps nearby validated latent points associated with other identities to achieve a vanishing effect for the specified identity. Our method effectively prevents the generation of a specified identity while preserving other attributes, as validated by improved identity similarity (SID) and FID scores, qualitative evaluations, and user studies compared to state-of-the-art methods.
Our proposed de-identification architecture is built around the black hole region.
The latent representation of the target face is obtained through diffusion inversion, and multiple nearby latent points are sampled to capture local variations around the target identity in the latent space. Each sampled latent point is decoded and compared with the target face using a face recognition model. Based on identity similarity, a decision boundary is learned to separate the target identity from surrounding identities, forming the black hole region in the latent space.
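The region-estimation step above can be sketched in a few lines. This is a minimal toy illustration, not the actual implementation: `identity_similarity` stands in for decoding each latent and comparing the decoded face to the target with a frozen face recognition model, the 0.5 verification threshold is assumed, and the black hole region is approximated here as a ball around the target latent.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32

# Stand-in for diffusion inversion: the latent code of the target face.
z_target = rng.normal(size=d)

# Sample latent points at varying distances around the target.
scales = rng.uniform(0.1, 2.0, size=(400, 1))
z_samples = z_target + scales * rng.normal(size=(400, d))

def identity_similarity(z):
    # Illustration only: in the real pipeline each latent is decoded and the
    # decoded face is compared to the target with a frozen FR model.
    return np.exp(-np.linalg.norm(z - z_target) / np.sqrt(d))

sims = np.array([identity_similarity(z) for z in z_samples])
same_id = sims > 0.5  # assumed FR verification operating point

# Approximate the black hole region as the smallest ball around z_target
# containing every sampled latent that still verifies as the target identity.
dists = np.linalg.norm(z_samples - z_target, axis=1)
radius = dists[same_id].max()
print(f"{same_id.sum()}/{len(z_samples)} sampled latents inside, radius={radius:.2f}")
```

In practice the decision boundary would be learned from the labelled samples rather than taken as a fixed ball, but the labelling logic is the same: similarity to the target face decides which side of the boundary a latent falls on.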
The input face is first mapped into the latent space of a pre-trained diffusion model. Nearby latent samples are generated and compared using a face recognition model to separate identity-related features from others. Based on this separation, a clear identity boundary is learned, defining a latent region that represents the target identity to be removed (the “black hole”).
Latent representations from neighboring, different identities are pulled toward the black hole region to absorb and neutralize the target identity. This process removes identity-specific information while preserving other attributes such as facial expression, lighting, and background, resulting in realistic identity-unlearned images.
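The absorption step described above can be sketched as a radial pull toward the black hole region. This is a hedged toy version under assumed names (`absorb`, `step`): real latents come from diffusion inversion of neighboring identities, and the pull strength and stopping rule would be tuned so that the absorbed latents wrap the region boundary rather than collapse onto its center.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
center = rng.normal(size=d)  # center of the black hole region
radius = 2.0                 # its learned extent

# Latents of neighboring (different) identities outside the region.
neighbours = center + 5.0 * rng.normal(size=(8, d))

def absorb(z, center, radius, step=0.5):
    """Pull a latent toward the black hole boundary along the radial direction."""
    direction = center - z
    dist = np.linalg.norm(direction)
    if dist <= radius:  # already inside: nothing to absorb
        return z
    # Move a fraction of the way toward the boundary, never past it.
    return z + step * (dist - radius) * direction / dist

pulled = np.array([absorb(z, center, radius) for z in neighbours])
before = np.linalg.norm(neighbours - center, axis=1)
after = np.linalg.norm(pulled - center, axis=1)
print("mean distance to region:", round(before.mean(), 2), "->", round(after.mean(), 2))
```

Iterating this pull draws the neighboring latents onto the region boundary, which is the "wrapping" effect the method relies on to make the target identity vanish.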
The figure illustrates the qualitative evaluation protocol for identity unlearning: multiple input images of different identities are processed with three methods (GUIDE, a baseline, and the proposed approach) under the same experimental settings to assess identity removal and attribute preservation.
Compared to GUIDE and the baseline, the proposed method more effectively suppresses identity-specific features while maintaining facial attributes such as expression, pose, and lighting, producing visually realistic and diverse results without output collapse or noticeable artifacts.
Facial Identity Editing: Towards Effective De-Identification
We introduce a new method for face de-identification using a frozen diffusion model. In contrast to previous methods that carefully design and train a generative model, we reformulate face de-identification as an identity editing task and employ a pretrained unconditional diffusion model. Also, unlike previous facial image editing approaches that aim to preserve the identity and change only the requested attributes, we aim to shift the identity while preserving everything else. This approach is highly efficient because no part of the diffusion model needs to be constructed or trained for the identity shift. To the best of our knowledge, this is the first work to perform face de-identification through image editing. Ultimately, our findings, supported by both qualitative and quantitative results, show that image editing can effectively achieve de-identification.
We present an overview of our proposed training-free unlearning-based de-identification method. Our approach consists of two key stages: identity boundary search and h-space projection. In the identity boundary search stage, we collect latent representations of the same identity together with generated target identity samples and learn a semantic identity boundary using a lightweight linear Support Vector Machine. This boundary captures the direction that separates identity-specific features from non-identity attributes in the diffusion model’s latent space.
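The boundary-search stage can be sketched as fitting a linear SVM on latent codes. This is an illustrative stand-in, not the paper's implementation: the Gaussian clusters below substitute for real denoiser bottleneck (h-space) codes of source-identity and generated target-identity samples, and a subgradient-descent hinge-loss fit substitutes for an off-the-shelf SVM solver.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Stand-in h-space codes for the two identity groups.
h_source = rng.normal(size=(100, d)) + 1.0
h_target = rng.normal(size=(100, d)) - 1.0
X = np.vstack([h_source, h_target])
y = np.concatenate([np.ones(100), -np.ones(100)])

# Linear SVM fitted by subgradient descent on the regularised hinge loss.
w, b = np.zeros(d), 0.0
lam, lr = 1e-3, 0.05
for _ in range(300):
    margins = y * (X @ w + b)
    viol = margins < 1  # samples violating the margin
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

n = w / np.linalg.norm(w)  # the semantic identity boundary normal
acc = ((X @ w + b) * y > 0).mean()
print(f"boundary accuracy: {acc:.2f}")
```

The unit normal `n` is the direction that separates identity-specific features from non-identity attributes; it is the only quantity the later editing stage needs.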
In the h-space projection stage, we perform identity editing by projecting the source latent representation along the learned boundary direction at the semantic mixing step of the diffusion process. This operation effectively removes identity-related information while preserving non-identity attributes such as pose, expression, and facial structure, without requiring any modification or retraining of the diffusion model.
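The projection itself is a one-line linear operation once the boundary normal is known. A minimal sketch, assuming a unit boundary normal `n` from the boundary-search stage and a hypothetical `strength` parameter; in the real method the edited code would be injected back into the denoiser at the semantic mixing step of the diffusion process.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

n = rng.normal(size=d)
n /= np.linalg.norm(n)      # unit normal of the learned identity boundary
h_src = rng.normal(size=d)  # source h-space code at the mixing step

def project_identity(h, n, strength=1.0):
    """Remove the component of h along the boundary normal.

    strength=1 lands exactly on the boundary hyperplane; larger values
    push the code past it, toward the opposite identity side.
    """
    return h - strength * (h @ n) * n

h_edit = project_identity(h_src, n)
print("signed distance before/after:", float(h_src @ n), float(h_edit @ n))
```

Because the edit only alters the component along `n`, everything orthogonal to the identity direction (pose, expression, structure) passes through unchanged, which is why no retraining of the diffusion model is needed.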
This figure compares qualitative de-identification results across Base, Hybrid, and Synthetic settings, where identity boundaries are learned using different combinations of real and generated source–target identity samples.
Compared to Base and Hybrid, the Synthetic setting achieves more effective identity removal while better preserving facial realism and non-identity attributes, demonstrating the advantage of using fully generated identity samples for stable boundary estimation.
Diffusion-Based Identity Removal
With machine unlearning becoming increasingly important, our approach focuses on selectively removing specific identities from a pre-trained diffusion model, refining the pre-trained model without training from scratch. Our Identity Conditional Diffusion Model (ID Conditional DM) precisely eliminates unwanted identities while maintaining other important features, preventing the model from generating images associated with the target identity. Moreover, our method provides clear visual insights into the unlearning process, demonstrating its efficacy and the underlying mechanisms that enable the selective removal of identity features. This contributes to a more secure and privacy-conscious framework in machine learning applications, offering a practical solution for managing sensitive information. This line of work on diffusion-based identity removal is ongoing.
Seangmin Lee, Seahwan Heo, Jiye Won, Jinhyeong Park and Soon Ki Jung, Training-Free Face De-identification via Pose Aligned Face Component Swapping, 2025 8th Artificial Intelligence and Cloud Computing Conference (AICCC 2025), (2025.12.20 ~ 2025.12.22)
Jinhyeong Park, Seangmin Lee, Muhammad Shaheryar and Soon Ki Jung, Facial Identity Editing: Towards Effective De-Identification, The IEEE International Conference on Image Processing (ICIP 2025), (2025.09.14 ~ 2025.09.17)
Seangmin Lee, Jinhyeong Park and Soon Ki Jung, Facial Attribute Editing with Diffusion Models using Data-Efficient SVMs, The IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2025), (2025.08.11 ~ 2025.08.13)
Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, Black Hole-Driven Identity Absorbing in Diffusion Models, The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2025), (2025.06.11 ~ 2025.06.15)
Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, Unlearn and Protect: Selective Identity Removal in Diffusion Models for Privacy Preservation, The ACM Symposium on Applied Computing (SAC 2025), (2025.03.31 ~ 2025.04.04)
Jinhyeong Park, Muhammad Shaheryar, Seangmin Lee and Soon Ki Jung, Navigating h-space for Multi-Attribute Editing in Diffusion Models, The International Conference on Artificial Intelligence in Information and Communication (ICAIIC 2025), (2025.02.18 ~ 2025.02.21)
Seangmin Lee, Jinhyeong Park, Yoonsuk Kwak and Soon Ki Jung, Analysis on Midpoint Estimation for Identity Loss Observation, The 13th International Conference on Smart Media and Applications (SMA 2024), (2024.12.18 ~ 2024.12.22)
Jinhyeong Park, Seangmin Lee, Lamyanba Laishram and Soon Ki Jung, Analysis on Diffusion Model for Face De-identification, The 7th International Conference on Culture Technology and Applications (ICCT 2024), (2024.10.23 ~ 2024.10.26)
Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, IDDiffuse: Dual-Conditional Diffusion Model for Enhanced Facial Image Anonymization, Asian Conference on Computer Vision (ACCV), (2024.12.08 ~ 2024.12.12)
Muhammad Shaheryar, Jong Taek Lee, Soon Ki Jung, Selective Noise-Aided Machine Unlearning with Deep Feature Visualization, International Symposium on Visual Computing (ISVC), (2024.10.21 ~ 2024.10.23)
Lamyanba Laishram, Jin Hyeong Park and Soon Ki Jung, Real-Time Privacy-Preserving Surveillance Framework Using UAVs: Face De-Identification with Synthetic Faces, The 20th International Conference on Multimedia Information Technology and Applications (MITA 2024), (2024.07.23 ~ 2024.07.26)
Lamyanba Laishram, Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, Toward a Privacy-Preserving Face Recognition System: A Survey of Leakages and Solutions, ACM Computing Surveys, ISSN. 0360-0300, 2025 (2025.02.10), JCR: 99.7 (Q1)
Lamyanba Laishram, Jong Taek Lee and Soon Ki Jung, Face De-Identification using Face Caricature, IEEE Access, Vol. 12, pp. 19344-19354, ISSN. 2169-3536, 2024 (2024.01.22), JCR: 54.1 (Q2)
Muhammad Shaheryar, Jun Hyeok Jang, Jong Taek Lee and Soon Ki Jung, Targeted Forgetting Noise-Aided Machine Unlearning with Deep Feature Visualization, The International Workshop on Frontiers of Computer Vision (IW-FCV), (2024.02.19 ~ 2024.02.21)
Lamyanba Laishram, Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, High-Quality Face Caricature via Style Translation, IEEE Access, Vol. 11, pp. 138882-138896, ISSN. 2169-3536, 2023 (2023.12.07), JCR: 54.1 (Q2)
Muhammad Shaheryar, Lamyanba Laishram, Jun Hyeok Jang, Jong Taek Lee and Soon Ki Jung, Learn to Unlearn: Targeted Unlearning in ML, 6th International Conference on Culture Technology (ICCT 2023), (2023.12.01 ~ 2023.12.04)
Muhammad Shaheryar, Lamyanba Laishram, Jong Taek Lee and Soon Ki Jung, Latent Space Navigation for Face Privacy: A Case Study on the MNIST Dataset, The 18th International Symposium on Visual Computing (ISVC), (2023.10.16 ~ 2023.10.18)
Lamyanba Laishram, Muhammad Shaheryar, Jong Taek Lee and Soon Ki Jung, A Style-based Caricature Generator, The 29th International Workshop on Frontiers of Computer Vision (IW-FCV 2023), (2023.02.20 ~ 2023.02.23)
Muhammad Shaheryar, Lamyanba Laishram, Jong Taek Lee and Soon Ki Jung, Multi-Attributed Face Synthesis for One-Shot Deep Face Recognition, The 29th International Workshop on Frontiers of Computer Vision (IW-FCV 2023), (2023.02.20 ~ 2023.02.23)
Lamyanba Laishram, Md. Maklachur Rahman and Soon Ki Jung, Challenges and Applications of Face Deepfake, The 27th International Workshop on Frontiers of Computer Vision (IW-FCV 2021), (2021.02.22 ~ 2021.02.23)