Goal: Designing Low-Power, High-Efficiency AI Semiconductors for On-Device Platforms
We focus on engineering AI chips that overcome the strict power, area, and memory constraints inherent in mobile and edge environments. Our research involves architecting lightweight hardware logic and specialized structures to enable seamless, real-time AI inference directly on the device.
[ISSCC 2026] Sangjin Kim, Jungjun Oh, Byeongcheol Kim, Yuseon Choi, Gwangtae Park, Hoi-Jun Yoo
“Revolver: Low-Bit GenAI Accelerator for Distilled-Model and CoT with Phase-Aware-Quantization and Rotation-Based Integer-Scaled Group Quantization,” in IEEE International Solid-State Circuits Conference (ISSCC).
[ISSCC 2026] Sangwoo Ha, Jingu Lee, Youngjin Moon, Sunjoo Hwang, Wooyoung Jo, Gwangtae Park, Sangjin Kim, Soyeon Um, Junha Ryu, Yurim Jo, Hoi-Jun Yoo
“SMoLPU: 122.1μJ/Token Sparse MoE-based Speculative Decoding Language Processing Unit with Adaptive-Offload NPU-CIM Core,” in IEEE International Solid-State Circuits Conference (ISSCC).
[ISSCC 2025] Sangjin Kim, Jungjun Oh, Jeonggyu So, Yuseon Choi, Sangyeob Kim, Dongseok Im, Gwangtae Park, Hoi-Jun Yoo
“EdgeDiff: 418.4mJ/inference Multi-modal Few-step Diffusion Model Accelerator with Mixed-Precision and Reordered Group-Quantization,” in IEEE International Solid-State Circuits Conference (ISSCC).
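Several of the accelerators above build on group quantization, where weights are split into small groups that each carry their own scale factor so that low-bit integers track local magnitude variation. The sketch below illustrates the general idea in NumPy; the group size, bit width, and rounding scheme are illustrative assumptions, not the hardware formats used in these chips.

```python
import numpy as np

def group_quantize(x, group_size=128, n_bits=4):
    """Quantize a 1-D float vector to signed low-bit integers with a
    per-group scale. Per-group scales keep quantization error low when
    magnitudes vary across the tensor -- the core idea behind group
    quantization (exact on-chip formats differ and are not reproduced).
    Assumes len(x) is a multiple of group_size.
    """
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 7 for signed INT4
    groups = np.asarray(x, dtype=np.float32).reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)  # guard all-zero groups
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def group_dequantize(q, scales):
    """Reconstruct floats by rescaling each group."""
    return (q.astype(np.float32) * scales).ravel()
```

With round-to-nearest, the reconstruction error per element is bounded by half of that element's group scale, which is why smaller groups (at the cost of more scale storage) give tighter accuracy.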