This research focuses on developing novel memory devices for In-Memory Computing (IMC) systems, a promising alternative to the traditional von Neumann architecture. As the demand for computing performance grows, conventional architectures face critical limitations in energy efficiency, speed, and scalability due to the physical separation between logic and memory units.
To overcome these challenges, we explore highly reliable memory devices optimized for IMC applications, such as DRAM cells and flash memory cells. DRAM offers high speed and excellent endurance, making it suitable for applications that require frequent updates and rapid data access. Flash memory, on the other hand, provides nonvolatility, high density, and low standby power, which are advantageous for building energy-efficient and compact IMC architectures. Accordingly, our research includes designing novel memory cells, improving device-level reliability, and establishing integration schemes that enable IMC systems built on these devices.
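To make the IMC principle above concrete, the following toy sketch (a minimal model under assumed parameters, not a description of our fabricated devices) shows how a memory array programmed to analog conductance states performs a vector-matrix multiplication in place: read voltages drive the rows, each cell contributes a current G·V by Ohm's law, and the column currents sum by Kirchhoff's current law. The conductance window and read-noise level are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy crossbar: weights stored as cell conductances (in siemens).
# The 1 uS - 50 uS conductance window and the read-noise level are
# illustrative placeholders, not measured device parameters.
G_MIN, G_MAX = 1e-6, 50e-6
weights = rng.uniform(G_MIN, G_MAX, size=(4, 3))  # 4 input rows x 3 output columns

def crossbar_mac(v_in, conductances, read_noise=0.02):
    """Analog multiply-accumulate: column current j = sum_i G[i, j] * V[i]."""
    ideal = v_in @ conductances                          # Kirchhoff current summation
    noise = rng.normal(0.0, read_noise * np.abs(ideal))  # simple multiplicative read noise
    return ideal + noise

v_in = np.array([0.2, 0.0, 0.1, 0.2])  # row read voltages (V)
print("column currents (A):", crossbar_mac(v_in, weights))
print("ideal result    (A):", v_in @ weights)
```

In a physical IMC macro the column currents would be digitized by peripheral sense amplifiers or ADCs; the point here is only the mapping from stored conductance states to in-place computation.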
In parallel, we investigate system-level integration strategies to seamlessly combine Flash memory arrays, logic components, and neuromorphic elements into a unified IMC architecture. This includes both device-circuit co-design and the development of compact, CMOS-compatible fabrication processes that enable tight integration.
Ultimately, the goal is to build efficient and scalable IMC systems capable of solving complex combinatorial optimization problems and accelerating large-scale artificial intelligence workloads. This interdisciplinary research spans device physics, circuit design, hardware-algorithm co-optimization, and the fabrication of novel devices for IMC systems.
[1] D. Kwon et al., "Reconfigurable Neuromorphic Computing Block through Integration of Flash Synapse Arrays and Super-Steep Neurons," Science Advances, 2023.
In-Memory Computing for Solving Hard Computing Problems, 2025.2 - 2027.1
As conventional computing architectures approach their physical and performance limits, unconventional computing paradigms are emerging as promising solutions to complex, large-scale computational problems. This research explores innovative electronic devices that can enable new forms of computation beyond the traditional von Neumann framework.
We investigate various unconventional approaches, such as:
In-Sensor Computing, where sensing, memory, and processing functionalities are integrated within the same chip to achieve ultra-low-latency and energy-efficient systems, particularly for real-time signal processing tasks.
Probabilistic Computing, which leverages the intrinsic stochastic behavior of emerging devices to perform efficient optimization.
Quantum-Inspired Computing, which adapts algorithms from quantum computing, such as quantum adiabatic annealing, to classical hardware in order to efficiently explore large solution spaces, especially for combinatorial optimization (a minimal annealing sketch follows this list).
Neuromorphic Computing, which emulates the structure and function of the human brain to perform parallel and energy-efficient information processing, suitable for tasks like pattern recognition and adaptive learning.
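To make the annealing-based paradigms above concrete (see the note in the Quantum-Inspired Computing item), here is a minimal software sketch of simulated annealing on an Ising energy of the form E(s) = -1/2 * s^T J s, the same kind of objective that probabilistic and quantum-inspired annealing hardware minimizes through stochastic device dynamics. The problem size, random couplings, and cooling schedule are arbitrary illustrative choices, not a model of any specific chip.

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_energy(spins, J):
    """E(s) = -1/2 * s^T J s for a symmetric coupling matrix J with zero diagonal."""
    return -0.5 * spins @ J @ spins

def simulated_annealing(J, steps=20000, t_start=5.0, t_end=0.05):
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=n)
    energy = ising_energy(spins, J)
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / steps)    # geometric cooling schedule
        i = rng.integers(n)
        delta = 2.0 * spins[i] * (J[i] @ spins)                  # energy cost of flipping spin i
        if delta <= 0 or rng.random() < np.exp(-delta / temp):  # Metropolis acceptance
            spins[i] *= -1
            energy += delta
    return spins, energy

# Random symmetric couplings stand in for a mapped optimization problem.
n = 16
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

best_spins, best_energy = simulated_annealing(J)
print("final energy:", best_energy)
```

Hardware annealers replace the explicit random-number draws and the cooling loop with intrinsic device stochasticity and parallel analog updates, but the energy landscape being descended is the same.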
These new computing paradigms are being applied to solve challenging problems such as the Traveling Salesman Problem (TSP) and the K-Satisfiability (K-SAT) problem, which are central to many industrial and scientific applications.
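As a worked illustration of how such a problem is prepared for an annealer, the toy sketch below encodes a small, made-up 3-SAT instance as a penalty function that counts unsatisfied clauses; minimizing this penalty (by exhaustive search here, by annealing hardware at scale) solves the instance. The clauses are invented purely for illustration.

```python
from itertools import product

# Toy 3-SAT instance (invented for illustration): each clause is a tuple of
# signed variable indices, e.g. (1, -2, 3) means (x1 OR NOT x2 OR x3).
clauses = [(1, -2, 3), (-1, 2, -3), (-1, 2, 3), (1, -2, -3)]

def unsatisfied(assignment, clauses):
    """Count the clauses violated by a 0/1 assignment (assignment[0] is x1)."""
    def literal(lit):
        value = assignment[abs(lit) - 1]
        return value if lit > 0 else 1 - value
    return sum(1 for clause in clauses if not any(literal(l) for l in clause))

# Exhaustive search is fine for three variables; an annealer would minimize the
# same penalty function over an exponentially larger search space.
best = min(product([0, 1], repeat=3), key=lambda a: unsatisfied(a, clauses))
print("best assignment:", best, "unsatisfied clauses:", unsatisfied(best, clauses))
```

A TSP instance is handled analogously: the tour constraints and the total distance are folded into a single energy function whose minimum encodes the optimal route.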
Our goal is to co-develop novel device technologies, architectures, and algorithms that work synergistically to unlock the full potential of unconventional computing. This interdisciplinary research spans device physics, machine learning, circuit design, and computational theory.
[1] M. Jiang et al., "Efficient combinatorial optimization by quantum-inspired parallel annealing in analogue memristor crossbar," Nature Communications, 2023.
In-Memory Computing for Solving Hard Computing Problems, 2025.2 - 2027.1
This research focuses on memory technology, particularly on reliable memory devices, cell structures, array architectures, and system-level integration. As memory plays a central role in both conventional and emerging computing systems, innovations in this area are essential for improving speed, density, energy efficiency, and functionality.
We investigate a wide range of memory types, from traditional volatile memories such as SRAM and DRAM to non-volatile memories such as Flash, FeFET, and memristors. Each technology offers unique trade-offs in retention, speed, and scalability, and is selected based on the target application.
Our work includes:
Novel DRAM Cell Designs that improve sensing margin, reduce leakage, and enhance scalability beyond the limitations of conventional capacitor-based DRAM structures (a first-order estimate of sensing swing and retention is sketched after this list).
Development of 3D Vertical NAND-Flash and AND-Flash Arrays, which enable high-density integration by stacking memory layers vertically. These architectures are essential for next-generation storage and in-memory computing platforms.
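To give a feel for the constraints behind the DRAM item above, the back-of-the-envelope sketch below evaluates the standard charge-sharing relation dV_BL = (V_cell - V_pre) * C_s / (C_s + C_BL) together with a simple retention estimate t ≈ C_s * dV_tol / I_leak. Every numerical value is an illustrative placeholder, not a parameter of our cells or processes.

```python
# First-order DRAM cell estimates; all numbers below are illustrative
# placeholders rather than real device parameters.
C_S = 25e-15      # storage capacitance (F)
C_BL = 150e-15    # bit-line capacitance (F)
VDD = 1.1         # supply voltage (V)
V_PRE = VDD / 2   # bit-line precharge level (V)
I_LEAK = 1e-15    # cell leakage current (A)
DV_TOL = 0.2      # tolerable droop of the stored voltage before sensing fails (V)

# Charge sharing: bit-line swing seen by the sense amplifier when a cell
# storing VDD is read onto a half-VDD precharged bit line.
dv_bl = (VDD - V_PRE) * C_S / (C_S + C_BL)

# Retention: time for the leakage current to discharge the cell by DV_TOL.
t_ret = C_S * DV_TOL / I_LEAK

print(f"bit-line sensing swing:  {dv_bl * 1e3:.1f} mV")
print(f"retention-time estimate: {t_ret:.1f} s")
```

The estimate makes plain why improving the sensing margin and cutting leakage, as targeted above, translate directly into scalability: shrinking the storage capacitor reduces both the bit-line swing and the retention time.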
By bridging innovations from device-level fabrication processes and structures to system-level architecture, we aim to develop next-generation memory solutions that support future computing paradigms, including in-memory computing, neuromorphic processing, and data-centric high-performance systems, as well as conventional memory applications.
In-Memory Computing for Solving Hard Computing Problems, 2025.2 - 2027.1