Ferroelectric devices based on doped hafnium oxide (HfO₂) are promising candidates for low-power logic, nonvolatile memory (NVM), and in-memory computing applications because of their compatibility with CMOS technology. Compared with other emerging device options, they offer high endurance, low power consumption, and fast switching speeds.
The negative-capacitance FET (NCFET) has been reported to overcome the fundamental subthreshold-swing limit of conventional CMOS technology (60 mV/dec at room temperature). The internal voltage amplification provided by the ferroelectric layer makes it attractive for low-power circuit applications, and it can deliver a higher ON current and a lower threshold voltage than conventional FETs.
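As a minimal sketch of why a negative ferroelectric capacitance lowers the swing (a simplified capacitance-divider picture; the symbols C_MOS and C_FE are introduced here only for illustration and are not defined elsewhere in this text):

    \[ \mathrm{SS} = \ln(10)\,\frac{k_B T}{q}\,\Big(1 + \frac{C_{\mathrm{MOS}}}{C_{\mathrm{FE}}}\Big) \]

With C_FE < 0 in the negative-capacitance regime (and |C_FE| > C_MOS), the factor in parentheses drops below 1, so SS falls under ln(10)·k_B T/q ≈ 60 mV/dec at 300 K; equivalently, the internal gate node sees an amplified version of the applied gate voltage.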
We have studied the device physics and modeling of these devices for designing next-generation electronic circuits.
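A minimal, illustrative Python sketch of the ferroelectric's negative-capacitance regime, using the static Landau-Khalatnikov relation; the Landau coefficients and film thickness below are placeholder values, not parameters of any device described above:

    import numpy as np

    # Placeholder Landau coefficients for the ferroelectric layer (illustrative only,
    # not calibrated to any device discussed in this text).
    alpha = -2.5e9   # m/F, negative in the ferroelectric phase
    beta  = 6.0e10   # m^5/(F C^2)
    t_fe  = 5e-9     # ferroelectric thickness (m)

    def v_fe(q):
        """Voltage dropped across the ferroelectric for gate charge density q (C/m^2),
        from the static Landau-Khalatnikov relation V_fe = t_fe*(2*alpha*q + 4*beta*q**3)."""
        return t_fe * (2.0 * alpha * q + 4.0 * beta * q**3)

    def c_fe(q):
        """Small-signal ferroelectric capacitance per unit area dQ/dV_fe; negative near q = 0."""
        return 1.0 / (t_fe * (2.0 * alpha + 12.0 * beta * q**2))

    for qi in np.linspace(-0.05, 0.05, 11):   # charge-density sweep (C/m^2)
        print(f"q = {qi:+.3f} C/m^2  V_fe = {v_fe(qi):+.3f} V  C_fe = {c_fe(qi):+.3e} F/m^2")

Around q = 0 the printed C_fe is negative, which is the regime exploited for the internal voltage amplification discussed above.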
Figures: model validation with FeFET I-V data; drain-current noise spectral density (Sid) vs. frequency for the program and erase states; 14 nm NC-FinFET I-V calibration; Sid vs. frequency and Sid vs. Vgs for the NCFET.
ULTRARAM is a non-classical, charge-trapping-based emerging NVM that combines fast switching, non-volatility, high endurance (>10^7 P/E cycles), long retention (>1000 years), and ultra-low switching energy per unit area, challenging the long-held view that a universal memory is unachievable. Unlike the single SiO2 tunnel barrier of flash, its novelty lies in an InAs/AlSb triple-barrier resonant tunneling (TBRT) structure. At zero bias the TBRT stack presents a high electron barrier, while under program/erase pulses (±2.5 V) it allows fast resonant tunneling, giving a switching energy roughly ten times lower than flash. However, ULTRARAM is still at an early stage of development and must overcome several challenges.
We have developed a physics-based model that captures the trapping and de-trapping of charge in the floating gate (FG) through the TBRT physics and uses it to calculate the device characteristics. This model can be used for circuit design with ULTRARAM memory devices.
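A minimal Python sketch of how such a floating-gate charging/discharging model can translate program/erase pulses into a threshold-voltage window; all parameter values and the toy tbrt_current() form are hypothetical placeholders, not the actual compact model:

    import numpy as np

    # Hypothetical parameters for illustration only (not taken from the model above).
    C_CG    = 1e-15     # control-gate to floating-gate coupling capacitance (F)
    VT0     = 0.3       # threshold voltage with an empty floating gate (V)
    I_ON    = 1e-9      # TBRT tunneling current when the resonance is aligned (A)
    I_OFF   = 1e-18     # TBRT leakage at zero/low bias (A) -> long retention
    T_PULSE = 10e-9     # program/erase pulse width (s)

    def tbrt_current(v_pulse):
        """Toy stand-in for the bias-dependent TBRT current: essentially blocking at low
        bias, conducting when a +/-2.5 V program/erase pulse aligns the resonance."""
        return np.sign(v_pulse) * (I_ON if abs(v_pulse) >= 2.5 else I_OFF)

    def apply_pulse(q_fg, v_pulse):
        """Update the floating-gate charge after one pulse: dQ_FG/dt = -I_TBRT(V)."""
        return q_fg - tbrt_current(v_pulse) * T_PULSE

    def threshold_voltage(q_fg):
        """Read out the stored charge as a threshold shift: VT = VT0 - Q_FG / C_CG."""
        return VT0 - q_fg / C_CG

    q = 0.0
    q = apply_pulse(q, +2.5)                 # program pulse
    print("programmed VT =", threshold_voltage(q))
    q = apply_pulse(q, -2.5)                 # erase pulse
    print("erased     VT =", threshold_voltage(q))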
Machine learning (ML) is transforming Electronic Design Automation (EDA) by improving the efficiency, accuracy, and scalability of electronic-system design and optimization. Both foundries and design companies rely on EDA to support their businesses, and device compact models are the key EDA component bridging foundry technology and IC design. ML models, such as neural networks and reinforcement learning, reduce simulation overhead, automate repetitive tasks, and provide predictive insight into design metrics. ML-assisted algorithms can also be used to optimize advanced-node semiconductor devices. When two or more target objectives must be optimized, conventional TCAD-based optimization is a trial-and-error process that demands experience and a deep understanding of device physics. In contrast, our approach is a framework that uses an ML-assisted genetic algorithm to maximize or minimize the target objectives.
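A minimal Python sketch of an ML-surrogate-plus-genetic-algorithm optimization loop of the kind described above; the surrogate() function, design variables, bounds, and objective weights are hypothetical stand-ins (in practice the surrogate would be a neural network trained on TCAD data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical design variables: ferroelectric thickness (nm) and normalized doping.
    BOUNDS = np.array([[5.0, 15.0],
                       [0.1, 1.0]])

    def surrogate(x):
        """Placeholder analytic stand-in for an ML surrogate predicting
        (memory window, polarization) at a design point x; not a calibrated model."""
        t_fe, dop = x
        mw  = 1.0 - 0.01 * (t_fe - 10.0) ** 2 + 0.3 * dop
        pol = 20.0 + 0.5 * t_fe - 8.0 * (dop - 0.5) ** 2
        return mw, pol

    def fitness(x):
        """Scalarize the two objectives with arbitrary weights so the GA can rank designs."""
        mw, pol = surrogate(x)
        return 1.0 * mw + 0.05 * pol

    def ga(pop_size=40, generations=50, mutation=0.1):
        dim = BOUNDS.shape[0]
        pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, dim))
        for _ in range(generations):
            scores = np.array([fitness(x) for x in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]        # selection: keep best half
            kids = []
            while len(kids) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(dim) < 0.5, a, b)          # uniform crossover
                child += mutation * rng.normal(size=dim)               # Gaussian mutation
                kids.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
            pop = np.vstack([parents, kids])
        best = pop[np.argmax([fitness(x) for x in pop])]
        return best, surrogate(best)

    best_x, (mw, pol) = ga()
    print("best design:", best_x, "memory window:", mw, "polarization:", pol)

A Pareto-based multi-objective GA (e.g., NSGA-II) could replace the weighted-sum scalarization when the full trade-off front between objectives is of interest.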
M. Ehteshamuddin, A. Kumar, S. Roy, and A. Dasgupta, "Optimizing Memory Window and Polarization of a Fe-CAP Using Machine Learning Assisted Genetic Algorithm Framework," International Workshop on Physics of Semiconductor Devices (IWPSD), Dec. 2023.