Exploring MRAM and Its Spintronic Derivative Devices for Storage and Emerging Compute Applications
Abstract: Against a backdrop of growing demand for storage and data-intensive applications, the von Neumann computer architecture suffers from throughput limitations, since memory transfer rates lag behind CPU speeds, as well as from the energy cost of transferring information. These limitations can be mitigated through innovative architectures and non-volatile memories (NVM) that enable greater locality in information processing, merging memory and computation functions. MRAM and spintronics-based devices are particularly promising, offering processor-class speed, high endurance, high density, and compatibility with CMOS voltages.
In this presentation, I will review recent advances in MRAM component research for embedded systems, highlighting their contributions to energy-efficient storage and computing. Furthermore, I will discuss how these technologies enable concepts based on neuromorphic architectures and probabilistic computing, which are currently being explored at Spintec laboratory in collaboration with CEA-LETI.
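To make the probabilistic-computing concept concrete, here is a minimal sketch built on a generic "p-bit" abstraction: a stochastic element, such as a low-barrier magnetic tunnel junction, whose time-averaged output follows a sigmoid of its input bias. The code and parameters are illustrative assumptions, not Spintec's actual devices or implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def p_bit(bias, beta=1.0):
        # One probabilistic bit: it fluctuates between 0 and 1, with the
        # probability of reading 1 given by a sigmoid of the input bias
        # (a common abstraction for a low-barrier MTJ driven by spin torque).
        p_one = 1.0 / (1.0 + np.exp(-beta * bias))
        return 1 if rng.random() < p_one else 0

    # The time-averaged output tracks the sigmoid of the bias.
    for bias in (-2.0, 0.0, 2.0):
        samples = [p_bit(bias) for _ in range(10_000)]
        print(f"bias={bias:+.1f}  mean output={np.mean(samples):.3f}")

Coupling many such elements so that their joint fluctuations perform sampling or optimization is the general idea behind probabilistic computing with MRAM-derived devices.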
Trends in EDA
Abstract: The design quality of modern chips depends on the quality of the EDA tools used in the design flow. With the evolution of nanotechnologies, new EDA tools are needed, and some trends in EDA that address the evolution of manufacturing processes will be presented. An important class of EDA tools today is the one devoted to reducing power consumption at all levels of design abstraction; power optimization is fundamental in the IoT world. At the logic and physical levels, the transistor count must be reduced in order to reduce leakage power. In addition, estimation tools and visualization tools are becoming increasingly important in modern design flows.
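As a back-of-the-envelope illustration of why transistor count matters for leakage, a first-order model puts dynamic power at alpha*C*V^2*f per gate and leakage power at V*I_leak per transistor, so leakage grows linearly with device count. All numbers below are illustrative assumptions, not figures from the talk.

    # First-order power model; every constant here is an assumed, illustrative value.
    V_DD = 0.8              # supply voltage (V)
    F_CLK = 1.0e9           # clock frequency (Hz)
    C_SW = 1.0e-15          # average switched capacitance per gate (F)
    ALPHA = 0.1             # average switching activity
    I_LEAK = 5.0e-9         # average leakage current per transistor (A)

    def dynamic_power(n_gates):
        # Classical alpha * C * V^2 * f model, summed over all gates.
        return ALPHA * C_SW * V_DD**2 * F_CLK * n_gates

    def leakage_power(n_transistors):
        # Leakage scales linearly with the number of (powered) transistors.
        return V_DD * I_LEAK * n_transistors

    for n_transistors in (10e6, 100e6, 1e9):
        n_gates = n_transistors / 4        # assume ~4 transistors per gate
        print(f"{n_transistors:>10.0f} transistors: "
              f"dynamic {dynamic_power(n_gates)*1e3:8.1f} mW, "
              f"leakage {leakage_power(n_transistors)*1e3:8.1f} mW")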
Trustworthy AI
Abstract: Nowadays, we are witnessing an increasing adoption of Machine Learning (ML) for solving complex real-world problems. However, despite reports showing that ML models can produce results comparable to, and even superior to, those of human experts, they are often vulnerable to carefully crafted perturbations and prone to bias and hallucinations. Ensuring the safety and trustworthiness of systems enabled by machine learning is a very challenging task. In this talk, I will discuss the challenges we must overcome to build safe and trustworthy ML-enabled systems and present some recent solutions that my team has proposed.
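To show what "carefully crafted perturbations" means in practice, here is a minimal sketch of the well-known fast gradient sign method applied to a toy logistic-regression classifier. It is an illustrative example under assumed weights and step size, not one of the solutions proposed by the speaker's team.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear classifier p(y=1|x) = sigmoid(w.x); weights assumed given.
    d = 50
    w = rng.normal(size=d)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # An input the model classifies confidently as class 1.
    x = w / np.linalg.norm(w)
    y = 1.0

    # Gradient of the cross-entropy loss with respect to the input.
    grad_x = (sigmoid(w @ x) - y) * w

    # Fast gradient sign method: a bounded, worst-case step per coordinate.
    eps = 0.3
    x_adv = x + eps * np.sign(grad_x)

    print("score on clean input:    ", sigmoid(w @ x))
    print("score on perturbed input:", sigmoid(w @ x_adv))

The perturbed input sharply reduces the model's confidence in the correct class, even though each coordinate was changed by at most eps.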
Generative AI at the Edge
Abstract: In this talk, we explore the emerging trends in generative AI, including large language models and text-to-image generators, and the role of transformer-based neural networks at their core. We investigate the distinct attributes of transformers and how they diverge from conventional convolutional neural networks (CNNs). The talk underscores that CNN-optimized accelerators do not scale effectively for transformers. We discuss key requirements for emerging Neural Processing Units (NPUs) to support transformers and generative AI constructs. We use transformer-based vision models, text-to-image generators, and large language models as generative AI examples to illustrate the requirements for efficient mapping onto an NPU.
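To illustrate one reason CNN-optimized accelerators struggle with transformers, the following back-of-the-envelope sketch compares standard MAC-count estimates for a 3x3 convolution layer and a self-attention layer at the same channel width. The layer sizes are illustrative assumptions, not figures from the talk.

    # Rough MAC counts; the quadratic-in-tokens attention term and the n x n
    # score matrix are what CNN-era dataflows are not built for.

    def conv3x3_macs(h, w, d):
        # 3x3 convolution with d input and d output channels.
        return h * w * d * d * 9

    def self_attention_macs(n, d):
        # Q, K, V and output projections: 4*n*d^2.
        # Attention scores (Q K^T) and weighted sum (A V): 2*n^2*d.
        return 4 * n * d * d + 2 * n * n * d

    d = 768                                   # assumed ViT-Base-like width
    for side in (14, 28, 56):
        n = side * side                       # tokens in a patch grid
        print(f"{side:>2}x{side:<2} grid ({n:>4} tokens): "
              f"conv {conv3x3_macs(side, side, d)/1e9:6.2f} GMAC, "
              f"attention {self_attention_macs(n, d)/1e9:6.2f} GMAC")

At small token counts the two are comparable, but as the grid grows the quadratic attention term and its large intermediate score matrix dominate, stressing on-chip memory and bandwidth rather than the dense, weight-stationary MAC arrays that CNN accelerators are optimized for.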