Invited Talks
Prof. Kay Römer - TU Graz, Austria
Ultra-wideband (UWB) wireless transceivers transmit very short pulses of electromagnetic waves; by measuring the time of flight of these signals between a transmitter and a receiver, one can estimate their distance with an accuracy of a few centimetres. In many environments, however, the line of sight between transmitter and receiver is obstructed, which can lead to substantial inaccuracies: the signals then reach the receiver via reflections, or travel through or along the surface of obstacles at different propagation speeds. A particular challenge is posed by occlusions caused by humans, who either wear the transmitter or receiver or block the line of sight. In this talk I will first present the effects that obstacles, and humans in particular, have on distance sensing. I will then present several embedded machine-learning techniques to detect the presence of (bio-)obstacles and to automatically correct the resulting distance measurement errors.
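The ranging principle described above can be illustrated with a minimal sketch (names and numbers are illustrative assumptions, not taken from the talk): distance is the speed of light times the measured flight time, so a non-line-of-sight detour directly inflates the range estimate.

```python
# Illustrative sketch of UWB time-of-flight ranging (hypothetical values).
# Distance = c * time_of_flight; an obstruction that forces the signal
# onto a longer reflected path inflates the measured flight time and
# hence the estimated distance.

C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(tof_seconds: float) -> float:
    """Convert a measured one-way time of flight to a distance in metres."""
    return C * tof_seconds

# Line of sight: a 10 m separation corresponds to ~33.4 ns of flight time.
los_tof = 10.0 / C

# An obstruction forcing the signal onto a 0.5 m longer reflected path
# adds ~1.7 ns, i.e. a 0.5 m ranging error -- an order of magnitude above
# the few-centimetre accuracy UWB achieves under line-of-sight conditions.
nlos_tof = 10.5 / C

error_m = tof_to_distance(nlos_tof) - tof_to_distance(los_tof)
print(round(error_m, 3))  # 0.5
```

Nanosecond-scale timing errors thus translate into decimetre-scale ranging errors, which is why detecting and compensating for obstructions matters.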
Prof. Muhammad Shafique - NYU Abu Dhabi, United Arab Emirates
Modern Machine Learning (ML) and Artificial Intelligence (AI) approaches, such as Deep Neural Networks (DNNs) and Large Language Models (LLMs), have improved tremendously over the past years, achieving high accuracy on tasks like image classification, object detection, natural language processing, medical data analytics, and generative AI. However, these DNNs/LLMs incur huge processing, memory, and energy costs, posing significant challenges for building energy-efficient tinyML, Edge-AI, and Embodied-AI solutions on resource- and energy-constrained devices, for a wide range of applications from Smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT) to robotics, often operating in unpredictable and harsh environments. Moreover, in an era of growing cyber-security threats and nano-scale devices, AI/ML functions face new types of attacks and reliability threats, requiring novel design principles for robust ML.
In my eBRAIN and iCAS Labs at New York University (NYU Abu Dhabi, UAE and NYU Tandon, USA), I have been extensively investigating the foundations of next-generation energy-efficient, dependable, and secure AI/ML computing systems, addressing the above-mentioned challenges across different layers of the hardware and software stacks. This talk will present design challenges, advanced techniques, and cross-layer frameworks for building highly energy-efficient and robust cognitive systems for tinyML, Edge-AI, and Embodied-AI applications, which jointly leverage optimizations at different layers of the software and hardware stacks and at different design stages (e.g., design-time vs. run-time approaches). These techniques provide crucial steps towards enabling the wide-scale deployment of energy-efficient and secure embedded AI in autonomous systems such as UAVs, UGVs, autonomous vehicles, robotics, IoT healthcare and wearables, Industrial IoT, smart transportation, and smart homes and cities. Towards the end, I will show glimpses of our recent advanced projects on Quantum Machine Learning, Continual Learning, Multimodal LLMs, and Agentic AI.
Dr. Soham Chakraborty - TU Delft, Netherlands
Heterogeneous computing platforms that integrate CPUs, GPUs, and specialized accelerators are rapidly becoming ubiquitous across modern computing domains, particularly in AI and machine learning. Recent advances are pushing these architectures toward shared memory models, enabling more seamless communication across devices and unlocking substantial gains in performance and energy efficiency.
These systems are already demonstrating strong potential in intelligent, safety-critical applications, such as real-time recognition on edge devices. However, their increasing complexity poses significant challenges for reasoning about correctness and reliability.
In this talk, I will present my work on the specification and formal verification of GPU and heterogeneous CPU-GPU programs. I will discuss how programming language techniques and formal methods can provide rigorous guarantees for these systems, paving the way for more trustworthy and robust AI infrastructures.
Serkan Oktem - Philips, Netherlands
Low-power edge AI is often discussed in terms of model efficiency, compression, latency, energy consumption, and benchmark performance. However, deploying AI in industrial and safety-critical environments requires a broader system-level perspective, particularly under the real-time, power, and integration constraints of industrial edge deployments. In practice, the choice of compute platform is not only a question of inference performance, but a multi-dimensional decision involving determinism, system integration, development effort, toolchain maturity, validation strategy, long-term maintainability, lifecycle constraints, cost, update frequency, and the availability of specialized engineering expertise.
This talk presents an industrial perspective on when FPGAs are a suitable choice for low-power edge AI deployment. Rather than focusing on specific applications or benchmark comparisons, it introduces a decision framework for evaluating compute platforms in real systems. This naturally requires a cross-layer perspective, linking model choices, hardware architecture, toolchains, and system constraints. Key considerations include how latency and power are defined and measured at the system level, how hardware and software responsibilities are partitioned, and how trade-offs between flexibility, time-to-market, and long-term sustainability influence platform selection.
The talk aims to provide a balanced view of where FPGAs can offer clear value, particularly in deterministic, tightly integrated, latency- or power-constrained systems, and where they introduce additional complexity. It also highlights the gap between model-centric research prototypes and constraints-driven deployable systems, and outlines opportunities for closer collaboration between academia and industry in developing deployment-aware, trustworthy, and efficient edge AI solutions.
Simeon Kanya - Innatera Nanosystems, Netherlands
Research and engineering in spiking neural networks (SNNs) and neuromorphic AI aim to deliver energy, and often latency, benefits for neural network models and processing in ML applications. Although most algorithmic explorations of neuromorphic neural networks focus on spike quantization to achieve energy efficiency, several other neuromorphic principles, such as synaptic delays, merit equal attention. In particular, synaptic delays hold promise for enabling shallow models that perform on par with their deep counterparts, although facilitating them typically costs additional memory resources. In this keynote we will present such a use-case-driven exploration of the benefits of synaptic delays on an industry-standard MLPerf audio benchmark, present quantitative results, and motivate insights drawn from hardware deployment on Innatera's C1 Pulsar Spiking Neural Processor (SNP).
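The memory trade-off mentioned above can be made concrete with a minimal sketch (an assumed textbook-style structure, not Innatera's implementation): per-synapse delays hold each input spike for a configurable number of timesteps before it reaches the neuron, at the cost of a buffer whose size grows with the number of synapses times the maximum delay.

```python
import numpy as np

# Minimal sketch of per-synapse delays in a spiking layer (hypothetical
# structure for illustration). Each synapse delays its input spike by d
# timesteps via a circular buffer of shape (max_delay + 1, n_synapses),
# which makes the memory cost of supporting delays explicit.

class DelayedSynapses:
    def __init__(self, weights, delays, max_delay):
        self.w = np.asarray(weights, dtype=float)  # one weight per synapse
        self.d = np.asarray(delays, dtype=int)     # per-synapse delay in timesteps
        self.buf = np.zeros((max_delay + 1, len(self.w)))  # spike delay line
        self.t = 0

    def step(self, spikes):
        """Push current input spikes; return the weighted input due now."""
        # Schedule each incoming spike d timesteps into the future.
        rows = (self.t + self.d) % self.buf.shape[0]
        self.buf[rows, np.arange(len(self.w))] += spikes
        # Deliver everything scheduled for the current timestep.
        slot = self.t % self.buf.shape[0]
        out = float(self.w @ self.buf[slot])
        self.buf[slot] = 0.0  # consume this slot
        self.t += 1
        return out

syn = DelayedSynapses(weights=[1.0, 2.0], delays=[0, 2], max_delay=2)
print(syn.step([1, 1]))  # synapse 0 (delay 0) arrives immediately -> 1.0
print(syn.step([0, 0]))  # nothing due this timestep -> 0.0
print(syn.step([0, 0]))  # synapse 1's spike arrives after 2 steps -> 2.0
```

Delays thus let a single shallow layer mix inputs from several past timesteps, which is the mechanism behind the depth-for-memory trade-off the keynote explores.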
Samuel Milton - Sirris, Belgium
The adoption of Artificial Intelligence (AI) in manufacturing environments remains constrained by high computational requirements, integration challenges with legacy systems, and limited demonstrable value for small and medium-sized enterprises (SMEs). Although edge AI offers a promising paradigm for low-latency, energy-efficient inference, its deployment in real industrial contexts, characterized by variability, data scarcity, and operational constraints, remains non-trivial.
The DigiMach project, a cross-border initiative in the Meuse-Rhine region, investigates methodologies for translating edge AI concepts into practical, resource-efficient, and deployable solutions for machining applications. The project emphasizes the integration of lightweight models, domain knowledge, and low-cost sensing to enable localized decision-making under constrained computational environments.
Representative use cases, including tool condition monitoring, process stability analysis, and sensor-based decision support, illustrate how hybrid approaches, combining physics-based understanding with data-driven methods, can achieve robust performance without reliance on large-scale datasets or cloud infrastructure.
Key challenges such as heterogeneity across machine-tool systems, limited labelled data, and the need for explainability and reliability in safety-critical environments are addressed through modular and interpretable frameworks. These approaches support incremental adoption and rapid validation in industrial settings.
Ultimately, the advancement of industrial AI depends not only on model complexity, but on the development of scalable, interpretable, and energy-efficient edge intelligence tailored to real-world manufacturing constraints.