This project analyses the performance of heterogeneous edge devices when executing artificial intelligence (AI) workloads. With the increasing demand for real-time processing in applications such as computer vision, speech processing, and intelligent systems, there is a growing need to move computation from centralised cloud environments to edge devices. However, edge devices differ significantly in computational power, memory capacity, and energy constraints, making it difficult to determine their suitability for different AI tasks. The project addresses this challenge by evaluating how various edge platforms handle a range of workloads, including image recognition, speech-to-text, text-to-speech, and large language model (LLM) inference. The study executes these workloads on multiple devices, such as single-board computers, microcontrollers, mobile devices, and AI-accelerated platforms. Performance is assessed using key metrics including processing time, memory usage, and CPU utilisation, along with efficiency measures such as time-to-first-token and tokens per second for language models. The outcome is a comparative analysis that highlights the strengths and limitations of the different edge devices, providing practical guidance for selecting appropriate hardware for AI deployment in resource-constrained environments.
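The LLM metrics named above (time-to-first-token, tokens per second) can be captured with a small timing harness around any streamed token source. The sketch below is illustrative, not the project's actual measurement code; the `measure_stream` name and the idea of consuming tokens from a generic iterator are assumptions.

```python
import time
from dataclasses import dataclass
from typing import Iterable

@dataclass
class LLMMetrics:
    time_to_first_token: float  # seconds from request start to first token
    tokens_per_second: float    # decode rate after the first token arrives
    total_tokens: int

def measure_stream(tokens: Iterable[str]) -> LLMMetrics:
    """Time a token stream (e.g. from a local LLM runtime) and derive
    time-to-first-token and tokens-per-second.

    This is a hypothetical harness: it only assumes the runtime exposes
    generated tokens as an iterable, which most streaming APIs can provide.
    """
    start = time.perf_counter()
    first = None
    count = 0
    for _ in tokens:
        now = time.perf_counter()
        if first is None:
            first = now  # moment the first token arrived
        count += 1
    end = time.perf_counter()
    if count == 0 or first is None:
        return LLMMetrics(0.0, 0.0, 0)
    decode_time = end - first
    # (count - 1) inter-token intervals occur after the first token
    tps = (count - 1) / decode_time if decode_time > 0 else float("inf")
    return LLMMetrics(first - start, tps, count)
```

The same wall-clock approach extends to the non-LLM workloads (image recognition, speech-to-text), where total processing time per input is the analogous metric.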
• Evaluate performance of edge devices
• Compare devices across AI workloads
• Analyse resource usage and efficiency
• Raspberry Pi / Jetson / ESP32
• Ollama / LM Studio
• Python / AI models
• Performance comparison
• Insights into device suitability
• Recommendations for Edge AI deployment