Research Statement
What is Embedded Artificial Intelligence (EAI) and Embedded Deep Intelligence?
Confluence of Embedded Systems and Artificial Intelligence
With the rapid advance of computing technologies and the surge of embedded, mobile, and IoT (Internet of Things) devices, an ever-growing share of data is created by widespread, geographically distributed devices. For example, embedded devices are expected to generate 45% of the 40 zettabytes of global internet data in 2024.
Meanwhile, Artificial Intelligence (AI), defined as intelligence exhibited by machines, is thriving thanks to breakthroughs in machine learning algorithms such as deep neural networks (DNNs). Fueled by the billions of bytes of data generated at embedded devices, DNNs excel at solving complex machine learning problems, e.g., autonomous driving, natural language processing, and healthcare applications.
Because AI is functionally necessary for quickly analyzing vast volumes of data and extracting insights, there is a strong demand to integrate embedded devices and AI. This demand has given birth to a new paradigm called Embedded Artificial Intelligence (EAI), which performs intelligent tasks directly on the device instead of offloading massive amounts of data from the device to the cloud.
Embedded Artificial Intelligence (EAI) is not a simple combination of embedded systems and AI.
The subject of Embedded Artificial Intelligence is vast and enormously sophisticated, covering many concepts and technologies that are interwoven in complicated ways.
Currently, there is no formal, widely acknowledged definition of Embedded Artificial Intelligence, so some researchers have put forward their own. For example, some argue that the scope of Embedded Artificial Intelligence should not be restricted to running AI models solely on cloud servers or solely on devices, but should also include device-cloud collaboration. They define six levels of Embedded Intelligence, from cloud-device co-inference (level 1) to all on-device (level 6).
In our lab, we research all levels of Embedded Artificial Intelligence (from level 1 to level 6). Currently, we focus on all on-device machine learning on embedded systems, i.e., level-6 Embedded Artificial Intelligence, especially for deep neural networks, which we call Embedded Deep Intelligence.
Benefits of On-Device Machine Learning enabled by EAI.
Today's cloud-based offloading solutions for machine learning date back to the days when Wireless Sensor Networks (WSNs) were deployed to collect data from sensor nodes, only to be analyzed later on a remote base station. Compared to the sensor motes of those WSNs, today's embedded systems are far more advanced in terms of CPU and memory, and their energy efficiency has improved by several orders of magnitude.
For instance, the latest mixed-signal microcontrollers from Texas Instruments (i.e., the TI MSP430 series) come with up to a 16-bit/25 MHz CPU, 512 KB of flash memory, 66 KB of RAM, and 256 KB of non-volatile FRAM, specifications comparable to the 16-bit Intel x86 microprocessors of the early 1980s that ran MS-DOS. These devices are quite capable of executing simple machine learning workloads that perform on-device classification of sensor data as well as training of the model.
In general, there are several advantages of on-device machine learning over relaying data to a base station:
Data Transmission Cost and Latency. Data communication between a device and a base station introduces delays and increases the energy cost per transmitted bit. Backscatter communication can lower the energy cost, but the dependency on an external entity and the unpredictable delay of wireless communication still remain, which we want to avoid by design.
Privacy and Security. Private and confidential data, such as health vitals from a wearable device, can be safely learned on-device without exposing it to external entities. Security problems caused by side-channel and man-in-the-middle attacks are avoided by design when sensitive data is processed on-device.
Precision Learning and Resource Management. Many human-in-the-loop machine learning applications running on wearable and implantable systems benefit from run-time adaptation, since different people have different preferences and expectations for the same application. On-device learning helps a system adjust itself at run time to satisfy each individual's needs and to optimize its own resource management.
Adaptability and Lifelong Learning. Lifelong learning is an emerging concept in robotics and autonomous systems whose vision is to create intelligent machines that learn and adapt throughout their lifetime. On-device machine learning enables true lifelong learning by freeing these devices from being stationary and tethered to power sources, making them mobile, ubiquitous, and autonomous.
Why do we research Embedded Artificial Intelligence (EAI)?
Embedded systems are everywhere; they are the dominant computing platform of today.
IDC predicts that by 2025 there will be 55.7 billion connected embedded and IoT devices worldwide, 75% of which will be connected to an IoT platform. It estimates that data generated by embedded devices will grow from 18.3 ZB in 2019 to 73.1 ZB by 2025.
Embedded systems (e.g., smartphones, IoT devices, robots, airplanes, autonomous vehicles, drones, smart appliances, etc.) far outnumber general-purpose computer systems (e.g., personal computers, desktops, laptops, workstations, servers, etc.), roughly 97% vs. 3%, encompassing a broad range of applications.
Every time you look at your watch, answer the phone, take a picture, or turn on the TV, you are interacting with an embedded system. Embedded systems are also found in cars, airplanes, robots, and so on.
Embedded Artificial Intelligence (EAI) is the key enabler that will bring the benefits of artificial intelligence into many people's real lives; today, those benefits remain largely confined to research settings.
Today's state-of-the-art artificial intelligence technology requires prohibitive computing resources, such as high-end GPUs and large memory, that are available only to a limited set of entities, e.g., large corporations or research labs.
On the other hand, many ordinary people only have small devices with very limited computing capability (e.g., a low-end processor, small memory, a limited battery), such as smartphones, that can hardly execute current artificial intelligence algorithms such as deep learning.
Therefore, by effectively bringing artificial intelligence technology to the enormous number of embedded systems easily found around us, the potential of artificial intelligence can be truly realized in real life. It will democratize artificial intelligence, disseminating its benefits in the form of commodity embedded devices to the public, who lack the expertise and cannot afford the prohibitive computing infrastructure (e.g., GPU machines) required by today's resource- and compute-intensive machine learning algorithms.
The data generated by embedded devices is enormous, creating new opportunities to improve everyday life close to where the data originates, without relying on faraway data centers. Such on-device learning with instant data access will address many challenges of today's artificial intelligence, such as excessive energy consumption, high latency, security, and privacy.
Embedded Artificial Intelligence (EAI) will facilitate collaborative learning on a large scale.
Phase one) Embedded Artificial Intelligence (EAI) will enable billions of computing devices to learn, evolve, and adapt by themselves without requiring help from other systems.
Phase two) When those self-learning devices are connected together, they will be able to facilitate collective intelligence by learning from each other and exchanging the knowledge learned on each device, providing better insights and making better decisions.
Phase three) Finally, they will autonomously continue such collaborative learning without human involvement, accelerating the collective intelligence that will provide unimaginable insights.
What is the goal of our lab?
We pursue excellence in research in computer science and artificial intelligence.
We relentlessly push forward state-of-the-art science and technology via disruptive ideas and game-changing perspectives.
We imaginatively turn our ideas into meaningful artifacts (e.g., software, hardware, papers, etc.) that can solve real problems in the world, which makes our research practical and useful.
We actively communicate with peer researchers worldwide, facilitating discussion and the progress of research in the community.
We make the world a better place by making real impacts with our research.
We contemplate how our research can improve people's lives, not only academically but also socially and practically.
We help people solve their problems by making intellectual contributions to society through our original, innovative, and sound ideas.
We do our best to explain and publicize our work to the public so that they can correctly appreciate and utilize our research as needed.
We collaborate with and learn from each other when solving challenging problems.
We enjoy discussing not only with lab members who share similar ideas and thoughts but also with people outside the field who can introduce fresh ideas and points of view.
We help each other as team members, since no single person has all the abilities necessary to conduct successful research.
We are unafraid of conducting interdisciplinary work with people from various backgrounds, pursuing consilience and convergence across fields of study.