My research interests are in wearable sensing, with an emphasis on ultra-low-power and energy-harvesting wearable devices. My broader vision is to create smart wearable devices that work with little or no maintenance effort (e.g., charging batteries). These "smart" wearables also include ones that exploit unconventional operational modes to demand as little user attention as possible. My past work has focused on using energy harvested from WiFi transmissions to enable early gesture recognition and hand tracking on battery-less wearables. I am currently working on DeepLight, a project in which we use deep-learning techniques to boost the robustness of screen-camera communication against real-world artefacts (e.g., hand motion, ambient environment). I am also working on an extension of WiWear to support multi-device & multi-AP operation. In the future, I will explore further potential of battery-less wearables, e.g., healthcare monitoring devices, and new wireless power transfer mechanisms for implantable devices.
Example of an NLOS condition, which is common in indoor environments.
The Channel Impulse Response (CIR) of the three packets in a ranging transaction on DW1000 sensors.
Ultra-Wideband (UWB) sensors have immense potential for future spatially-aware sensing thanks to their accurate range measurements. However, accurate ranging under severe non-line-of-sight (NLOS) conditions remains a challenging problem. Recent data-driven error-mitigation approaches do not systematically investigate the multiple packets used in a ranging transaction; consequently, relying on a single packet yields sub-optimal performance.
We systematically studied the efficacy of the different packets in a regular ranging transaction, as well as their combinations, for error mitigation. We proposed two data-driven approaches that take multiple packets as input to correct ranging errors, outperforming the state of the art by a significant margin (a further 45% error reduction).
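To illustrate the multi-packet idea, here is a minimal sketch, assuming PyTorch and a hypothetical CIR window length, of a 1-D CNN that stacks the CIRs of the three ranging packets as input channels and regresses the ranging error. It is not the actual model from this work, just a correct instance of the technique:

```python
import torch
import torch.nn as nn

class MultiPacketErrorNet(nn.Module):
    """Regress the ranging error from the CIRs of all three packets in a
    two-way-ranging transaction, stacked as input channels (illustrative)."""
    def __init__(self, cir_len=152):  # hypothetical CIR window length
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3),  # 3 channels: one CIR per packet
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (cir_len // 4), 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # predicted ranging error (metres)
        )

    def forward(self, cirs):  # cirs: (batch, 3, cir_len)
        return self.regressor(self.features(cirs))

# Usage: corrected_range = measured_range - model(cir_window)
```

The key design point is that all three CIRs are fused from the first layer, letting the network exploit correlations across the packets of one transaction rather than correcting from a single packet.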
Figure 1.1. A WiWear system that dynamically assigns AP(s) to serve WiWear devices based on the devices' current positions. The AP(s) charge the devices' super-capacitors and act as gateways to read sensory data from the devices.
Figure 1.2. The 2nd version of the WiWear device (without the harvester, which will be embedded in the wrist strap).
Figure 1.3. A WiWear++ device comprises a main board (µC + RF + WuRx), a boost converter, and many small harvesters that can be integrated into a wrist strap.
I am actively working on an extension of WiWear to support multi-device & multi-AP operation. In such an environment, scheduling both data and energy transmissions is challenging. The problem is more critical for energy-harvesting devices because the AP needs to send control packets to the devices, but keeping a device active in receiving mode drains the stored energy rapidly. To address this problem, I am developing an ultra-low-power downlink communication technique (from a WiFi AP to a WiWear device) using a wake-up receiver. A wake-up receiver works at much lower frequencies and data rates, so it can stay active to detect a special wake-up pattern while consuming only a few µW. However, there are three key challenges in enabling this ultra-low-power channel: (1) using WiFi packets to emulate a non-WiFi signal is not trivial, as the AP must coexist with other WiFi devices; (2) keeping the micro-controller active to read the low-rate data from the wake-up receiver may drain significant energy; and (3) the system must support dynamic scheduling, as a WiWear device might enter or leave the network, or move from the vicinity of one AP to another. The third challenge leads to the multi-AP scheduling problem. Fortunately, combining the Angle-of-Arrival (AoA) estimates from multiple APs enables the system to accurately locate each device. The system then estimates the energy each AP can deliver to that location and assigns the appropriate AP(s) to serve each WiWear device, as sketched below.
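Here is a minimal sketch of that localization and AP-assignment step, assuming NumPy, 2-D geometry, known AP positions, a 10 dB beamforming gain, and a free-space path-loss model at 2.4 GHz; all of these are illustrative choices, not the deployed implementation:

```python
import numpy as np

def locate(ap_pos, aoa_deg):
    """Triangulate a device from per-AP AoA estimates (least squares).
    Each AoA defines a line from an AP; solve for the point closest to all lines."""
    A, b = [], []
    for (x, y), theta in zip(ap_pos, np.radians(aoa_deg)):
        n = np.array([-np.sin(theta), np.cos(theta)])  # normal of the bearing line
        A.append(n)
        b.append(n @ np.array([x, y]))
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

def assign_ap(ap_pos, device_pos, tx_dbm=20.0, beam_gain_db=10.0):
    """Pick the AP whose beam-formed transmission yields the highest received
    power under free-space path loss at 2.4 GHz (illustrative model)."""
    d = np.linalg.norm(np.array(ap_pos) - device_pos, axis=1)
    fspl_db = 20 * np.log10(d) + 40.05  # 20log10(d) + 20log10(2.4e9) - 147.55
    rx_dbm = tx_dbm + beam_gain_db - fspl_db
    return int(np.argmax(rx_dbm)), rx_dbm

aps = [(0.0, 0.0), (6.0, 0.0)]
pos = locate(aps, aoa_deg=[45.0, 135.0])  # bearings intersect near (3, 3)
best, power = assign_ap(aps, pos)
print(pos, best, power)
```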
Currently, I have developed a prototype in which an AP can transmit data to a WiWear device through a wake-up receiver. Leveraging the Inter-Processor Communication (IPC) function of the ARM micro-controller, the system enables both broadcast and unicast downlink channels, which are crucial for supporting mobile sensor nodes and reducing energy consumption. The 2nd version of the WiWear device is smaller, so participants can wear it comfortably during user studies.
Figure 2.1. The overall architecture of DeepLight.
Figure 2.2. A demo video of the DeepLight system.
I am actively working on the DeepLight project, in which we improve the robustness of screen-camera communication by using a deep-learning model to enable a robust, holistic decoding mechanism. In screen-camera communication, a grid of bits is embedded in several consecutive video frames shown on a screen. An application then records the frames shown on the screen and decodes the original data. Current approaches need an accurate estimate of the four screen corners to split each video frame into a grid (matching the original grid) and decode each bit from its cell (see the sketch below). However, accurate corner estimation is rarely achievable in practical scenes, especially when the camera is far from the screen (> 1 m) against an arbitrary background.
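For context, here is a minimal sketch of that conventional per-cell decoding baseline, assuming OpenCV and a hypothetical 10x10 bit grid; note how every cell depends on the four estimated corners, so any corner error shifts the whole grid:

```python
import cv2
import numpy as np

def decode_grid(frame, corners, grid=10):
    """Conventional per-cell decoding: rectify the screen with a homography
    from the four estimated corners (ordered TL, TR, BR, BL), then threshold
    the mean intensity of each cell. Inaccurate corners corrupt every cell."""
    side = 400  # hypothetical rectified resolution
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    screen = cv2.warpPerspective(frame, H, (side, side))
    gray = cv2.cvtColor(screen, cv2.COLOR_BGR2GRAY)
    cell = side // grid
    cells = gray.reshape(grid, cell, grid, cell).mean(axis=(1, 3))
    return (cells > cells.mean()).astype(np.uint8)  # naive per-cell bits
```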
We propose to use a convolutional neural network (CNN) to enable holistic "bit" inference. A CNN can automatically learn the relations among consecutive frames while accounting for other factors such as ambient light, hand-motion artefacts, and screen-extraction errors. Each output bit is inferred from the entire screen (not just a single cell), so it is less affected by the distortion caused by inaccurate screen extraction.
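A minimal sketch of such a holistic decoder, assuming PyTorch; the frame count, grid size, and layer shapes are illustrative placeholders, not DeepLight's actual architecture:

```python
import torch
import torch.nn as nn

class HolisticBitNet(nn.Module):
    """Infer the full bit grid from a short stack of consecutive frames, so
    each bit decision sees the whole screen rather than one cell."""
    def __init__(self, n_frames=3, grid=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * n_frames, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((grid, grid)),  # one feature vector per grid cell
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # one logit per cell

    def forward(self, frames):  # frames: (batch, 3*n_frames, H, W)
        logits = self.head(self.encoder(frames))
        return torch.sigmoid(logits).squeeze(1)  # (batch, grid, grid) bit probabilities
```

Because the receptive field of each output cell covers the whole (roughly extracted) screen region, the network can tolerate corner errors that would break the per-cell baseline above.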
Currently, the main processing pipeline of DeepLight runs on a server. In the demo video, the iPhone records a 1-second video clip and transmits it to a server for processing. The long delay is mostly caused by transmitting the high-quality video: the decoder itself takes only ~15 ms and the screen extractor ~200 ms. Note that the system runs the screen extractor only once every 10 video frames. I am developing a purely mobile version on the iPhone and the Jetson Nano.
Figure 3.1. A user wearing a WiWear device while working near a multi-antenna WiFi AP built from two WARPv3 boards.
Most battery-less systems rely on energy harvested from sunlight (solar energy). Indoors, however, the energy available from light is much lower. Alternative energy sources such as kinetic and piezoelectric energy have also been explored. RF energy (e.g., RFID, WiFi) is another interesting source that has been used to power ultra-low-power devices. Unlike other energy sources, WiFi is controllable and highly available in indoor environments. The problem with WiFi is that its energy is scattered in all directions around the antenna, so the energy density is too low to power a wearable with energy-hungry sensors. In this work, I want to answer the question: can a wearable (with inertial sensors) be powered by WiFi transmissions? I developed a multi-antenna WiFi AP (on the WARPv3 platform) that provides two key functions: (1) estimating the angle of arrival (AoA) of the incoming signal, through which it detects the direction of the device, and (2) generating WiFi transmissions that are beam-formed toward the device to increase the energy density in its direction. The device then harvests the energy from the boosted electromagnetic wave and uses it to operate the entire system, including inertial sensors.
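The following is a toy sketch of those two functions, assuming NumPy, a uniform linear array with half-wavelength spacing, and a single-tone channel estimate; the actual WARPv3 implementation is considerably more involved:

```python
import numpy as np

def estimate_aoa(csi, d=0.5):
    """Estimate AoA from the mean phase difference across a uniform linear
    array (antenna spacing d in wavelengths); csi holds one complex channel
    estimate per antenna for a single tone."""
    dphi = np.angle(csi[1:] * np.conj(csi[:-1])).mean()
    return np.degrees(np.arcsin(dphi / (2 * np.pi * d)))

def beamform_weights(aoa_deg, n_ant=4, d=0.5):
    """Conjugate weights that steer the transmit beam toward aoa_deg,
    boosting the energy density in the device's direction."""
    k = np.arange(n_ant)
    steering = np.exp(1j * 2 * np.pi * d * k * np.sin(np.radians(aoa_deg)))
    return np.conj(steering) / np.sqrt(n_ant)

# Simulated device at 30 degrees:
csi = np.exp(1j * 2 * np.pi * 0.5 * np.arange(4) * np.sin(np.radians(30)))
print(estimate_aoa(csi))                    # ~30 degrees
print(beamform_weights(estimate_aoa(csi)))  # steering weights
```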
Figure 4.1. A participant playing ping-pong with a robot in our instrumented room.
Sensor-based gesture recognition is a promising interaction modality for mobile and wearable devices, as it does not require a camera to track the user's hand. Though sensor-based gesture recognition has been successfully applied in several cases, its key limitation is recognition latency: recognition algorithms extract the segment of sensory data that likely contains a gesture and then classify it, which forces the system to delay classification until the end of a gesture. That is too late for interactive applications (e.g., a table tennis game). In this work, I try to answer the question: can a wearable (e.g., a smartwatch) recognize gestures early, before the gesture actually ends? I propose a Hidden Markov Model (HMM)-based algorithm that continuously evaluates the probability that each sensor sample belongs to one of six table tennis gestures or a null gesture. Intuitively, if the user is performing a specific gesture, their hand is likely to move within a limited region. Exploiting this observation, I propose a hand-trajectory tracking algorithm that uses the HMM states (in the gesture recognizer) as features to predict the user's hand position. The proposed algorithm is sufficiently lightweight to run in real time on commercial smartwatches. More importantly, it can recognize the six table tennis gestures early, before the gestures end, and supports hand tracking during gestures.
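To illustrate the per-sample evaluation, here is a toy sketch, assuming NumPy, a hypothetical 3-state left-to-right gesture HMM, and made-up per-sample likelihoods, of an online forward-algorithm update that can fire a decision before the gesture ends:

```python
import numpy as np

def step(alpha, A, likelihood):
    """One online forward-algorithm step of an HMM: propagate the state
    belief `alpha` through transition matrix A, weight it by the per-state
    likelihood of the newest sensor sample, and renormalise."""
    alpha = (A.T @ alpha) * likelihood
    return alpha / alpha.sum()

# Hypothetical 3-state left-to-right gesture model: fire an early decision
# as soon as the belief has moved past the first (gesture-onset) state.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
alpha = np.array([1.0, 0.0, 0.0])
for lik in ([0.9, 0.1, 0.1], [0.2, 0.9, 0.1], [0.1, 0.8, 0.6]):
    alpha = step(alpha, A, np.array(lik))
    if alpha[1:].sum() > 0.8:
        print("early recognition fired", alpha)
```

The same per-sample state beliefs are what the trajectory tracker can reuse as features, since they summarise how far the hand has progressed through the gesture.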
High-performance TCP reassembly on FPGA for Network Intrusion Detection Systems (NIDS): TCP packets frequently arrive out of order due to network conditions. To let a NIDS perform deep packet inspection, the TCP packets must be put back in order before being scanned. I developed a TCP reassembly engine on the NetFPGA platform that sorts TCP packets at line rate; a software reference model is sketched below.
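A minimal Python sketch of the reassembly logic (a sequential reference model only; it ignores retransmissions and overlaps, which the hardware engine must also handle):

```python
def reassemble(segments, isn):
    """Buffer out-of-order TCP segments and release the in-order byte stream.
    segments: iterable of (seq, payload) pairs; isn: initial sequence number."""
    pending, expected, stream = {}, isn, bytearray()
    for seq, payload in segments:
        pending[seq] = payload
        while expected in pending:  # flush every now-contiguous segment
            data = pending.pop(expected)
            stream += data
            expected += len(data)
    return bytes(stream)

# Segments arriving out of order still yield b"helloworld":
print(reassemble([(1005, b"world"), (1000, b"hello")], isn=1000))
```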
High-performance packet-header matching engine on FPGA for Network Intrusion Detection Systems (NIDS): A quick way to detect malicious packets is to match suspicious IP packet headers (e.g., source addresses that should never appear on an internal network). I developed a high-speed matching engine on the NetFPGA platform to match packet headers against rules stored in a database; a simple reference model follows.
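A toy Python reference model with a hypothetical rule set; the FPGA engine matches the same header fields against all rules in parallel to sustain line rate:

```python
import ipaddress

# Hypothetical rules: (source network, destination port) pairs that should
# never be seen on the internal network; None matches any port.
RULES = [(ipaddress.ip_network("10.0.0.0/8"), 23),
         (ipaddress.ip_network("192.0.2.0/24"), None)]

def is_malicious(src_ip, dst_port):
    """Sequential reference model of the header-matching engine."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net and port in (None, dst_port) for net, port in RULES)

print(is_malicious("192.0.2.7", 80))   # True: blacklisted source network
print(is_malicious("172.16.0.1", 80))  # False: no rule matches
```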