Project

SNOW

Subscribing to Knowledge via Channel Pooling for Transfer & Lifelong Learning of Convolutional Neural Networks

SNOW is an efficient learning method that improves training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks, based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in a task-specific delta model. The source model is responsible for generating a large number of generic feature maps, while the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since the source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. Each delta model is also only a fraction of the source model's size, so SNOW provides model-size efficiency as well. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed on various image classification tasks compared to existing transfer and lifelong learning practices.
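
For illustration, below is a minimal PyTorch-style sketch of the channel-pooling idea under simplifying assumptions: a learnable importance score per source channel, top-K selection over the frozen source feature maps, and fusion with a small task-specific branch. The module names (ChannelPool, DeltaModel) and all dimensions are hypothetical and not taken from the SNOW implementation.

```python
# Minimal sketch (not the official SNOW code): select top-K channels of a
# frozen source feature map with learnable importance scores, then fuse them
# with a small task-specific "delta" branch.
import torch
import torch.nn as nn

class ChannelPool(nn.Module):
    """Learnable per-channel gate that keeps only the top-K source channels."""
    def __init__(self, num_channels: int, k: int):
        super().__init__()
        self.k = k
        self.scores = nn.Parameter(torch.zeros(num_channels))   # channel importance

    def forward(self, feats: torch.Tensor) -> torch.Tensor:     # feats: (N, C, H, W)
        topk = torch.topk(self.scores, self.k).indices           # indices of useful channels
        return feats[:, topk] * torch.sigmoid(self.scores[topk]).view(1, -1, 1, 1)

class DeltaModel(nn.Module):
    """Small target-task head that subscribes to pooled source channels."""
    def __init__(self, src_channels=256, k=64, local_channels=32, num_classes=10):
        super().__init__()
        self.pool = ChannelPool(src_channels, k)
        # Local branch (fed from the same source features here purely for brevity).
        self.local = nn.Conv2d(src_channels, local_channels, 3, padding=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(k + local_channels, num_classes))

    def forward(self, src_feats):
        fused = torch.cat([self.pool(src_feats), self.local(src_feats)], dim=1)
        return self.head(fused)

# Usage: the frozen source backbone runs once in inference mode; its feature
# maps can be shared by one or more delta models, one per target task.
src_feats = torch.randn(8, 256, 14, 14)          # stand-in for frozen source output
logits = DeltaModel()(src_feats)
print(logits.shape)                               # torch.Size([8, 10])
```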

*This work was published in ICLR 2020

Card-stunt as a Service [CaaS]

Empowering a Massively Packed Crowd for Instant Collective Expressiveness

Imagine a densely packed crowd that gathers to convey a common message, such as people in a candlelight vigil or a protest. We envision an innovation through mobile computing technologies to empower such a crowd by enabling them to simply hold their phones up and create a massive collective visualization on top of them. We propose Card-stunt as a Service (CaaS), a service that enables a densely packed crowd to instantly visualize symbols using their mobile devices and a server-side service. The key challenge in realizing such an instant collective visualization is achieving instant, infrastructure-free, decimeter-level localization of individuals in a massively packed crowd while maintaining low latency. CaaS addresses this challenge with mobile visible-light angle-of-arrival (AoA) sensing and scalable constrained optimization. It reconstructs the relative locations of all individuals and dispatches individualized, timed pixels to each person so that everyone can play their part in the overall visualization. We deployed CaaS with 49 participants, who successfully performed a collective visualization cheering for MobiSys.
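
As a rough illustration of the localization step only, the sketch below recovers relative 2-D positions from noisy pairwise angle-of-arrival observations with a nonlinear least-squares solve, assuming bearings expressed in a shared reference frame and two anchored participants to fix the gauge. The four-person layout, noise level, and anchoring choice are invented for the example and are not the CaaS formulation.

```python
# Rough sketch of AoA-based relative localization (not the CaaS formulation):
# given noisy bearings theta_ij from person i toward person j, recover 2-D
# positions by fixing two anchors and solving a nonlinear least-squares problem.
import numpy as np
from scipy.optimize import least_squares

true_pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # toy 4-person layout
rng = np.random.default_rng(0)
meas = {(i, j): np.arctan2(*(true_pos[j] - true_pos[i])[::-1]) + rng.normal(0, 0.02)
        for i in range(4) for j in range(4) if i != j}   # noisy bearing of j seen from i

def residuals(flat):
    # Persons 0 and 1 are anchored to remove translation/rotation/scale freedom.
    pos = np.vstack([[0.0, 0.0], [1.0, 0.0], flat.reshape(-1, 2)])
    res = []
    for (i, j), theta in meas.items():
        dx, dy = pos[j] - pos[i]
        err = np.arctan2(dy, dx) - theta
        res.append(np.arctan2(np.sin(err), np.cos(err)))  # wrap the angular error
    return res

coarse_guess = np.array([0.8, 0.9, 0.1, 1.2])             # rough initial positions of 2 and 3
sol = least_squares(residuals, x0=coarse_guess)
print(sol.x.reshape(-1, 2))                               # close to [[1, 1], [0, 1]]
```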

*This work was done when I was an intern at IBM Research Austin.

*This work was published in MobiSys 2017 [Best Video Award]

SymmetriSense

Mobile Platform for Enabling Near-surface Interactivity on Everyday Glossy Surfaces using a Single Commodity Smartphone

We developed SymmetriSense, a technology that enables near-surface 3-D fingertip localization above arbitrary glossy surfaces using a single commodity camera, such as the one in a smartphone. The state of the art requires dedicated devices such as depth cameras or stereoscopic cameras, which are still not as ubiquitous as the regular camera built into every smartphone. SymmetriSense localizes a user's fingertip using only a smartphone, which is ubiquitously available. To address the challenge of using a single camera, we proposed a novel technique that utilizes the fingertip's natural reflection and the principle of reflection symmetry. SymmetriSense achieves sub-centimeter 3-D localization accuracy in most cases, even as environmental conditions change.
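
The core geometric observation can be sketched as follows: the fingertip and its mirror image on the glossy surface are symmetric about the surface plane, so their midpoint lies on the surface and the hover height is half their separation. The snippet below only illustrates this relation with a toy plane; detecting the reflection in the camera image and recovering the 3-D points from a single view, which is where the real difficulty lies, is omitted.

```python
# Geometric sketch of the reflection-symmetry idea (illustrative only): the
# fingertip F and its mirror image R on a glossy surface are symmetric about
# the surface plane, so the touch point is their midpoint and the hover height
# is half of |F - R|.
import numpy as np

def mirror_across_plane(point, p0, n):
    """Mirror a 3-D point across the plane through p0 with unit normal n."""
    n = n / np.linalg.norm(n)
    return point - 2.0 * np.dot(point - p0, n) * n

def fingertip_from_symmetry(F, R):
    """Given the fingertip F and its reflection R, recover hover height and touch point."""
    height = np.linalg.norm(F - R) / 2.0        # distance above the surface
    touch_point = (F + R) / 2.0                 # lies on the glossy surface
    return height, touch_point

# Toy check with a horizontal surface at z = 0.
surface_p0, surface_n = np.zeros(3), np.array([0.0, 0.0, 1.0])
F = np.array([0.12, 0.05, 0.03])                # fingertip 3 cm above the surface
R = mirror_across_plane(F, surface_p0, surface_n)
print(fingertip_from_symmetry(F, R))            # (~0.03, [0.12, 0.05, 0.0])
```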

*This work was done when I was an intern at IBM Research Austin.

*This work was published in CHI 2016. [ACM DL][Video]

PowerForecaster

Predicting Smartphone Power Impact of Continuous Sensing Applications at Pre-installation Time and Developing Context-aware Battery Management Advisor

Today's smartphone app markets miss a key piece of information: the power consumption of apps. This causes a severe problem for continuous sensing apps, as they consume significant power without users' awareness. Users have no choice but to repeatedly install one app after another and experience their power use firsthand. To break such an exhaustive cycle, we proposed PowerForecaster, a system that provides users with the power use of sensing apps at pre-installation time. Such advance power estimation is extremely challenging since the power cost of a sensing app varies largely with users' physical activities and phone use patterns. PowerForecaster adopts a novel power emulator that emulates the power use of a sensing app while reproducing users' physical activities and phone use patterns, achieving accurate, personalized power estimation. Our experiments with three commercial apps and two research prototypes show that PowerForecaster achieves 93.4% accuracy across 20 use cases. We also optimized the system to accelerate emulation and reduce overhead, and showed the effectiveness of these optimization techniques.
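
To make the emulation idea concrete, here is a minimal sketch of trace-driven power estimation: a recorded trace of the user's activity is replayed through a simple component-level power model of a sensing app and the draw is integrated over time. The component power numbers, the duty-cycling policy, and the trace format are illustrative assumptions, not measurements from PowerForecaster.

```python
# Minimal sketch of trace-driven power emulation (component power numbers and
# the duty-cycling policy are illustrative, not measured values): replay a
# recorded trace of user activity through a simple component-level power model
# of a sensing app and integrate the draw over time.
HYPOTHETICAL_POWER_MW = {            # average draw per component while active
    "accelerometer": 15.0,
    "gps": 250.0,
    "cpu_feature_extraction": 90.0,
}

def active_components(activity: str) -> list[str]:
    """Duty-cycling policy of a hypothetical sensing app: GPS only while moving."""
    comps = ["accelerometer", "cpu_feature_extraction"]
    if activity in ("walking", "running"):
        comps.append("gps")
    return comps

def emulate_energy_mj(activity_trace: list[str], dt_s: float = 1.0) -> float:
    """Integrate modeled power over the replayed trace; returns energy in millijoules."""
    return sum(HYPOTHETICAL_POWER_MW[c] * dt_s
               for activity in activity_trace
               for c in active_components(activity))

# The same app costs very different amounts for a sedentary and an active user,
# which is why the estimate has to be personalized with the user's own trace.
sedentary_hour = ["still"] * 3600
active_hour = ["walking"] * 2400 + ["still"] * 1200
print(emulate_energy_mj(sedentary_hour), emulate_energy_mj(active_hour))
```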

While studying the power consumption characteristics of continuous sensing apps, we found that many of them not only drain the battery continuously in the background, but also that their battery use varies with users' contexts. Without understanding such characteristics, users may perceive a growing disparity between their estimate of near-future battery consumption and the actual outcome. To help users manage their battery effectively, we developed Sandra, a novel context-aware smartphone battery information advisor.

*These works were published in SenSys 2015 [ACM DL][Paper] and UbiComp 2015 [ACM DL][Paper].

SmartWatch Battery

Exploring Current Practices for Battery Use and Management of Smartwatches

Smartwatches are an emerging class of wearable devices, and a number of commercial models have been released and widely adopted. While many people have concerns about the battery life of a smartwatch, there has been no systematic study of how smartwatches are mainly used, how long their batteries last, or how real users discharge and recharge them. Accordingly, we know little about current practices for battery use and management of smartwatches. To address this, we conducted an online survey examining the usage behaviors of 59 smartwatch users and an in-depth analysis of battery usage data from 17 Android Wear smartwatch users. Through the survey and data analysis, we investigated the unique characteristics of smartwatch battery usage, users' satisfaction and concerns, and recharging patterns.

*This work was published in ISWC 2015 [ACM DL][Paper]

TalkBetter

Everyday Intervention Care for Children with Language Delay on a Mobile Face-to-Face Interaction Monitoring Platform

Speech-language pathologists highlight that effective parent participation in everyday parent-child conversation is important for treating children's language delay. Our key inspiration behind the TalkBetter project was an emerging mobile platform for monitoring everyday face-to-face interaction, especially conversation monitoring, e.g., SocioPhone. SocioPhone monitors meta-linguistic aspects of conversation, such as turn-taking, prosodic features, the dominant participant, and pace, using a volume-topography-based method. We found that a meaningful number of the guidelines for parents of a child with language delay concern this meta-linguistic perspective, such as "Wait for the child to talk back" and "Talk more slowly." In collaboration with speech-language pathologists, we designed and implemented TalkBetter, a mobile in-situ intervention service that helps parents in daily parent-child conversation through real-time meta-linguistic analysis of ongoing conversations.
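
As a toy illustration of how turn-level meta-linguistic checks can drive in-situ prompts, the sketch below scans diarized turns and flags when a parent does not leave enough pause for the child to respond or talks too fast. The thresholds, message texts, and Turn structure are made up for the example and are not the clinical guidelines or the TalkBetter pipeline.

```python
# Illustrative sketch of turn-level checks behind an in-situ prompt (thresholds
# and messages are made up, not the clinical guidelines used in TalkBetter).
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str      # "parent" or "child"
    start_s: float
    end_s: float
    syllables: int    # rough proxy for speaking pace

MIN_PAUSE_S = 1.5             # give the child time to respond
MAX_SYLLABLES_PER_S = 4.0     # "talk more slowly" threshold

def check_turns(turns: list[Turn]) -> list[str]:
    prompts = []
    for prev, cur in zip(turns, turns[1:]):
        if prev.speaker == "parent" and cur.speaker == "parent" \
                and cur.start_s - prev.end_s < MIN_PAUSE_S:
            prompts.append("Wait for the child to talk back.")
        rate = cur.syllables / max(cur.end_s - cur.start_s, 1e-6)
        if cur.speaker == "parent" and rate > MAX_SYLLABLES_PER_S:
            prompts.append("Talk more slowly.")
    return prompts

turns = [Turn("parent", 0.0, 2.0, 7), Turn("parent", 2.4, 4.0, 9)]
print(check_turns(turns))     # both prompts fire for this toy exchange
```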

*These works were published in MobiSys 2013 [ACM DL][Paper] and CSCW 2014 [ACM DL][Paper].

*The CSCW paper won the Best Paper Award.

Sinabro

Opportunistic and Unobtrusive Mobile Electrocardiogram Monitoring System

We developed Sinabro, an unobtrusive mobile electrocardiogram (ECG) monitoring system that monitors the user's ECG opportunistically during daily smartphone use. Daily ECG monitoring would open up an unprecedented opportunity for pervasive healthcare applications. Despite its huge potential, however, it has not yet become a reality due to its obtrusiveness. We first studied the potential opportunities to capture ECG during daily smartphone use without requiring the user's explicit attention. Based on these opportunities, we implemented a prototype ECG sensor that integrates neatly with a smartphone, along with the Sinabro system that provides ECG-related physiological status. We attached multiple electrodes to the corners, front, and back of the smartphone to maximize the opportunities for ECG measurement whenever the user touches the phone with two limbs during daily use, such as two hands when sending a text message or an ear and a hand when making a phone call.
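
On the signal side, once two limbs close the electrode circuit, a captured segment can be band-pass filtered and its R-peaks detected to derive heart rate. The sketch below shows this on a synthetic waveform; the filter band, thresholds, and toy signal are illustrative choices, not the Sinabro processing chain.

```python
# Sketch of the signal-processing side with a synthetic waveform (filter band,
# thresholds, and the toy signal are illustrative): band-pass filter the
# captured segment and detect R-peaks to estimate heart rate.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250                                             # sampling rate of the ECG front end (Hz)
ecg = 0.005 * np.random.default_rng(0).normal(size=10 * fs)
ecg[fs // 2 :: int(0.8 * fs)] += 1.0                 # toy R-peaks every 0.8 s (75 bpm)

b, a = butter(2, [5, 15], btype="bandpass", fs=fs)   # crude QRS-emphasis band
filtered = filtfilt(b, a, ecg)
peaks, _ = find_peaks(filtered, height=0.5 * filtered.max(), distance=int(0.4 * fs))
heart_rate_bpm = 60.0 / np.mean(np.diff(peaks) / fs)
print(round(heart_rate_bpm))                         # ~75
```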

*These works were published in HotMobile 2014 [ACM DL][Paper] and EMBC 2014 [ACM DL][Paper].

FaceLog

Capturing User's Everyday Face Using Mobile Devices

To date, initial attempts at lifelogging services have been proposed, capturing what a user sees and hears, whom they meet, and where they visit. While today's lifelogging services capture and utilize what the user perceives, they are still devoid of the notion of how the user is perceived by others. We focused on a key aspect of lifelogging that has been unexplored so far: appearance context (one's facial expression, body image, gaze, posture, gesture, etc.). Monitoring appearance in a fine-grained and momentary manner enables total recall of a user, i.e., not only what the user perceives but also how the user is perceived by others. We proposed FaceLog, a face logging service that automatically and opportunistically captures the user's everyday face. Our prototype takes a picture of the user's face with the front camera of the smartphone whenever the screen is turned on, a moment at which the face is likely to be within the camera's viewing angle.
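
The capture logic itself is simple, as sketched below: when the screen turns on, grab a frame from the front camera and keep it only if a face is actually in view. The screen-on hook is platform-specific (on Android it would be a broadcast receiver), so it is stubbed here as a plain callback, and OpenCV's stock Haar detector stands in for whatever face detector the prototype used.

```python
# Sketch of the opportunistic capture logic (the screen-on hook is platform
# specific and stubbed as a plain callback; OpenCV's stock Haar face detector
# stands in for the prototype's detector).
import time
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def on_screen_on(frame) -> bool:
    """Called with a front-camera frame when the screen turns on; keep the
    photo only if a face is actually in view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                                  # screen-on without the face in view
    cv2.imwrite(f"facelog_{int(time.time())}.jpg", frame)
    return True

# On the phone, this function would be registered as the screen-on event handler.
```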

*These works were published in UbiComp 2012 (poster) [ACM DL][Paper] and MobiSys 2013 Ph.D. forum (Best Presentation)

ExerLink

Mobile Platform for Pervasive Social Exer-games with Heterogeneous Exercise Devices

To provide pervasive social exertion games that transform solitary exercises into social activities, we proposed a novel platform called ExerLink that converts exercise intensity into game inputs and intelligently balances the intensity/delay variations of each exercise for a fair gameplay experience. We designed and implemented four exergame controllers by significantly augmenting existing exercise devices, and performed preliminary human subject studies to evaluate the performance of the controllers and the user experience of an exergame.
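
A toy sketch of the intensity-to-input mapping is shown below: each device reports a raw cadence that is normalized against a per-user calibration range and smoothed before being fed to the game, so heterogeneous devices drive the same input scale. The device names, calibration ranges, and smoothing factor are invented for illustration and are not ExerLink's balancing algorithm.

```python
# Toy sketch of intensity-to-input mapping across heterogeneous exercise
# devices (calibration ranges and smoothing are invented for illustration):
# each device's raw cadence is normalized to [0, 1] against a per-user
# calibration, so a jump rope and a hula hoop can drive the same game input.
CALIBRATION = {                  # per-user resting/peak cadence per device
    "jump_rope": (0.0, 3.0),     # jumps per second
    "hula_hoop": (0.0, 2.0),     # rotations per second
}

def to_game_input(device: str, cadence: float, prev_input: float, alpha: float = 0.3) -> float:
    """Normalize device cadence to [0, 1] and smooth it to absorb delay/jitter."""
    lo, hi = CALIBRATION[device]
    level = min(max((cadence - lo) / (hi - lo), 0.0), 1.0)
    return (1 - alpha) * prev_input + alpha * level   # exponential smoothing

# Two players on different devices produce comparable inputs for fair play.
print(to_game_input("jump_rope", 2.4, prev_input=0.5))
print(to_game_input("hula_hoop", 1.6, prev_input=0.5))
```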

*These works were published in MobiSys 2012 [ACM DL][Paper] and CSCW 2012 [ACM DL][Paper].

MallingSense

Understanding Customer Malling Behavior in an Urban Shopping Mall Using a Mobile Framework for Store Visit Detection

The advance of smartphones and mobile computing technology opens an opportunity for panoptic monitoring of customers' shopping behavior in modern shopping malls. To enable smartphones to understand shopping behavior, we computationally modeled it as a sequence of store visits: 'who visits which store, when, for what purpose, and for how long?' We developed a mobile framework for store visit detection and evaluated its performance in a real-world environment, the COEX Mall, with tens of real customers. We adapted WiFi fingerprint-based indoor localization techniques for store recognition and accelerometer-based mobility monitoring techniques for entrance/departure detection. Using the framework, we collected store visit traces from real customers and analyzed their malling behavior.
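
A simplified sketch of the visit-detection pipeline follows: nearest-neighbor matching of a scanned WiFi fingerprint against per-store reference fingerprints recognizes the store, and an accelerometer-based stationarity check confirms that the customer is actually dwelling. The fingerprints, AP names, and thresholds below are fabricated for the example.

```python
# Simplified sketch of store-visit detection (fingerprints and thresholds are
# fabricated): nearest-neighbor WiFi fingerprint matching recognizes the store,
# and an accelerometer stationarity check confirms the customer is dwelling.
import numpy as np

STORE_FINGERPRINTS = {           # mean RSSI (dBm) per AP, collected per store
    "bookstore": {"ap1": -45, "ap2": -70, "ap3": -85},
    "coffee":    {"ap1": -80, "ap2": -50, "ap3": -60},
}
APS = ["ap1", "ap2", "ap3"]
MISSING_RSSI = -100              # value used when an AP is not heard

def recognize_store(scan: dict) -> str:
    """Match a WiFi scan to the closest store fingerprint (Euclidean in RSSI space)."""
    v = np.array([scan.get(ap, MISSING_RSSI) for ap in APS], dtype=float)
    dists = {store: np.linalg.norm(v - np.array([fp.get(ap, MISSING_RSSI) for ap in APS]))
             for store, fp in STORE_FINGERPRINTS.items()}
    return min(dists, key=dists.get)

def is_dwelling(accel_magnitudes: np.ndarray, threshold: float = 0.3) -> bool:
    """Low variance of acceleration magnitude suggests the customer has stopped walking."""
    return float(np.std(accel_magnitudes)) < threshold

scan = {"ap1": -78, "ap2": -52, "ap3": -63}
accel = np.random.default_rng(1).normal(9.8, 0.1, size=100)    # near-still samples
if is_dwelling(accel):
    print("visit:", recognize_store(scan))                      # visit: coffee
```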

*This work was published in MCSS 2013 (2nd ACM Workshop on Mobile Systems for Computational Social Science, at UbiComp 2013) [ACM DL][Paper].

E-Gesture

Collaborative Architecture for Energy-efficient Gesture Recognition with Hand-worn Sensor and Mobile Devices

Gesture is a promising mobile user interface modality that enables eyes-free interaction without stopping or impeding movement. We proposed E-Gesture, a collaborative architecture for energy-efficient gesture recognition that greatly reduces energy consumption while achieving high recognition accuracy in dynamic mobile situations. We developed a closed-loop collaborative segmentation architecture that can (1) be implemented on resource-scarce sensor devices, (2) adaptively turn off power-hungry motion sensors without compromising recognition accuracy, and (3) reduce false segmentations caused by dynamic changes in body movement. We also developed a mobile gesture classification architecture for smartphones that enables HMM-based classification models to better fit multiple mobility situations.
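
A rough sketch of the two-stage idea is shown below: a cheap gyroscope-energy gate on the wearable segments candidate gestures so that power-hungry processing stays off otherwise, and the phone then scores each segment against per-gesture HMMs. The gating thresholds, gesture names, and hmmlearn models trained on random data are placeholders, not the E-Gesture implementation.

```python
# Rough sketch of the two-stage idea (placeholder data/models, not the
# E-Gesture implementation): a gyro-energy gate segments candidate gestures on
# the wearable, and the phone scores each segment against per-gesture HMMs.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def gate_segments(gyro_energy: np.ndarray, on_thresh=2.0, off_thresh=1.0):
    """Hysteresis gate over per-frame gyro energy; yields (start, end) segments."""
    segments, start = [], None
    for i, e in enumerate(gyro_energy):
        if start is None and e > on_thresh:
            start = i                                   # wake up the pipeline
        elif start is not None and e < off_thresh:
            segments.append((start, i))                 # segment ends, sensors can sleep
            start = None
    return segments

# Placeholder per-gesture HMMs trained on random 6-axis (accel + gyro) frames.
rng = np.random.default_rng(0)
models = {}
for name, shift in [("swipe", 0.0), ("shake", 3.0)]:
    m = GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(rng.normal(shift, 1.0, size=(200, 6)))
    models[name] = m

def classify(segment: np.ndarray) -> str:
    """Pick the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(segment))

energy = np.concatenate([np.full(20, 0.2), np.full(30, 3.0), np.full(20, 0.2)])
print(gate_segments(energy))                            # [(20, 50)]
print(classify(rng.normal(3.0, 1.0, size=(30, 6))))     # likely "shake"
```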

*This work was published in SenSys 2011 [ACM DL][Paper].