Robot Programming Guidance System
This project aimed to develop a robot programming system that gives users programming recommendations for meeting the timing and camera-sensing-quality requirements of robot tasks. We quantified each task's timing and sensing-quality capabilities from the camera geometry, system performance profiles, and the robot tasks themselves. Based on these quantified capabilities, our system guided users in selecting proper robot skill parameters, adding cameras, or reconfiguring the robot's work environment to meet their needs. We deployed the framework on a 6-DOF robot arm in both real-world and simulated environments. Following the guidance shortened a 58-second robot task by at least 13 seconds. To evaluate the system's efficacy, we conducted a user study with 24 participants, which showed reduced programming time and improved programming correctness.
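As a toy illustration of the timing side of this guidance, the sketch below compares a task's summed per-skill execution times against a deadline and points at the most expensive skill to retune first. The skill names and numbers are invented for illustration; they are not taken from the actual system.

```python
# Toy timing-budget check over profiled per-skill execution times.
# All names and numbers are illustrative, not from the real system.
skill_times = {"move_to_part": 12.0, "visual_scan": 25.0,
               "grasp": 9.0, "place": 12.0}   # seconds, from profiling
deadline = 45.0                                # task timing requirement

total = sum(skill_times.values())
if total > deadline:
    # Recommend retuning the most expensive skill first.
    worst = max(skill_times, key=skill_times.get)
    print(f"over budget by {total - deadline:.1f} s; retune '{worst}'")
```

A real guidance system would also weigh how much each parameter change costs in sensing quality, but the budget check itself reduces to this kind of comparison.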
*Yi-Hsuan Hsieh, "Sensing-based capability-aware robot programming guidance system for non-expert users", Ph.D. Dissertation, Nov 21, 2022. (Includes the user-study results.) [Dissertation Link]
*Yi-Hsuan Hsieh, Aloysius K. Mok, “SQGS: Sensing-based Quality-aware Robot Programming Guidance System for Non-experts”, Conference Paper, IEEE 26th International Conference on Emerging Technologies and Factory Automation (IEEE ETFA 2021). [Paper Link] [Video Link]
*Yi-Hsuan Hsieh, Pei-Chi Huang, Aloysius K. Mok, “SQRP: Sensing Quality-aware Robot Programming System for Non-expert Programmers”, Conference Paper, IEEE International Conference on Robotics and Automation (IEEE ICRA 2021). [Paper Link]
*Yi-Hsuan Hsieh, Pei-Chi Huang, Zihang He, Aloysius K. Mok, “Sensing Quality-aware Robot Programming System for Non-expert Programmers”, Demo Session, IEEE Real-Time Systems Symposium (IEEE RTSS 2020).
Programming System for Robot Perception Attention
This project aimed to develop a programming system that lets users program a robot's perceptual attention to locate a target object in a multi-camera environment. We provided users with a simple language for describing the target's location: enclosing the target in a 3D virtual box. Our system automatically projected the 3D box onto the environment cameras, performed a 3D reconstruction of the target object, and quantified the reconstruction's quality. It also let users tune the tradeoff between quality and performance to match their needs. We deployed the system in both real-world and simulated environments.
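The projection step, mapping the corners of the user's 3D virtual box into each camera's image, can be sketched with a standard pinhole camera model. The intrinsics `K` and pose `R, t` below are illustrative assumptions, not the system's actual API.

```python
import numpy as np

def project_box(corners_w, K, R, t):
    """Project 3D box corners (world frame) into pixel coordinates."""
    cam = R @ corners_w.T + t.reshape(3, 1)   # world -> camera frame
    pix = K @ cam                             # camera frame -> image plane
    return (pix[:2] / pix[2]).T               # perspective divide -> (u, v)

# Unit cube 4-5 m in front of a camera at the origin (identity pose).
corners = np.array([[x, y, z] for x in (0, 1)
                              for y in (0, 1)
                              for z in (4, 5)], dtype=float)
K = np.array([[500.0, 0.0, 320.0],            # toy intrinsics: f = 500 px,
              [0.0, 500.0, 240.0],            # principal point (320, 240)
              [0.0, 0.0, 1.0]])
uv = project_box(corners, K, np.eye(3), np.zeros(3))
print(uv.shape)  # (8, 2): one pixel location per box corner
```

Each camera that sees all eight projected corners can then contribute its view to the target's 3D reconstruction.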
*Yi-Hsuan Hsieh, Pei-Chi Huang, Qixing Huang, Aloysius K. Mok, “LASSO: Location Assistant for Seeking and Searching Objects”, IEEE International Conference on Industrial Cyber-Physical Systems (IEEE ICPS 2019). [Paper Link]
A Portable Strategy for Preserving Web Applications Functionality
This project's goal was to develop a portable strategy for preserving the functionality of web applications (the websites and their MySQL databases) across platforms. We designed and implemented algorithms that automatically detect web dependencies (such as external system libraries, system configuration files, and required external files) from web code (PHP files). We analyzed 100 PHP files from the code of "The Speech Presentation in Homeric Epic" website and detected their dependencies with 78% precision and 82% recall. By finding dependencies automatically from web code, we showed the approach's potential to reduce the human effort needed to preserve a web application.
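A minimal sketch of this kind of dependency scan, using only a regular expression over PHP `include`/`require` statements; the paper's detection algorithm covers more dependency types, so this is illustrative only.

```python
import re

# PHP statements that pull other files into the application.
DEP_PATTERN = re.compile(
    r"""\b(include|include_once|require|require_once)\s*\(?\s*['"]([^'"]+)['"]""")

def find_dependencies(php_source):
    """Return file paths referenced by include/require statements."""
    return [path for _, path in DEP_PATTERN.findall(php_source)]

src = '''<?php
require_once("config/db.php");
include 'lib/helpers.php';
?>'''
print(find_dependencies(src))  # ['config/db.php', 'lib/helpers.php']
```

External system libraries and configuration files need richer analysis (e.g., tracing `exec` calls and file-system reads), which is where the detection precision/recall tradeoff comes from.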
*Weijia Xu, Maria Esteva, Deborah Beck, Yi-Hsuan Hsieh, “A Portable Strategy for Preserving Web Applications Functionality”, Digital Libraries (JCDL), 2017 ACM/IEEE Joint Conference, June 2017. [Link]
Web-based Visualization Tool for Signal Processing
This project aimed to develop a web-based visualization tool that lets users visualize signal data and run signal-processing operations on it, such as the Fourier transform and PCA. JavaScript was used as the front-end language; Python and PHP were used as the back-end languages to process the data and communicate with the MySQL database.
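The back-end processing steps can be sketched in Python with NumPy; `fourier_spectrum` and `pca` below are illustrative helper names, not the tool's actual code.

```python
import numpy as np

def fourier_spectrum(signal, fs):
    """One-sided amplitude spectrum of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

def pca(data, n_components=2):
    """Project samples (rows) onto their top principal components."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

fs = 1000                                    # 1 kHz sampling rate
t = np.arange(0, 1, 1 / fs)
freqs, amp = fourier_spectrum(np.sin(2 * np.pi * 50 * t), fs)
print(freqs[np.argmax(amp)])                 # dominant frequency: 50.0 Hz

reduced = pca(np.random.default_rng(0).normal(size=(100, 5)))
print(reduced.shape)                         # (100, 2)
```

The front end would then plot the returned frequency/amplitude arrays or the 2D PCA projection.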
* Demo video [Link]
Humans Perceive Flicker Artifacts at 500 Hz
Previous research showed that humans cannot distinguish modulated light from a stable field when televisions and monitors update above the critical flicker fusion rate (50-90 Hz). This work, however, showed that humans can perceive flicker artifacts at over 500 Hz when unconscious rapid eye movements (saccades) sweep across high-frequency edges. The 72 Hz frame rate typical of today's monitors is therefore insufficient for modern 3D display technology, which presents images with high-frequency edges.
*James Davis, Yi-Hsuan Hsieh, Hung-Chi Lee, “Humans perceive flicker artifacts at 500 Hz”, Nature Scientific Reports, 03 February 2015. [Link]
SoberDiary
We collaborated with doctors at an alcohol rehabilitation center to design and develop a mobile system that supports alcohol-dependent patients in returning to everyday life after leaving the center. I implemented a Bluetooth breath analyzer with various sensors on an Arduino platform to monitor patients' alcohol use. The sensor data was visualized on the smartphone to motivate patients to reduce their consumption.
*Kuo-Cheng Wang, Yi-Hsuan Hsieh, Chi-Hsien Yen, Chuang-Wen You, Yen-Chang Chen, Ming-Chyi Huang, Seng-Yong Lau, Hsin-Liu (Cindy) Kao, Hao-Hua Chu, "SoberDiary: A Phone-based Support System for Assisting Recovery from Alcohol Dependence", video presentation, ACM UbiComp 2014. [Link]
ThermalProbe (Wearable Embedded System)
This per-user energy-metering system used thermal imaging and thermal identification to track and attribute energy usage to individual occupants of a shared working/living space. I developed an algorithm for a wearable embedded system, worn around each occupant's neck, that encoded and decoded a thermal tag emitting a unique temperature signature for user identification.
*Chuang-Wen You, Hsin-Liu (Cindy) Kao, Bo-Jhang Ho, Nan-Chen Chen, Yi-Hsuan Hsieh, Polly Huang, Hao-hua Chu, "ThermalProbe: Exploring the Use of Thermal Identification for Per-User Energy Metering", IEEE International Conference on Green Computing and Communications (GreenCom 2014), Taipei, Taiwan, September 2014. (Best Paper Award)
Why Blow Away Heat? Harvest Server’s Heat Using Thermoelectric Generators
This project aimed to harvest electrical energy from waste heat by deploying thermoelectric generators (TEGs) on or near a server's IC hot spots, such as the CPU and memory chips. The harvested energy can power fans or sensors in a server room.
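For a back-of-envelope sense of the harvestable power: a TEG's open-circuit voltage is roughly its Seebeck coefficient times the temperature difference across it, and a matched load draws V²/(4R). All numbers below are illustrative textbook values, not measurements from the project.

```python
# Back-of-envelope TEG output estimate; illustrative values only.
seebeck = 0.05       # module Seebeck coefficient, V/K (typical Bi2Te3 module)
r_internal = 2.0     # module internal resistance, ohms
delta_t = 30.0       # IC hot spot vs. ambient, K

v_open = seebeck * delta_t               # open-circuit voltage, V
p_max = v_open ** 2 / (4 * r_internal)   # power into a matched load, W
print(f"{v_open:.2f} V, {p_max:.3f} W")  # 1.50 V, 0.281 W
```

A few hundred milliwatts per module is in the range needed to run low-power sensors, which is why the hot-spot placement matters.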
*Ted Tsung-Te Lai, Wei-Ju Chen, Yi-Hsuan Hsieh, Kuei-Han Li, Ya-Yunn Su, Polly Huang, Hao-Hua Chu, “Why Blow Away Heat? Harvest Server’s Heat Using Thermoelectric Generators”, ACM Conference on Architectural Support for Programming Languages and Operating Systems 2012 (ASPLOS’12), Poster Session, London, UK. (Best Poster Award) [Poster]
Road Rage Detection
My M.S. thesis aimed to develop a phone-based system that detected road-rage events around a driver using computer vision with an AdaBoost classifier. We deployed a camera system on a real-world vehicle and ran the event detection in parallel on a single smartphone. The system achieved real-time image-processing performance, averaging 175 ms per frame, with 85% recognition precision and 84% recall. Upon detection, the phone alerted the driver to drive defensively.
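The AdaBoost strong classifier is the sign of an alpha-weighted vote over weak learners, with each weight derived from that learner's training error. The toy decision stumps and error rates below are purely illustrative, not the thesis's trained detector.

```python
import math

def adaboost_predict(weak_classifiers, alphas, x):
    """Strong classifier: sign of the alpha-weighted vote of weak learners."""
    score = sum(a * h(x) for h, a in zip(weak_classifiers, alphas))
    return 1 if score >= 0 else -1

# Two toy decision stumps over a 2-feature sample (labels in {-1, +1}).
stumps = [lambda x: 1 if x[0] > 0.5 else -1,
          lambda x: 1 if x[1] > 0.2 else -1]
# alpha = 0.5 * ln((1 - err) / err), from each stump's training error.
alphas = [0.5 * math.log((1 - e) / e) for e in (0.2, 0.4)]
print(adaboost_predict(stumps, alphas, [0.8, 0.1]))  # 1 (positive detection)
```

In the real system, the weak learners are image features evaluated per frame, and the 175 ms per-frame budget bounds how many of them the phone can afford.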