Projects/Publications

Updated 02/06/2022

Google Scholar link

SELECTED PUBLICATIONS:

"Deep Kinematic Models for Kinematically Feasible Vehicle Trajectory Predictions," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2020).


"Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2019).


"Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets," NeurIPS workshop, 2018.


"Deep Learning of Spatio-Temporal Features with Geometric-based Moving Point Detection for Motion Segmentation," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2014).

OTHERS:

"Using Range and Bearing Observation in Stereo-based EKF SLAM," in Proc. of Towards Autonomous Robotic Systems (TAROS), Oxford, UK, 2013. (pdf)

"Optimization of a multi-stage ATR system for small target identification," in Proceedings of SPIE Defense and Security, Vol. 7696C, 2010.

PROJECTS:

SLIDES ON PREVIOUS WORK (describing the papers above)

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

National Taiwan University - Robot Perception and Learning Lab

Prof. Chieh-Chih (Bob) Wang

Thesis:

Deep Learning of Spatio-Temporal Features with Geometric-based Moving Point Detection for Motion Segmentation (ICRA 2014)


@inproceedings{icra2013_lin_motion,
  title={Deep learning of spatio-temporal features with geometric-based moving point detection for motion segmentation},
  author={Lin, Tsung-Han and Wang, Chieh-Chih},
  booktitle={2014 IEEE International Conference on Robotics and Automation},
  pages={3058--3065},
  year={2014},
  organization={IEEE}
}


FRAMEWORK:

Input: stereo images, 2 time steps (4 frames)

Feature 1: unsupervised spatio-temporal features (Reconstruction Independent Component Analysis)

Feature 2: geometric-based moving points

Segmentation: Feature 1 + Feature 2 fed into a recursive neural network for motion segmentation


Top-left: ground truth

Top-right: geometric-based moving points

Bottom-left: spatio-temporal features + recursive neural network

Bottom-right: spatio-temporal features + geometric-based moving points + recursive neural network (final, best result)

video (web)

Unsupervised Spatio-temporal Feature Learning (Layer 1):

- translational motion (moving edges)

Unsupervised Spatio-temporal Feature Learning (Layer 2):

- rotations, non-affine transformations
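As a rough illustration of the unsupervised feature learning step, the Reconstruction ICA objective can be sketched in NumPy. This is not the thesis code; the data shapes, penalty weight, and plain subgradient-descent loop are illustrative assumptions.

```python
import numpy as np

def rica_loss_grad(W, X, lam=0.1):
    """Reconstruction ICA: minimize lam * sum|W x| + ||W^T W x - x||^2,
    summed over the columns of X (each column is a whitened patch)."""
    WX = W @ X                 # feature activations, shape (k, n)
    R = W.T @ WX - X           # reconstruction residual, shape (d, n)
    loss = lam * np.abs(WX).sum() + (R ** 2).sum()
    # Gradient: L1 term gives sign(WX) X^T; reconstruction term gives
    # 2 W (X R^T + R X^T), derived from d||W^T W X - X||^2 / dW.
    grad = lam * np.sign(WX) @ X.T + 2.0 * W @ (X @ R.T + R @ X.T)
    return loss, grad

# Toy run: learn k=4 filters over d=8 dimensional "patches"
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200))
W = 0.1 * rng.standard_normal((4, 8))
losses = []
for _ in range(200):
    loss, grad = rica_loss_grad(W, X)
    losses.append(loss)
    W -= 1e-4 * grad           # small (sub)gradient step
```

In practice the filters are learned on stereo image patches stacked over time, which is what makes the learned features spatio-temporal.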


--------------------------------------------------------------------------------------------------------------------------------------------------------------------

Misc Work:

Stereo Moving Object Tracking (Using Optical Flow)

Left (observation at each time t): segments determined by running RANSAC multiple times.

       - Different colors indicate different segments.

       - Note that when a moving object is far away, its motion is too small to be segmented out.

Right (final result after merging all observations): colors indicate classes.

           Blue - static;

           Red - moving;

           Green - unknown;

Video shown at 10 frames per second.

--------------------------------------------------------------------------------------------------------

NASA Jet Propulsion Laboratory - Undergraduate Student Research Program (USRP)

Dr. Thomas Lu

09/01/2009 - 12/15/2009

09/2010 

Example targets: jets and missiles, landing sites, small boats.

Automated Target Recognition (ATR): Sponsored by NASA and the DoD, this project addressed the need for a lightweight, fast ATR system that lets autonomous vehicles detect and locate landing sites, jets, missiles, mines, and other targets in real-time video. We first apply a grayscale optical correlator (GOC) based on the OT-MACH algorithm to identify regions of interest (potential targets) in an image; OT-MACH performs its operations in the Fourier domain and computes correlations on an optical correlator. For each region of interest we then apply feature extraction techniques such as Principal Component Analysis (PCA), PCA with whitening, Independent Component Analysis (ICA), and the wavelet transform. Finally, a neural network trained on the extracted features separates true-positive targets from the "noisy" background.
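The correlation step can be sketched as follows. This uses a plain matched filter in the Fourier domain rather than the full OT-MACH filter (which optimizes a trade-off among noise, clutter, and peak-sharpness criteria); the image sizes and threshold are illustrative assumptions.

```python
import numpy as np

def correlate_fourier(image, template):
    """Cross-correlate image with template via the Fourier domain:
    IFFT(F(image) * conj(F(template))). A simplified stand-in for the
    OT-MACH correlator described above."""
    t = np.zeros_like(image, dtype=float)
    th, tw = template.shape
    t[:th, :tw] = template            # zero-pad template to image size
    return np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(t))).real

def detect_rois(image, template, thresh_ratio=0.9):
    """Return (row, col) locations whose correlation is near the peak."""
    corr = correlate_fourier(image, template)
    ys, xs = np.where(corr >= thresh_ratio * corr.max())
    return list(zip(ys, xs))

# Toy example: a bright 3x3 "target" with its top-left corner at (10, 20)
img = np.zeros((64, 64))
img[10:13, 20:23] = 1.0
tmpl = np.ones((3, 3))
rois = detect_rois(img, tmpl)
```

Each returned location would then be cropped out as a region of interest and passed to the feature-extraction stage.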

09/2009 - 12/2009

My main task was to develop a general classifier targeting small boats. The challenges were to train classifiers that work under different weather and lighting conditions, against shoreline backgrounds, and with waves shaped similarly to a boat. I integrated a backpropagation neural network trained with the Levenberg-Marquardt algorithm and a radial basis function network into the ATR system, compared the advantages and disadvantages of each network, and recommended the best ways to train them. My final [Presentation]
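To illustrate the radial basis function approach (a sketch, not the original MATLAB implementation; the centers, width, and toy XOR data below are assumptions): with fixed Gaussian hidden units, the output weights of an RBF network can be fit in closed form by least squares.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF activation of each sample around each center."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma=0.5):
    """Hidden layer is fixed; fit output weights by linear least squares."""
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, w, sigma=0.5):
    return rbf_design(X, centers, sigma) @ w

# Toy check on XOR, a problem a single linear layer cannot solve
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
w = train_rbf(X, y, centers=X)
preds = predict_rbf(X, X, w)
```

Training the backpropagation network instead means iterating Levenberg-Marquardt weight updates; the RBF network trades that iteration for a one-shot linear solve at the cost of choosing good centers.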

09/2010- 12/2010:

    - Improved the ATR system by developing 1) textons to extract texture features and 2) the AdaBoost algorithm (GentleBoost) for classification.

    - ATR accuracy improved over the neural-network baseline.

    - Tested on real-world data (underwater sonar images).

    - Final [Presentation]

Videos of my work on small boats: [Video1] [Video2]
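The GentleBoost classifier can be sketched as follows: each round fits a regression stump to the labels by weighted least squares, adds it to the ensemble, and reweights the examples. This is a minimal NumPy version with illustrative toy data; the real system boosted texton features.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump: pick the (feature, threshold)
    split minimizing weighted squared error, as GentleBoost requires."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] < t
            wl, wr = w[left].sum(), w[~left].sum()
            a = (w[left] * y[left]).sum() / wl if wl > 0 else 0.0
            b = (w[~left] * y[~left]).sum() / wr if wr > 0 else 0.0
            err = (w * (y - np.where(left, a, b)) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, a, b)
    return best

def gentleboost(X, y, rounds=5):
    """y in {-1, +1}. Each round adds a stump; example weights shift toward
    points the current ensemble fits poorly."""
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        j, t, a, b = fit_stump(X, y, w)
        f = np.where(X[:, j] < t, a, b)
        w *= np.exp(-y * f)
        w /= w.sum()
        stumps.append((j, t, a, b))
    return stumps

def predict(stumps, X):
    return np.sign(sum(np.where(X[:, j] < t, a, b) for j, t, a, b in stumps))

# Toy 1-D data, separable at x = 2
X = np.array([[0.], [1.], [2.], [3.]])
y = np.array([-1., -1., 1., 1.])
preds = predict(gentleboost(X, y), X)
```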

Computer Vision Methods for Coral Reef Assessment / Professor Serge Belongie

Summer REU 2010

(http://cvce.ucsd.edu/)

06/2010-09/2010

-          Worked with the Scripps Institution of Oceanography to build computer vision recognition tools for large-scale monitoring of coral reefs.

-          Designed segmentation software to quickly label and identify bleached areas of coral reefs for health assessment. Developed in MATLAB and C; users can segment multiple layers with foreground and background labeling using the GrowCut, random walker, and graph cut algorithms.

-          Developed a coral website and database for Scripps using PHP, MySQL, and JavaScript. (http://cvce.ucsd.edu)

Final Presentation (pdf)

Software:

Download the MATLAB code at the bottom of the page, file name: SegTool.rar

Objective: allow semi-automated multi-layer segmentation and area calculation

Usage: biologists from the Scripps Institution of Oceanography use the program to segment images of coral reefs (the reef itself, its bleached area, and its dead tissue area) and to calculate each area in cm².

- Feature (segmentation): uses the GrowCut algorithm; the user marks foreground and background points to segment the image.
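The GrowCut idea behind the segmentation feature can be sketched as a cellular automaton: labeled pixels "attack" their neighbors with a strength that decays with color difference. The grayscale toy image, seed layout, and iteration cap below are illustrative; the real tool works on color images with more careful neighborhood handling.

```python
import numpy as np

def growcut(image, seeds, iters=50):
    """Minimal GrowCut sketch. image: 2-D grayscale array in [0, 1].
    seeds: int array, 0 = unlabeled, 1 = background, 2 = foreground."""
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)   # seed pixels start fully confident
    H, W = image.shape
    for _ in range(iters):
        changed = False
        for y in range(H):
            for x in range(W):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < H and 0 <= nx < W) or labels[ny, nx] == 0:
                        continue
                    # attack strength decays with intensity difference
                    g = 1.0 - abs(image[y, x] - image[ny, nx])
                    if g * strength[ny, nx] > strength[y, x]:
                        labels[y, x] = labels[ny, nx]
                        strength[y, x] = g * strength[ny, nx]
                        changed = True
        if not changed:
            break
    return labels

# Toy image: dark left half, bright right half, one seed on each side
img = np.zeros((4, 6))
img[:, 3:] = 1.0
seeds = np.zeros((4, 6), dtype=int)
seeds[0, 0], seeds[0, 5] = 1, 2
labels = growcut(img, seeds)
```

Each seed's label floods its own region, and the sharp intensity boundary stops the two fronts from crossing, which is what makes a few user scribbles enough to segment a whole layer.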

--------------------------------------------------------------------------------------------------------

UC San Diego Computer Vision Lab

(a) Feature: adjust contour by moving control points (blue 'X') using the spline tool

(b) Feature: area calculation (and ratio of inner segment over outer segment)

(c) Feature: automatic bleached-area detection

(d) Feature: multiple inner-layer segmentation

 ----------------------------------------------------------------------------------------------------------

Professor Chih-hao Hsieh

06/2009 - 08/2009

        

Automatic plankton recognition: Worked with marine biologists at National Taiwan University. Our goal was to design an automatic plankton recognition system, reducing the tedious labor of manual plankton sorting and helping marine biologists understand plankton diversity and distribution in the ocean for large-scale research. We use ZooScan, a scanner, to image the plankton, and apply image processing techniques to classify plankton into family groups, using feature extraction techniques and a random forest classifier.

My main job was to use OpenCV (an open-source computer vision library) to extract features from the images, and then to run classification in R with several algorithms, the random forest algorithm giving the best results.

Originally: classify once into 28 groups. (ZooScan: a scanner used to image the plankton.)

New plan: dual classification; first classify into two groups (legs and no legs), then into 28 groups.

We found many misclassifications when classifying directly into 28 groups. However, first classifying into legs and no-legs groups, then into 28 groups, improved accuracy. My job was the two-way classification into the legs and no-legs subgroups.

I used various techniques on the plankton images:

- transforming images to the Fourier domain to obtain frequency content

- Hough transform to detect the straight lines that commonly occur along appendages

- based on grayscale, finding the ratio of the perimeter around legs plus body to the perimeter around the body only

- other measurements: body diameter, body length and width, grayscale distribution, etc.

Finding whether a plankton has legs: Hough transform to detect legs; ratio of body perimeter to body-plus-legs perimeter.
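The Hough-transform leg detector can be sketched as a vote accumulator in (rho, theta) space. The project used OpenCV in practice; this NumPy version, its resolution, and its peak threshold are illustrative.

```python
import numpy as np

def hough_lines(binary, n_theta=180, peak_frac=0.5):
    """Every foreground pixel votes for all lines x*cos(t) + y*sin(t) = rho
    passing through it; strongly supported (rho, theta) pairs are returned."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))    # max possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    rs, ts = np.where(acc >= peak_frac * acc.max())
    return [(r - diag, thetas[t]) for r, t in zip(rs, ts)]

# Toy image: a vertical line at x = 5 votes heavily at (rho=5, theta=0)
binary = np.zeros((20, 20), dtype=int)
binary[:, 5] = 1
lines = hough_lines(binary)
```

On a binarized plankton image, long thin appendages show up as exactly this kind of concentrated vote, which is what distinguishes the legs group.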

UCSD Undergraduate Summer Research

Professor Eleazar Eskin

07/2006 - 09/2006

Mining significant chromosomes: Population structure results in false detection of chromosomes responsible for physical traits. To correct errors from population structure, we applied new genome-association techniques [EMMA] to the model plant Arabidopsis thaliana. We developed a website that reads in DNA data, calculates the chromosomes most significantly associated with flowering time, and displays a graph of all significant hits.


---------------------------------------------------------------------------------------------------------

University of California Education Abroad Program, Engineering in Japan, 2008

Professor Masayuki Numao

Intelligent systems for markets and robots: Research on machine learning, particularly reinforcement learning. Applied reinforcement learning to various problems, such as market trend prediction and robot locomotion for obstacle avoidance.

Wrote a simple simulation: a circle represents the robot, which moves around a grid until it reaches the goal (yellow box). The robot must avoid the obstacles (black boxes) while taking the shortest path to the goal. The problem can be found [here] (Java Applet).
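This grid world maps naturally onto tabular Q-learning; a sketch follows, where the grid layout, rewards, and hyperparameters are assumptions, not the original applet's code.

```python
import random
import numpy as np

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(grid, s, a):
    """Move if the target cell is in bounds and not an obstacle (1)."""
    ny, nx = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
    if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
        return (ny, nx)
    return s

def q_learning(grid, start, goal, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning; -1 per step makes the shortest path optimal."""
    Q = np.zeros((len(grid), len(grid[0]), 4))
    rng = random.Random(0)
    for _ in range(episodes):
        s = start
        while s != goal:
            a = rng.randrange(4) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = step(grid, s, a)
            r = 10.0 if s2 == goal else -1.0
            Q[s][a] += alpha * (r + gamma * Q[s2].max() - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, grid, start, goal, max_steps=50):
    path, s = [start], start
    while s != goal and len(path) <= max_steps:
        s = step(grid, s, int(np.argmax(Q[s])))
        path.append(s)
    return path

# 4x4 grid with two obstacles; the shortest start-to-goal path takes 5 moves
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
Q = q_learning(grid, start=(0, 0), goal=(2, 3))
path = greedy_path(Q, grid, (0, 0), (2, 3))
```

After training, the greedy policy walks around the black boxes along a shortest route, which is the behavior the applet's circle robot demonstrates.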