Dr. Kuan-Ting (K.T.) Lai (賴冠廷)
kuantinglai (at) gmail.com

    Research Affiliations:
    1. NetDB, National Taiwan University, Taipei, Taiwan
    2. Research Center for IT Innovation, Academia Sinica, Taiwan
    3. DVMM, Columbia University, New York City, NY, USA    

    My citations and profiles on LinkedIn, ResearchGate, and Blogger:
    www.linkedin.com/in/kuantinglai
    https://www.researchgate.net/profile/Kuan-Ting_Lai
    http://kuantinglai.blogspot.tw/

News

  • 2015/2/18 - My Doctoral Dissertation
    I've passed the oral defense, and the final version of my dissertation can be found here.
    Posted Feb 17, 2015, 8:58 AM by Kuan-Ting Lai
  • 2014/11/4 - Release the code of our CVPR 2014 work
    Better late than never... I've released the code for training and testing video events with one granularity. As explained in our oral presentation at the conference, we now recommend learning an individual classifier for each granularity with pSVM and applying late fusion to get better performance.
    Posted Nov 13, 2014, 7:17 PM by Kuan-Ting Lai
  • 2014/7/26 - Release my oral slides on CVPR 2014
    The file can be downloaded from here. It's a bit large (>30MB).
    Posted Jul 26, 2014, 3:44 AM by Kuan-Ting Lai
  • 2014/7/26
    My new homepage is now open. I will release code from my recent publications soon.
    Posted Jul 26, 2014, 3:37 AM by Kuan-Ting Lai
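The per-granularity recommendation above can be sketched as follows: train one classifier per feature granularity and average their scores at test time. This is a minimal late-fusion sketch using scikit-learn's plain SVC as a stand-in for pSVM, on synthetic data (all names and data here are hypothetical, not from the released code):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical data: one feature matrix per granularity for the same videos.
rng = np.random.default_rng(0)
n_videos = 40
labels = rng.integers(0, 2, n_videos)
granularities = [rng.normal(size=(n_videos, 16)) for _ in range(3)]

# Train one classifier per granularity (SVC stands in for pSVM here).
classifiers = [SVC(probability=True).fit(X, labels) for X in granularities]

# Late fusion: average the per-granularity positive-class scores.
scores = np.mean(
    [clf.predict_proba(X)[:, 1] for clf, X in zip(classifiers, granularities)],
    axis=0,
)
predictions = (scores > 0.5).astype(int)
```

Averaging is the simplest fusion rule; learned per-classifier weights are a common refinement.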

Publications

  • "Recognizing Complex Events in Videos by Learning Key Static-Dynamic Evidences," ECCV, 2014 - Kuan-Ting Lai, Dong Liu, Ming-Syan Chen, Shih-Fu Chang. ABSTRACT: Complex events consist of various human interactions with different objects in diverse environments. The evidences needed to recognize events may occur in short time periods with variable lengths and can happen anywhere in a video. This fact prevents conventional machine learning algorithms from effectively recognizing the events. In this paper, we propose a novel method that can automatically identify the key evidences in videos for detecting complex events. Both static instances (objects) and dynamic instances (actions) are considered by sampling frames and temporal segments respectively. To compare the characteristic power of heterogeneous instances, we embed static and dynamic instances into a multiple instance learning framework via instance similarity ...
    Posted Jul 26, 2014, 6:49 AM by Kuan-Ting Lai
  • "Video Event Detection by Inferring Temporal Instance Labels," CVPR, 2014 - Kuan-Ting Lai, Felix X. Yu, Ming-Syan Chen, Shih-Fu Chang. ABSTRACT: Video event detection allows intelligent indexing of video content based on events. Traditional approaches extract features from video frames or shots, then quantize and pool the features to form a single vector representation for the entire video. Though simple and efficient, the final pooling step may lead to loss of temporally local information, which is important in indicating which part in a long video signifies presence of the event. In this work, we propose a novel instance-based video event detection approach. We represent each video as multiple "instances", defined as video segments of different temporal intervals. The objective is to learn an instance-level event detection ...
    Posted Jul 26, 2014, 7:24 AM by Kuan-Ting Lai
  • "Sample-Specific Late Fusion for Visual Category Recognition," CVPR, 2013 - Dong Liu, Kuan-Ting Lai, Ming-Syan Chen, Shih-Fu Chang. ABSTRACT: Late fusion addresses the problem of combining the prediction scores of multiple classifiers, in which each score is predicted by a classifier trained with a specific feature. However, the existing methods generally use a fixed fusion weight for all the scores of a classifier, and thus fail to optimally determine the fusion weight for the individual samples. In this paper, we propose a sample-specific late fusion method to address this issue. Specifically, we cast the problem into an information propagation process which propagates the fusion weights learned on the labeled samples to individual unlabeled samples, while enforcing that positive samples have higher fusion scores than negative samples ...
    Posted Jul 26, 2014, 6:50 AM by Kuan-Ting Lai
  • "Human Action Recognition Using Key Points Displacement," ICISP, 2010 - Kuan-Ting Lai, Chaur-Heh Hsieh, Mao-Fu Lai, Ming-Syan Chen. ABSTRACT: Recognizing human actions is currently one of the most active research topics. Efros et al. first proposed using optical flow and normalized correlation to recognize distant actions. One weakness of the method is that optical flow is too noisy and cannot reveal the true motions; the other popular method is the space-time-interest-points proposed by Laptev et al., who extended the Harris corner detector to temporal domain. Inspired by the two methods, we proposed a new algorithm based on displacement of Lowe’s scale-invariant key points to detect motions. The vectors of matched key points are calculated as weighted orientation histograms and then classified ...
    Posted Jul 26, 2014, 6:53 AM by Kuan-Ting Lai
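The ICISP 2010 abstract above describes turning the displacement vectors of matched key points into magnitude-weighted orientation histograms. A minimal sketch of that step, using hypothetical matched point coordinates (the function name and data are illustrative, not from the paper's code):

```python
import numpy as np

def displacement_histogram(pts_prev, pts_next, n_bins=8):
    """Bin the displacement vectors of matched key points into an
    orientation histogram weighted by displacement magnitude."""
    d = np.asarray(pts_next, float) - np.asarray(pts_prev, float)
    angles = np.arctan2(d[:, 1], d[:, 0])       # orientation of each motion vector
    magnitudes = np.hypot(d[:, 0], d[:, 1])     # weight = motion strength
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitudes)
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalize for comparability

# Hypothetical matched SIFT key points in two consecutive frames.
prev_pts = [(10, 10), (20, 15), (30, 30)]
next_pts = [(12, 10), (22, 15), (30, 33)]
h = displacement_histogram(prev_pts, next_pts)
```

Normalizing by total magnitude makes histograms comparable across frame pairs with different overall motion.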

Research Highlights

  • Learning Key Evidences for Detecting Complex Events in Videos - Video event detection is one of the most important, yet very challenging, research topics in computer science. The recognition of complex events, e.g. “birthday party”, “wedding ceremony” or “attempting a bike trick”, is even more difficult, since complex events consist of various human interactions with different objects in diverse environments over variable time intervals. Currently the most common approach is to extract features from frames or video clips, and then to quantize and pool these features to form a single vector representation for the entire video. While this method is simple and efficient, the final pooling step may lead to the loss of temporally local information and include many irrelevant features from noisy backgrounds. To approach this problem in ...
    Posted Feb 17, 2015, 8:53 AM by Kuan-Ting Lai
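The highlight above argues that pooling features over the entire video dilutes temporally local evidence. A toy sketch of the contrast with the instance-based view, where the video is split into segments and the event score is the maximum over segment scores (the linear detector and the data are illustrative stand-ins, not the paper's model):

```python
import numpy as np

def video_score_pooled(video_features, w):
    # Conventional approach: average-pool segment features first, then score;
    # a short burst of evidence is diluted by the rest of the video.
    return video_features.mean(axis=0) @ w

def video_score_instance(video_features, w):
    # Instance-based view: score each temporal segment ("instance")
    # individually and keep the max, so evidence confined to one
    # segment still fires the detector.
    return (video_features @ w).max()

# Toy example: 10 segments of 4-d features, evidence only in segment 3.
features = np.zeros((10, 4))
features[3] = [1.0, 1.0, 1.0, 1.0]
w = np.ones(4)

pooled = video_score_pooled(features, w)      # diluted to 0.4
instance = video_score_instance(features, w)  # preserved at 4.0
```

With one evidential segment out of ten, the pooled score is a tenth of the instance score, which is exactly the dilution the highlight describes.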

KT's Technical Blog