Tae Eun Choe

Tae Eun Choe (Korean: 최태은, Chinese: 崔泰銀, Japanese katakana: チェテウン)

Tae Eun Choe is a staff computer vision scientist at Tesla Motors. He earned his Ph.D. in Computer Science from the University of Southern California in 2007. He has broad experience in computer vision, video codecs, and wireless networks. His research interests include 3-D reconstruction, object detection and tracking, scene understanding, text and video analysis and fusion, and mobile computing. Previously, he led multiple projects at ObjectVideo. In the Automatic Scene Understanding project (DoD SBIR NAVY OSD09-SP3, Phase I and II), he developed a framework to handle multi-modal queries (keyword, plain text, video clip, SQL, SPARQL) and retrieve relevant videos in distributed network systems. For Mathematics of Sensing, Exploitation, and Execution (DARPA BAA Phase I, II, and III), he developed a unified framework to fuse multi-modal data, including images, video, SIGINT, and unstructured data, by representing them in an And-Or graph and converting the graph into a natural-language text report. He is currently developing a fully autonomous Autopilot system at Tesla Motors.

Research Interests

  • Computer vision, machine learning, multimedia, computer graphics
  • 3-D reconstruction, big data analysis, video event search, medical imaging, data mining, mobile computing, cloud computing
Education
  • Ph.D. Computer Science, University of Southern California                    Aug 2007
  • M.S. Computer Science, University of Southern California                     Dec 2004
  • M.S. Computer Science, Pohang University of Science and Technology, S. Korea Feb 1998
  • B.E. Computer Engineering, Pusan National University, S. Korea
    (Summa cum laude)                                                            Feb 1996
Work Experience

ObjectVideo Inc. (Reston, VA)
Principal Research Scientist                                          2007 – Present

  • Automated Scene Understanding (DoD SBIR NAVY OSD10-L04, Phase II), Principal Investigator
    Developing methods for complex event modeling and recognition using Markov Logic Networks.
  • Mathematics of Sensing, Exploitation, and Execution (DARPA BAA Phase I, II and III), Co-Principal Investigator
    Studying a unified mathematical foundation of representation, inference and learning for Sensing, Exploitation and Execution (SEE).
  • Automatic Scene Understanding (DoD SBIR NAVY OSD09-SP3, Phase I and II), Principal Investigator
    Developing a framework to handle multi-modal queries (keyword, plain text, video clip, SQL, SPARQL) and retrieve relevant videos in distributed network systems.
  • Context-sensitive Content Extraction and Scene Understanding (DoD SBIR NAVY, N07-085, Phase II), Principal Investigator
    Developed a system to automatically extract scene context using semantic ontology and  describe the scene in a natural language text.
  • Real-time video tracking of multiple moving targets (DoD STTR ARMY A07-T007, Phase II), Principal Investigator
    Led a project on optimal multiple-target tracking across multiple cameras in real time.
  • Intelligent Retrieval of Surveillance Imagery (Office of Naval Research)
    Developed algorithms for vehicle fingerprinting and cross-camera tracking.
  • Maritime video surveillance (Office of Naval Research – Future Naval Capabilities)
    Developed key algorithms in a real-time detection and tracking system for maritime scenes.

University of Southern California (Los Angeles, CA)
Research Assistant                                                    2001 – 2007

  • 3-D Euclidean reconstruction and registration of near-planar surface 
    (NIH Grant No. R21 EY015914-03)
    Proposed a method to reconstruct a near-planar surface with very small depth variations, such as a retinal fundus, a building façade, or satellite imagery of terrain. The near-planar surface is a well-known degenerate case for Euclidean reconstruction. A three-step bundle adjustment algorithm was proposed to estimate camera parameters and reconstruct a dense 3-D near-planar surface. Subsequently, the images are back-projected onto the reconstructed 3-D structure for more accurate registration of the images.

  • 3-D building detection and reconstruction (NIMA Grant)
    Developed a system to model buildings in 3-D from aerial and ground-view images. 
    An algorithm to detect and remove obstacles in front of the building façade from ground-view images was implemented.

Pohang University of Science and Technology (Pohang, S. Korea)
Research Associate (Project Leader)                                   2000 – 2001

  • 3-D building reconstruction from satellite images 
    Built a system for detecting and reconstructing man-made structures from satellite images.
  • Model-based 3-D face reconstruction 
    Developed a system to reconstruct a 3-D human face from frontal and side images.

Research Assistant

1996 – 1998

  • Stereo matching  
    Developed an intensity- and feature-based stereo matching algorithm to reconstruct 3-D objects such as human faces.
  • Autonomous navigation system for an unmanned vehicle (funded by Hyundai Motors)
    Participated in a project to develop algorithms that detect traffic lanes and traffic signals in real time.

LG Electronics (Seoul, South Korea)
Research Assistant                                                    1998 – 2000

  • Video codec standard for H.263+
    Developed a de-blocking filter for the video codec; the filter was adopted into the H.263+ and H.264 standards and received a best paper award.
  • LG cellular phone for Sprint 
    Developed the SMS (Short Message Service) and call-processing components of LG Electronics' first CDMA mobile phone for the United States market.
 Publications 

    Book Chapters

    • Asaad Hakeem, Himaanshu Gupta, Atul Kanaujia, Tae Eun Choe, Kiran Gunda, Andrew Scanlon, Li Yu, Zhong Zhang, Peter Venetianer, Zeeshan Rasheed, and Niels Haering, "Video Analysis for Business Intelligence," Springer Berlin Heidelberg, pp. 309–354, 2012.
    • Zeeshan Rasheed, Khurram Shafique, Li Yu, Mun Wai Lee, Krishnan Ramnath, Tae Eun Choe, Omar Javed, and Niels Haering, "Distributed Sensor Networks for Visual Surveillance," in Distributed Video Sensor Networks, Eds.: B. Bhanu, C.V. Ravishankar, A.K. Roy-Chowdhury, H. Aghajan, and D. Terzopoulos, Springer, 2011.

    Conference Papers

    Presentation

    Patents

    • Tae Eun Choe, Gerard Medioni, "3-D reconstruction and registration," US Patents 8,401,276 and 9,280,821
    • Tae Eun Choe, Sung Chul Kang, "Apparatus and method for transmitting call holding message in mobile communication terminal," US Patents 6,782,252 and 7,551,900
    • Atul Kanaujia, Tae Eun Choe, Hongli Deng, "Complex Event Recognition in a Sensor Network," US Patent Application 2015/0279182
    • Atul Kanaujia, Narayanan Ramanathan, Tae Eun Choe, "System and Method for Identifying Faces in Unconstrained Media," US Patent Application 2015/0178554
    • Minwoo Park, Tae Eun Choe, W. Andrew Scanlon, M. Allison Beach, Gary W. Myers, "Systems and methods for processing crowd-sourced multimedia items," US Patent Application 2015/0066919
    • Tae Eun Choe, Hongli Deng, Mun Wai Lee, Feng Guo, "Graph matching by sub-graph grouping and indexing," US Patent Application 2014/0324864
    • Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering, "Automatic event detection, text generation, and use thereof," US Patent Application 2013/0129307

    Teaching Experience


    University of Southern California (Los Angeles, CA)
    Teaching Assistant
    • Advanced Artificial Intelligence (CS573)                                       Spring 2004
    • Computer Vision (CS574)                                                                Fall 2003
    • Computer Graphics (CS480)                                       Fall 2002, Spring 2003

    Pohang University of Science and Technology (Pohang, S. Korea)
    Teaching Assistant
    • Computer Architecture (CS311), taught assembly language                          Fall 1996

    Awards and Honors

    • Marquis Who’s Who in America                                                                   2009–
    • USC Teaching and Research Assistantship                        May 2002 – Aug 2007
    • Outstanding Academic Achievements Award from USC (PhD)              May 2007
    • Outstanding Academic Achievements Award from USC (MS)               May 2004
    • Scholarship from the Korean Government (4 years, $60,000)    Sep 2001 – Aug 2005
    • The Best Paper Award in Korea Broadcasting Conference                      Oct 1998
    • LG Electronics Scholarship                                                   Mar 1996 – Feb 1998
    • Student Scholarship from POSTECH                                    Mar 1996 – Feb 1998
    • Premium Award for Academic Excellence                                                Feb 1996
             For summa cum laude, Pusan National University 
    • The 3rd prize in National Software Contest as a group,                           Aug 1994
             Hyundai Electronics
    • The 5th prize in National Software Contest as an individual,                    Aug 1994
             Hyundai Electronics
    • Student Scholarship from Pusan National University             Aug 1992 – Feb 1996

    Languages

    • English, Japanese, Korean: fluent

    Technical Skills    

    • Programming languages: C/C++, MATLAB, C#, LISP, Java, JavaScript, PHP, Python, HTML5, Perl, PASCAL, Visual Basic, x86 Assembly Language

    Services 

    • Reviewer of ICCV, CVPR, ECCV, and top-tier computer vision journals, 2007-
    • Contributed to the design of government standards such as the Motion Imagery Standards Board (MISB), Universal Core (UCore), and the DoD Discovery Metadata Specification (DDMS)
    • Program committee of IAPR International Conference on Machine Vision Applications (MVA) 2013, 2015. 
    • Finance Chair of Korea Institute of Communication Sciences Workshop on Image Processing and Understanding, 2001
    • Committee member of ITU-T Video Coding Experts Group (H.263+/H.26L), 1998