Showcase: Computer Vision

Semi-automatic Centerline Extraction Framework

2014

Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. 

This project implements semi-automatic centerline extraction methods. The radiologist places two seed points (start and end); the centerline is then extracted interactively using a medialness image [1][2] and a minimum-cost path algorithm [3]. All code has been developed entirely in C++, using the ITK and VTK libraries and the medical imaging framework CreaTools [4]. Further medialness techniques are currently being implemented in order to compare them.
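As an illustration of the extraction step, the minimum-cost path can be sketched as a Dijkstra-style search over a cost map derived from the medialness image (high medialness means low traversal cost). The Python sketch below is purely illustrative, not the project's ITK/VTK implementation; the 2D toy grid and the function name are hypothetical:

```python
import heapq

def minimum_cost_path(cost, start, end):
    """Dijkstra shortest path on a 2D cost grid (4-connected).

    cost[r][c] is the price of stepping onto cell (r, c); lower
    values correspond to higher medialness (closer to the vessel axis).
    Returns the list of (row, col) cells from start to end.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from end to start to recover the path.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost map: the low-cost "vessel" runs along the middle row.
cost = [
    [9, 9, 9, 9, 9],
    [1, 1, 1, 1, 1],
    [9, 9, 9, 9, 9],
]
path = minimum_cost_path(cost, (1, 0), (1, 4))
```

In the real pipeline the cost map comes from the 3D medialness filter and the search runs on the CTA volume between the two seed points; the principle is the same.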

References

[1] Esteban Correa-Agudelo, Leonardo Flórez-Valencia, Maciej Orkisz, Claire Mouton, Eduardo E. Dávila Serrano, Marcela Hernández Hoyos, "A Modular Workflow Architecture for Coronary Centerline Extraction in Computed Tomography Angiography Data", ICCVG 2014, Warsaw, Poland. (doi:10.1007/978-3-319-11331-9_19).

[4] CreaTools http://www.creatis.insa-lyon.fr/site/en/CreaTools_home

Further information:

http://www.creatis.insa-lyon.fr/

Master Thesis: Chapter 3

cvTraffic: A Computer Vision Based Tool for Traffic Analysis

2014

This project aims at creating a computer vision-based framework for traffic analysis. Vehicle counting and tracking are based on a MoG segmentation strategy [1] and an overlap heuristic on the bounding boxes of foreground elements. The classification module uses the BoW+SVM approach [2] to separate vehicle classes. Statistics are available in offline/online scenarios through a publisher-subscriber API (ZeroMQ). Current work focuses on real-time optimization (currently running at 33-34 fps on a fanless micro-ITX system) and an ANPR (automatic number-plate recognition) detection module.
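The bounding-box overlap heuristic can be illustrated with a minimal sketch: each existing track is matched to the new detection whose box overlaps it most (intersection over union), and unmatched detections spawn new tracks. This is an illustrative simplification, not the cvTraffic code; the function names and threshold are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track to the detection with the highest
    overlap above the threshold; leftover detections become new tracks."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_j = threshold, None
        for j, dbox in enumerate(detections):
            if j not in used and iou(tbox, dbox) >= best:
                best, best_j = iou(tbox, dbox), j
        if best_j is not None:
            matches[tid] = best_j
            used.add(best_j)
    new = [j for j in range(len(detections)) if j not in used]
    return matches, new
```

A counting line then only needs to watch when a track's box center crosses it; the greedy matching keeps per-frame cost low, which matters for the real-time budget mentioned above.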

References

[1] Z. Zivkovic, F. van der Heijden, "Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction", Pattern Recognition Letters, vol. 27, no. 7, pp. 773-780, 2006. The algorithm is similar to the standard Stauffer & Grimson algorithm, with additional selection of the number of Gaussian components based on: Z. Zivkovic, F. van der Heijden, "Recursive Unsupervised Learning of Finite Mixture Models", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 651-656, 2004.

[2] G. Csurka, C. R. Dance, L. Fan, J. Willamowski, C. Bray, "Visual Categorization with Bags of Keypoints", 2004.

[3] http://sirius.utp.edu.co

Basics of tracking: Pedestrian Tracking using Camshift in OpenCV

2012

The task of pedestrian tracking is a key component of video surveillance and monitoring systems. In this video, I experiment with the meanshift algorithm [1] for tracking a pedestrian, combining it with a Kalman filter [3]. First, the current frame is transformed from RGB to the CIE 1976 (L*a*b*) color space to distinguish the target from the background and other pedestrians. Second, the adaptive meanshift (Camshift) algorithm [2] calculates the window size and the position of the person. Finally, the Kalman filter helps avoid Camshift failure under partial occlusion. Experiments on various video sequences show that the proposed algorithm performs better than the Camshift approach alone.
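To illustrate the role of the Kalman filter: when Camshift loses the target under occlusion, a constant-velocity Kalman filter can keep predicting the window position until the measurement returns. Below is a minimal per-axis sketch in pure Python (one instance per coordinate); the class name, noise values, and structure are illustrative, not the OpenCV code used in the video:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the
    tracking window center (run one instance per axis).

    State: (position, velocity). When Camshift loses the target,
    the predict step alone extrapolates the motion."""

    def __init__(self, pos, q=1e-2, r=1.0):
        self.x = [pos, 0.0]                 # state estimate
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + dt * v, v]
        p = self.p
        # P = F P F^T + Q  with F = [[1, dt], [0, 1]]
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        self.p = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # z is the window position reported by Camshift (H = [1, 0]).
        p = self.p
        s = p[0][0] + self.r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - self.x[0]
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        return self.x[0]
```

During occlusion, only `predict` is called, so the window coasts along the last estimated velocity instead of collapsing onto the occluder's histogram.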

References

[1] D. Comaniciu and P. Meer, “Robust Analysis of Feature Spaces: Color Image Segmentation,” CVPR’97, pp. 750-755.

[2] Gary R. Bradski, "Computer Vision Face Tracking For Use in a Perceptual User Interface", Intel Technology Journal, No. Q2. (1998).

[3] R. E. Kalman. "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME – Journal of Basic Engineering, No. 82 (Series D). (1960), pp. 35-45.

Stereo imaging using XNA + Kinect

2011

At the end of 2011, I teamed up with researchers from the Uniandes visual computing group Imagine (Colombia) and a local mobile network operator (Tigo) to develop an interactive kiosk for advertising purposes. In the course of this project, I programmed the stereoscopic visualization in the 3D engine (XNA). I was also involved in the semiotics of the user interface and in gesture interaction with novel devices (Microsoft Kinect sensor and PhaseSpace motion capture).

HCI - Two-Handed Interaction in Virtual Environments 

2011

Our interest in this project is to show the development of Mixed Reality experiences, in which interaction with novel input and output devices must be taken into account. However, integrating hardware solutions requires software technologies that hide part of their complexity and facilitate development. InTml [1][2] targets these goals. We implemented a common two-handed 3D interaction technique to show some results from our preliminary user studies and lessons learned [3]. All the code is developed in Java, and the connection to the devices is handled by a VRPN server.

References

[1] Walter F. Bischof, Pierre Boulanger, James Hoover, Robyn Taylor, and Pablo Alejandro Figueroa Forero. "InTml: A Dataflow Oriented Development System for Virtual Reality Applications". Presence: Teleoperators and Virtual Environments. October 2008, Vol. 17, No. 5, Pages 492-511.

[2] Pablo Alejandro Figueroa Forero. "Insights on the Design of InTml". Presence: Teleoperators and Virtual Environments. April 2010, Vol. 19, No. 2, Pages 118-130.

[3] Pablo Figueroa, Santiago Gil, Raúl Oses, Juan Toro, Catalina Rodriguez, Christian Benavides and Esteban Correa, "Visual Programming for Virtual Reality Applications Based on InTml", SBC Journal on 3D Interactive Systems, vol 3, No 1, 2012.

Further information:

http://sourceforge.net/projects/intml/

Prof. Pablo Figueroa <pfiguero@uniandes.edu.co>

iPuppet: Interactive Puppet Show


2011

iPuppet is an interactive puppetry system for children between 4 and 8 years old. Going beyond the classic relationship between puppets and their public, iPuppet [1] lets the puppet respond to the audience's active participation. We followed a User-Centered Design process with about 20 children, who helped identify the requirements and possibilities of this new interface. In general, the children were enthusiastic and always willing to try new activities with iPuppet. The system is built on a teleoperated puppet, a puppet theatre with a rear-projection screen, and a Microsoft Kinect® camera.

References:

[1] Correa E., Fernandez D., Figueroa P., "iPuppet: Interactive Puppet Show". Whole Body Interaction in Games and Entertainment Workshop, ACE2011. November 2011. 

Further information:

http://imagine.uniandes.edu.co/

Prof. Pablo Figueroa <pfiguero@uniandes.edu.co>

Hello world in OpenCV

2009

Multitouch screen using OpenCV

2009