Projects
Deep Learning for Perception
DPDB-Net: Exploiting Dense Connections for Convolutional Encoders, ICRA 2018.
Topometric Localization with Deep Learning
Paper accepted at the International Symposium on Robotics Research (ISRR). arXiv version available at: Arxiv_paper
Efficient Deep Models for Monocular Road Segmentation
We proposed a new architecture, called Fast-Net, that achieves near-real-time performance.
Deep Learning for Human Part Segmentation
We are investigating CNN architectures to perform human part segmentation.
Completed Projects
View Planning for Cloud-Based Active Object Recognition
We investigated the possibility of enabling cloud-based object recognition by carefully planning the viewpoints.
Real-Time Action Recognition
We proposed Space-Time Occupancy Patterns, a new visual representation for 3D action recognition from sequences of depth maps, which forms the basis for real-time human-robot interaction.
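The core idea of an occupancy-pattern representation can be sketched as follows: partition the space-time volume of a depth-map sequence into a coarse 4D (x, y, z, t) grid and record which cells are occupied. This is an illustrative sketch with made-up grid sizes and depth range, not the exact parametrization used in our work.

```python
import numpy as np

def space_time_occupancy(depth_maps, grid=(4, 4, 4, 3), depth_range=(0.0, 4.0)):
    """Binary 4D occupancy pattern from a depth-map sequence.

    depth_maps: array of shape (T, H, W), depth in meters (0 = invalid).
    grid: number of cells along (x, y, z, t); the values here are an
    illustrative choice, not the published parametrization.
    """
    T, H, W = depth_maps.shape
    gx, gy, gz, gt = grid
    zmin, zmax = depth_range
    counts = np.zeros(grid, dtype=np.int64)
    # Map each frame index to a temporal cell, each pixel to an (x, y) cell.
    t_idx = np.minimum((np.arange(T) * gt) // T, gt - 1)
    ys, xs = np.mgrid[0:H, 0:W]
    cx = np.minimum((xs * gx) // W, gx - 1)
    cy = np.minimum((ys * gy) // H, gy - 1)
    for t in range(T):
        d = depth_maps[t]
        valid = (d > zmin) & (d < zmax)
        # Depth value selects the z cell.
        cz = np.minimum(((d - zmin) / (zmax - zmin) * gz).astype(int), gz - 1)
        np.add.at(counts,
                  (cx[valid], cy[valid], cz[valid],
                   np.full(valid.sum(), t_idx[t])),
                  1)
    return (counts > 0).astype(np.uint8)  # 1 where the space-time cell is occupied
```

Flattening the resulting binary grid yields a fixed-length descriptor that a classifier can consume; the cheap per-frame work is what makes a real-time pipeline plausible.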
ROS TurtleBot-Like Construction
This project developed a ROS TurtleBot-like robotic platform, which we call Michelangelo. The video below shows the first configuration obtained.
The second video shows our new configuration performing autonomous navigation. The map was created using GMapping, provided by ROS. Based on the Willow Garage specification, we built a power and gyro circuit to reach the capabilities necessary to use the ROS mapping and navigation nodes.
RGB-D Binary Descriptor
We designed accurate and meaningful object-based maps of indoor environments. A variety of sensors can be employed for this task, such as lasers and/or cameras; however, the recent introduction of fast and inexpensive RGB-D sensors (RGB meaning trichromatic intensity information and D meaning depth) makes the integration of synchronized intensity (color) and depth data feasible. We focus on how to represent geometrical information at a higher level of abstraction.
Nascimento, E.; Oliveira, G. L.; Vieira, A. W.; Campos, M. Improving Object Detection and Recognition for Semantic Mapping with an Extended Intensity and Shape Based Descriptor. In: IROS 2011 Workshop on Active Semantic Perception and Object Search in the Real World (ASP-AVS-11), San Francisco, 2011. Proc. IROS Workshop ASP-AVS-11, 2011. PDF
Experiments with image reconstruction were performed to show the robustness of the method.
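The fusion of intensity and shape cues in a binary descriptor can be illustrated with a BRIEF-style sketch: each random pixel-pair comparison contributes one bit from the intensity patch and one from the depth patch. This is a hedged illustration of the general idea, not the descriptor published in the paper.

```python
import numpy as np

def rgbd_binary_descriptor(intensity_patch, depth_patch, n_tests=128, seed=0):
    """BRIEF-style bit string fusing photometric and geometric cues.

    Each of n_tests random pixel pairs yields two bits: one from the
    intensity channel, one from the depth channel. Illustrative sketch
    only -- not the exact descriptor from the paper.
    """
    assert intensity_patch.shape == depth_patch.shape
    h, w = intensity_patch.shape
    rng = np.random.default_rng(seed)  # fixed seed => comparable descriptors
    pairs = rng.integers(0, h * w, size=(n_tests, 2))
    a, b = pairs[:, 0], pairs[:, 1]
    flat_i = intensity_patch.ravel().astype(float)
    flat_d = depth_patch.ravel().astype(float)
    bits_i = flat_i[a] < flat_i[b]   # photometric comparisons
    bits_d = flat_d[a] < flat_d[b]   # shape (depth) comparisons
    return np.concatenate([bits_i, bits_d]).astype(np.uint8)

def hamming(d1, d2):
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```

Binary descriptors like this are matched with the Hamming distance, which is cheap enough for online semantic-mapping pipelines.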
Underwater Visual SLAM
The use of Autonomous Underwater Vehicles (AUVs) for visual inspection tasks is a promising robotic field. The images captured by the robots can also aid their localization and navigation. In this context, this project proposed an approach to the localization and mapping problem for underwater vehicles. Assuming the use of inspection cameras, the proposal is composed of two stages: first, computer vision with the SIFT algorithm to extract features from underwater image sequences; second, the development of topological maps for localization and navigation. The integration of these systems permits simultaneous localization and mapping of the environment. A set of tests with real robots was performed, considering online operation and performance issues. The results reveal an accurate approach that is robust to varied bottom conditions, illumination, and noise, leading to a promising and original SLAM technique.
Botelho, S. S. C.; Oliveira, G. L.; Haffele, C.; Figueiredo, M.; Drews, P. Self-Localization and Mapping for Underwater Autonomous Vehicles. In: Hanafiah Yussof (Ed.), Robot Localization and Map Building. Vienna: In-Tech Press, 2009, v. 1. PDF
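The feature-matching stage of such a pipeline is commonly implemented with Lowe's ratio test over SIFT descriptors: a match is kept only when the nearest neighbor is clearly closer than the second nearest. The sketch below shows that test on generic float descriptors; it is an illustration of the standard technique, not code from the project.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test.

    desc_a, desc_b: (Na, D) and (Nb, D) float descriptor arrays
    (e.g. 128-D SIFT vectors). Returns (index_a, index_b) pairs.
    Illustrative sketch of the matching stage only.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Keep only unambiguous matches: best clearly beats second best.
        if row[best] < ratio * row[second]:
            matches.append((i, int(best)))
    return matches
```

Filtering ambiguous matches this way is what makes feature-based localization tolerant of the repetitive textures and noise typical of sea-bottom imagery.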
Catadioptric Visual Odometry
We propose a method for visual odometry using optical flow with a single omnidirectional (catadioptric) camera. We show how omnidirectional images can be used to compute optical flow, discussing the basics of optical flow, some restrictions it requires, and how to unwarp these images. In particular, we describe how to unwarp omnidirectional images to a bird's-eye view, which corresponds to a scaled orthographic view of the ground plane. Catadioptric images facilitate landmark-based odometry, since landmarks remain visible for a longer time than with a standard small-field-of-view camera. We also provide adequate representations to support visual odometry with fast processing times.
Oliveira, G. L.; Nascimento, E.; Campos, M. Visual Odometry with Omnidirectional Images. In: Simpósio Brasileiro de Automação Inteligente (SBAI), São João del-Rei, MG, 2011. Anais do Simpósio Brasileiro de Automação Inteligente, 2011. PDF
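A bird's-eye unwarp works by inverse mapping: each output pixel's ground-plane angle and distance determine where to sample the omnidirectional image. The sketch below assumes, for simplicity, a linear relation between ground distance and image radius; a real catadioptric rig needs the mirror's calibrated radial profile instead, and the function name and parameters are illustrative.

```python
import numpy as np

def birds_eye_unwarp(omni, center, r_min, r_max, out_size=200):
    """Unwarp an omnidirectional image to a bird's-eye ground-plane view.

    center: (row, col) of the mirror center in the omni image.
    r_min, r_max: usable radial band of the mirror image, in pixels.
    Assumes a linear ground-distance-to-radius model (a simplification)
    and nearest-neighbor sampling to keep the sketch short.
    """
    cy, cx = center
    half = out_size // 2
    ys, xs = np.mgrid[-half:half, -half:half].astype(float)
    rho = np.hypot(xs, ys) / half        # normalized ground distance
    theta = np.arctan2(ys, xs)           # ground-plane bearing
    # Linear radial model (assumption): distance 0..1 -> radius r_min..r_max.
    r = r_min + np.clip(rho, 0, 1) * (r_max - r_min)
    src_x = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, omni.shape[1] - 1)
    src_y = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, omni.shape[0] - 1)
    return omni[src_y, src_x]
```

Because the output is a scaled orthographic view of the ground plane, translation of the robot appears as near-uniform image translation, which simplifies the optical-flow-based odometry step.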
Final Course Work
Monocular visual odometry: investigated applications of monocular visual SLAM and tested a neural-network-based technique called RatSLAM.
Reducing planar rendezvous to 1D search.
Human segmentation using stochastic background subtraction, based on the ViBe algorithm.
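ViBe keeps a per-pixel model of past background samples and classifies a pixel as background when enough stored samples lie close to its current value; background pixels then stochastically refresh the model. The grayscale sketch below illustrates that classify/update cycle under simplified assumptions (the full algorithm also propagates updates to neighboring pixels).

```python
import numpy as np

def vibe_classify(frame, samples, radius=20, min_matches=2):
    """ViBe-style test: background if >= min_matches stored samples are
    within `radius` intensity units of the current pixel value.

    frame: (H, W) grayscale frame; samples: (N, H, W) per-pixel model.
    Returns a mask with 1 = foreground. Simplified sketch of ViBe.
    """
    close = np.abs(samples.astype(int) - frame.astype(int)) < radius
    matches = close.sum(axis=0)
    return (matches < min_matches).astype(np.uint8)

def vibe_update(frame, samples, fg_mask, rng, subsampling=16):
    """Stochastically refresh background pixels' models in place.

    Each background pixel has a 1/subsampling chance of overwriting one
    randomly chosen sample slot with its current value.
    """
    n = samples.shape[0]
    lucky = (rng.integers(0, subsampling, size=frame.shape) == 0) & (fg_mask == 0)
    slot = rng.integers(0, n, size=frame.shape)
    for k in range(n):
        sel = lucky & (slot == k)
        samples[k][sel] = frame[sel]
```

The random, conservative update is what lets ViBe adapt to gradual illumination changes without absorbing a standing person into the background model.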