Background

Towards Human-Aware Path Planning

Part I - Partial Motion Flow (work in progress)

We capture partial information about the flow of people from video frames provided by cameras positioned in the environment. The observed flow is then used to predict how people will move in the near future. In future work, the current and predicted flows will feed a generalized costmap that guides path planning through regions where the robot has a lower impact on the people around it.
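The idea of aggregating per-camera observations into a flow estimate can be sketched as follows. This is a minimal illustration, not the method from the paper: the grid size, cell size, and observation format are assumptions made for the example.

```python
import numpy as np

# Minimal sketch: accumulate sparse velocity observations (e.g. from
# per-camera people tracks) into a coarse grid of average flow vectors.
# Grid and cell sizes are illustrative choices, not the representation
# used in the actual work.

GRID = (10, 10)   # cells (rows, cols)
CELL = 1.0        # cell edge length in metres

def build_flow_map(observations):
    """observations: list of (x, y, vx, vy) in world coordinates."""
    sums = np.zeros(GRID + (2,))
    counts = np.zeros(GRID)
    for x, y, vx, vy in observations:
        i, j = int(y // CELL), int(x // CELL)
        if 0 <= i < GRID[0] and 0 <= j < GRID[1]:
            sums[i, j] += (vx, vy)
            counts[i, j] += 1
    # average flow per cell; cells without observations stay zero
    flow = sums / np.maximum(counts, 1)[..., None]
    return flow, counts

obs = [(2.3, 4.1, 0.8, 0.1), (2.6, 4.4, 1.0, -0.1), (7.2, 1.5, 0.0, 1.2)]
flow, counts = build_flow_map(obs)
```

A costmap layer could then penalize cells whose average flow opposes the robot's intended direction of travel.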


Links: Short Paper

Multi-Perspective Interaction

Human robot interaction through camera video frames

As cameras become increasingly common in public and private spaces, such as airports, research centers and even personal residences, their coverage of existing space has grown substantially. However, the seamless integration of this multitude of cameras with mobile robots has not yet been fully addressed in the literature. Such integration can mitigate a common issue in robot control solutions: the operator's lack of situational awareness when relying solely on the robot's own sensors.

Instead of being limited to a map-based representation, multi-perspective interaction allows an operator to send the robot to a specific position in "Camera 2" by interacting directly with its video frame. The planned path (green line) is then overlaid on all affected perspectives.
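The core geometric step behind this kind of interaction can be sketched with a ground-plane homography: a click in one camera's image maps to a world goal, and world path points project back into any camera for overlay. The calibration matrix below is a toy value for illustration only, not from any real camera in the system.

```python
import numpy as np

# Illustrative sketch: a click in a camera image is mapped to a
# ground-plane goal via that camera's homography, and a planned path
# (world points) is re-projected into any camera to overlay it.
# The homography below is a made-up calibration, not a real one.

def pixel_to_world(H, u, v):
    """Map image pixel (u, v) to a ground-plane point (x, y)."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def world_to_pixel(H, x, y):
    """Project ground-plane point (x, y) into a camera image."""
    q = H @ np.array([x, y, 1.0])
    return q[:2] / q[2]

H_cam2 = np.array([[100.0,    0.0, 320.0],   # toy mapping: metres -> pixels
                   [  0.0, -100.0, 240.0],
                   [  0.0,    0.0,   1.0]])

goal = pixel_to_world(H_cam2, 420.0, 140.0)  # operator clicks in "Camera 2"
u, v = world_to_pixel(H_cam2, *goal)         # round-trip back to the pixel
```

Repeating `world_to_pixel` with each camera's own homography is what lets a single planned path appear consistently across every affected perspective.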

Links: Simulated Experiments, Real World Experiments


Contributions to autonomous inspection robot

Real-time thermal imaging & Remote Control

Implemented real-time conversion from raw energy matrices to thermal images, accounting for the detection of elevated body temperatures. The code achieves fast runtimes through many hot-path optimizations. Also enabled remote control of the robot through an efficient API.
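The general shape of such a conversion can be sketched as below. The display window and fever threshold are illustrative assumptions; the actual sensor format, calibration, and optimizations are not reproduced here.

```python
import numpy as np

# Hedged sketch: convert a per-pixel temperature matrix into an 8-bit
# grayscale image using a fixed display window, and flag pixels above an
# elevated-temperature threshold. All constants are illustrative values.

T_MIN, T_MAX = 20.0, 40.0   # display window in deg C (assumed)
FEVER_C = 37.5              # elevated-temperature threshold (assumed)

def to_thermal_image(temps):
    """temps: 2-D float array of temperatures in deg C."""
    norm = np.clip((temps - T_MIN) / (T_MAX - T_MIN), 0.0, 1.0)
    img = (norm * 255.0).astype(np.uint8)   # 8-bit intensity image
    fever_mask = temps >= FEVER_C           # pixels to highlight
    return img, fever_mask

temps = np.array([[22.0, 30.0],
                  [36.5, 38.2]])
img, mask = to_thermal_image(temps)
```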

Links: Robot Description (pt-br)

PhD Thesis

Fig. Evaluating how people distribute motion adaptations between the first and last crosser

Fig. Generating collision avoidance motions that replicate how people distribute motion adaptations

Part I - Human-like collision avoidance

Our focus is on replicating one characteristic of human-human interaction during collision avoidance: the mutual sharing of the adaptations performed to avoid a collision. Since collision avoidance between people is solved cooperatively, we model how this cooperation unfolds so that a robot can replicate the behavior. To that end, we analyzed hundreds of situations in which two people had crossing trajectories. From these trajectories, we determined how the total avoidance effort is shared between the agents as a function of several factors of the interaction, such as crossing angle, time to collision and speed, and modified an existing collision avoidance approach to replicate this behavior.
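Two of the interaction factors mentioned above, time to collision and the predicted passing distance, have a standard closed form for agents moving at constant velocity. The sketch below computes them for a made-up crossing situation; the effort-sharing model itself comes from the recorded human data in the thesis and is not reproduced here.

```python
import numpy as np

# Sketch: for two agents with (assumed) constant velocities, compute the
# time of closest approach and the minimal predicted distance between
# them. The example trajectories are made up for illustration.

def closest_approach(p1, v1, p2, v2):
    """Return (time of closest approach, distance at that time)."""
    p = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    vv = float(v @ v)
    t = 0.0 if vv == 0.0 else max(0.0, -float(p @ v) / vv)
    return t, float(np.linalg.norm(p + t * v))

# Two pedestrians on 90-degree crossing paths heading to the same point:
t_star, d_min = closest_approach([0, 0], [1, 0], [5, -5], [0, 1])
```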

Links: Thesis

Fig. Near-symmetry situation where reactive agents continuously misjudge the crossing order and fail to solve the collision

Fig. Ego-view perspective of the robot in a near-symmetry situation. The decision of which side to cross on is not clear

Part II - Mitigating impact of near-symmetry

Collaboration during collision avoidance is not without potential negative consequences. Effective collaboration requires predicting whether the person will attempt to avoid the collision by crossing first or last. In situations where this decision is not consistent across people, the robot must account for the possibility that both agents will attempt to pass each other on different sides, i.e., make a decision detrimental to collision avoidance. Thus, we also evaluate what determines the boundary that separates the decision to avoid a collision on one side or the other. By approximating the uncertainty around this boundary, we developed a collision avoidance strategy that addresses this problem.
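One simple way to picture uncertainty around a decision boundary is a logistic curve over a signed decision variable: far from the boundary the choice is near-certain, near symmetry both choices are almost equally likely. This is purely an illustration of the concept, not the model used in the thesis; the decision variable and scale parameter are assumptions of the example.

```python
import math

# Illustrative sketch (not the thesis model): treat a signed passing
# distance d as the decision variable and place a logistic around the
# boundary d = 0 to express how uncertain the "cross first vs. last"
# choice becomes in near-symmetric situations. SCALE is a made-up value.

SCALE = 0.3  # assumed uncertainty width around the boundary, in metres

def p_cross_first(signed_distance):
    """Probability the person chooses to cross first."""
    return 1.0 / (1.0 + math.exp(-signed_distance / SCALE))

p_far = p_cross_first(1.5)    # far from the boundary: near-certain
p_near = p_cross_first(0.05)  # near symmetry: close to a coin flip
```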

Links: Thesis

Master dissertation

Fig. An input image is filtered using MSR before object detection

Faster object detection using saliency

My method, named MSR, speeds up sliding window-based object detection through spectral residual analysis at multiple scales. MSR relies on a sliding window approach based on image saliency, with the goal of assigning a score to each window before the object detection stage. The approach also avoids assumptions about object shape when reducing the search space; as such, it does not attempt to segment objects based on salient locations. The results show runtimes three to five times faster than a regular detector, with negligible loss in accuracy.
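The building block MSR applies at multiple scales is spectral residual saliency (Hou and Zhang, 2007): subtract a smoothed log-amplitude spectrum from the original, keep the phase, and invert the transform. The single-scale sketch below uses only numpy; filter sizes are illustrative, and the multi-scale windowing and window scoring of the dissertation are not reproduced.

```python
import numpy as np

# Minimal single-scale sketch of spectral residual saliency.
# Filter sizes are illustrative choices.

def box_blur(a, k=3):
    """Simple k x k mean filter with edge padding."""
    r = k // 2
    p = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def spectral_residual(gray):
    """gray: 2-D float image. Returns a saliency map of the same shape."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - box_blur(log_amp, 3)  # the spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return box_blur(sal, 3)                    # smooth the saliency map

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0   # a single salient square
sal = spectral_residual(img)
```

A sliding-window scorer would then rank candidate windows by their summed saliency before passing only the best ones to the detector.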

Links: Youtube, Dissertation (us-en)

Undergraduate work

Fig. Second generation of the Eco-be! Robot

Fig. Mixed reality soccer server running with real robots

A server for mixed reality soccer

The work was centered on building a generic mixed reality soccer server for "Eco-be!" robots using design patterns and metaprogramming concepts. The server was tested and used in the RoboCup mixed reality sub-league for several years. It was also successfully used in several other competitions, such as IranOpen and the Latin American and Brazilian Robotics Competition (LARC). I also participated, albeit to a lesser extent, in the development of the mixed reality soccer team's intelligence.

Links: Youtube, Sourceforge, Final Report (pt-br)