I would like to add some information here and clarify a few things that I unfortunately did not explain well in the paper. Hopefully these comments help. I also include some more details and examples for illustration purposes.
First of all, in case it is not clear: the ideas proposed for ROS Hydro were not tested in the experiments reported in the Results section, which were conducted with Fuerte before the system was migrated to Hydro. These ideas were conceived after testing the possibilities and limitations of Fuerte, especially regarding the persistence of observations, with our sensor setup and the first prototype platform.
For Hydro and the new layer-based system, the adapted code can be found here. There are two implementation approaches:
The first approach is simpler and faster. It preserves the whole static map and the previous obstacles inside the blind area, performs no ray-tracing, and adds newly observed obstacles (a minimal sketch is given after the figure below).
First approach layers schema. Red represents previous obstacles inside the blind area, green represents new obstacles.
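For illustration, here is a minimal sketch of how the first approach could look as a Hydro costmap_2d plugin. The namespace and class names are made up; only the costmap_2d and pluginlib calls are the actual Hydro API. Since the standard ObstacleLayer already marks newly observed obstacles, disabling its ray-tracing is enough to keep all previously marked cells, including those that fall inside the blind area, while leaving the static layer below untouched.

```cpp
// marking_only_layer.cpp -- hypothetical plugin; names are illustrative,
// the costmap_2d and pluginlib calls are the real Hydro API.
#include <costmap_2d/obstacle_layer.h>
#include <pluginlib/class_list_macros.h>

namespace example_layers
{
// ObstacleLayer already marks newly observed obstacles into its costmap.
// Turning ray-tracing into a no-op means marked cells are never cleared, so
// previous obstacles persist and the static layer below is left untouched.
class MarkingOnlyLayer : public costmap_2d::ObstacleLayer
{
protected:
  virtual void raytraceFreespace(const costmap_2d::Observation& /*clearing_observation*/,
                                 double* /*min_x*/, double* /*min_y*/,
                                 double* /*max_x*/, double* /*max_y*/)
  {
    // Intentionally empty: no free space is ever cleared, the layer only
    // accumulates lethal cells coming from the marking observations.
  }
};
}  // namespace example_layers

PLUGINLIB_EXPORT_CLASS(example_layers::MarkingOnlyLayer, costmap_2d::Layer)
```

Skipping the clearing logic entirely is what makes this variant fast: only newly observed cells are written at each update cycle.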
The second approach requires more updating work. It preserves the static map and the previous obstacles inside the blind area, applies ray-tracing outside the blind area, and adds newly observed obstacles; this way, observed free space gets cleared. One drawback of this approach is that the whole map is re-inflated at each iteration (see the sketch after the figure below).
Second approach layers schema. Red represents previous obstacles inside the blind area, green represents new obstacles.
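Under the same assumptions (hypothetical names, real Hydro API), the second approach could be sketched by keeping the standard ray-tracing but snapshotting the lethal cells inside the blind zone before clearing and re-asserting them afterwards. The blind zone is modeled here as a simple rectangle in front of the robot, which is only an illustrative choice:

```cpp
// blind_aware_layer.cpp -- hypothetical plugin; same assumptions as above.
#include <cmath>
#include <vector>
#include <costmap_2d/obstacle_layer.h>
#include <costmap_2d/cost_values.h>
#include <pluginlib/class_list_macros.h>

namespace example_layers
{
class BlindAwareLayer : public costmap_2d::ObstacleLayer
{
public:
  BlindAwareLayer() : robot_x_(0.0), robot_y_(0.0), robot_yaw_(0.0) {}

  virtual void updateBounds(double robot_x, double robot_y, double robot_yaw,
                            double* min_x, double* min_y, double* max_x, double* max_y)
  {
    // Cache the robot pose so the blind rectangle can be evaluated below.
    robot_x_ = robot_x; robot_y_ = robot_y; robot_yaw_ = robot_yaw;
    ObstacleLayer::updateBounds(robot_x, robot_y, robot_yaw, min_x, min_y, max_x, max_y);
  }

protected:
  // True when (wx, wy) lies inside an assumed 1.0 m x 1.0 m rectangular blind
  // zone directly in front of the robot (illustrative values).
  bool inBlindArea(double wx, double wy) const
  {
    const double dx = wx - robot_x_, dy = wy - robot_y_;
    const double fwd  =  std::cos(robot_yaw_) * dx + std::sin(robot_yaw_) * dy;
    const double side = -std::sin(robot_yaw_) * dx + std::cos(robot_yaw_) * dy;
    return fwd > 0.0 && fwd < 1.0 && std::fabs(side) < 0.5;
  }

  virtual void raytraceFreespace(const costmap_2d::Observation& clearing_observation,
                                 double* min_x, double* min_y, double* max_x, double* max_y)
  {
    // Snapshot the lethal cells currently inside the blind area (a real
    // implementation would restrict this scan to the updated bounds) ...
    std::vector<unsigned int> remembered;
    for (unsigned int j = 0; j < size_y_; ++j)
      for (unsigned int i = 0; i < size_x_; ++i)
      {
        double wx, wy;
        mapToWorld(i, j, wx, wy);
        if (costmap_[getIndex(i, j)] == costmap_2d::LETHAL_OBSTACLE && inBlindArea(wx, wy))
          remembered.push_back(getIndex(i, j));
      }

    // ... let the standard obstacle layer ray-trace and clear free space ...
    ObstacleLayer::raytraceFreespace(clearing_observation, min_x, min_y, max_x, max_y);

    // ... and re-assert the remembered obstacles so the blind area keeps them.
    for (size_t k = 0; k < remembered.size(); ++k)
      costmap_[remembered[k]] = costmap_2d::LETHAL_OBSTACLE;
  }

  double robot_x_, robot_y_, robot_yaw_;
};
}  // namespace example_layers

PLUGINLIB_EXPORT_CLASS(example_layers::BlindAwareLayer, costmap_2d::Layer)
```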
Always remembering the obstacles inside the blind area may prevent the robot from colliding after it reaches a target place close to an obstacle. If the obstacle is remembered only for a while, it can still be avoided while reaching the goal, but there is a risk if the robot resumes movement later, e.g. when called by the user or when it needs to go recharge: the obstacle would no longer be in the map and may already be too close to be observed. This is a problem with standard navigation methods that, in our opinion, has not received much attention. We think that higher-level reasoning about the dynamic nature of the obstacles would be a good direction to follow.
One limitation of the proposed approach, as pointed out in the paper, is that there can be problems if the blind area is not well delimited and the user comes too close. How important this problem is depends largely on the application and on the dynamic characteristics of the environment. Another possible problem could be caused by spurious measurements inside the blind zone, which should be properly filtered or cleaned up.
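One simple filtering idea, not taken from the paper but sketched here for illustration, is to require that a cell be measured as an obstacle over several consecutive updates before it is allowed into the persistent blind-area memory, so that one-off spurious readings decay away instead of being remembered forever:

```cpp
// Hypothetical confirmation filter: a cell only counts as a remembered
// obstacle once it has been seen lethal in at least min_hits recent updates.
#include <vector>

class ConfirmationGrid
{
public:
  ConfirmationGrid(unsigned int size_x, unsigned int size_y, unsigned char min_hits)
    : size_x_(size_x), hits_(size_x * size_y, 0), min_hits_(min_hits) {}

  // Call once per sensor update for every cell currently measured as lethal.
  void markHit(unsigned int i, unsigned int j)
  {
    unsigned char& h = hits_[j * size_x_ + i];
    if (h < 255) ++h;
  }

  // Call for cells measured free, so sporadic noise decays away.
  void markMiss(unsigned int i, unsigned int j)
  {
    unsigned char& h = hits_[j * size_x_ + i];
    if (h > 0) --h;
  }

  // Only confirmed cells should be copied into the persistent blind-area memory.
  bool confirmed(unsigned int i, unsigned int j) const
  {
    return hits_[j * size_x_ + i] >= min_hits_;
  }

private:
  unsigned int size_x_;
  std::vector<unsigned char> hits_;
  unsigned char min_hits_;
};
```

A threshold of, say, three hits trades a small marking delay for robustness against isolated spurious measurements.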
After deciding to use MIRA navigation for the project, we found its nogo areas utility very useful. It can provide enhanced safety against static obstacles that are not observable by the sensors (such as protruding shelves at a similar height to the robot head), it can help the robot navigate through safer areas (especially considering the limited field of view), and it can mitigate situations where the static map is not properly cleared (due to problems related to the blind area). Something similar could be implemented with the ROS layers approach, as sketched below.
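A minimal sketch of what such a nogo layer could look like with the Hydro Layer API follows; the rectangles are hard-coded in the map frame for brevity, whereas a real plugin would read them from the parameter server:

```cpp
// nogo_layer.cpp -- hypothetical plugin; names are illustrative, the
// costmap_2d Layer API calls are the real Hydro ones.
#include <algorithm>
#include <vector>
#include <costmap_2d/layer.h>
#include <costmap_2d/cost_values.h>
#include <pluginlib/class_list_macros.h>

namespace example_layers
{
// Axis-aligned rectangle in map coordinates (metres).
struct Rect { double min_x, min_y, max_x, max_y; };

class NogoLayer : public costmap_2d::Layer
{
public:
  virtual void onInitialize()
  {
    current_ = true;
    // Hard-coded example area; a real plugin would load these from parameters.
    Rect shelf = { 2.0, 1.0, 3.0, 1.5 };
    areas_.push_back(shelf);
  }

  virtual void updateBounds(double /*robot_x*/, double /*robot_y*/, double /*robot_yaw*/,
                            double* min_x, double* min_y, double* max_x, double* max_y)
  {
    // Make sure the update window always covers the nogo areas.
    for (size_t k = 0; k < areas_.size(); ++k)
    {
      *min_x = std::min(*min_x, areas_[k].min_x);
      *min_y = std::min(*min_y, areas_[k].min_y);
      *max_x = std::max(*max_x, areas_[k].max_x);
      *max_y = std::max(*max_y, areas_[k].max_y);
    }
  }

  virtual void updateCosts(costmap_2d::Costmap2D& master_grid,
                           int min_i, int min_j, int max_i, int max_j)
  {
    // Stamp every cell inside a nogo rectangle as lethal, so the planner
    // treats these areas like walls (and the inflation layer inflates them).
    for (int j = min_j; j < max_j; ++j)
      for (int i = min_i; i < max_i; ++i)
      {
        double wx, wy;
        master_grid.mapToWorld(i, j, wx, wy);
        for (size_t k = 0; k < areas_.size(); ++k)
          if (wx >= areas_[k].min_x && wx <= areas_[k].max_x &&
              wy >= areas_[k].min_y && wy <= areas_[k].max_y)
            master_grid.setCost(i, j, costmap_2d::LETHAL_OBSTACLE);
      }
  }

private:
  std::vector<Rect> areas_;
};
}  // namespace example_layers

PLUGINLIB_EXPORT_CLASS(example_layers::NogoLayer, costmap_2d::Layer)
```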
Navigation through narrow spaces has improved with the new robot and MIRA, but it depends greatly on the environment and on the localization accuracy. When entering a room, it is important that the robot is correctly localized in the direction transversal to the doorway and that the doorway is approached from the front. Regarding the first point, doors located on one side of a corridor may cause problems, while doors located at the beginning or end of a corridor work better. To approach doors from the front, the first thing to do is to define the places in a suitable manner, i.e. in front of the doorway itself. A more general strategy is to add nogo areas at the sides and corners of the doorway entrance. Please note that adding nogo areas helps obtain safer paths, but it does not solve the above-mentioned localization problem.
Another observation is that the blind zone in front of the robot is significantly reduced for high obstacles such as tables and chairs, and not so much for lower obstacles. We are now considering merging both sources of data, from the bottom and top sensors. During the interactive session at the conference, several people asked about interference problems, but we have not experienced many issues in this regard. This is probably due to the top camera being quite separated from the obstacles in the interference area [1].
You can find some example videos related to this work attached below.
Note: the map used for the tests shown in the first two videos was built with another robot equipped with a laser, and these tests were conducted with the new prototype robot. The experiments shown in the paper were all conducted with the first prototype robot, with the proposed RGB-D sensor setup, running ROS Fuerte.
1. A detailed study of this kind of problem has been conducted by Martín-Martín et al. Reference: Roberto Martín-Martín, Malte Lorbach and Oliver Brock, "Deterioration of Depth Measurements Due to Interference of Multiple RGB-D Sensors", Proc. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, Illinois, USA, September 2014.