Milestone one was roughly a week into the project. We were hoping that by this point we'd have determined whether the project was feasible (particularly whether we could connect to and talk to multiple Neatos to perform a potentially complex task like searching a space). Our specific goal, verbatim, was:
By the first milestone, we want to have multiple robots able to move simultaneously at the very least, and some skeleton of ROS architecture established for sending multiple avenues of commands and making decisions.
And we did achieve multi-robot control. We're still working on packaging it into a clean launch file, but the proof of concept has demonstrated that our project will work.
We've also come together as a team to decide on the pipeline of communication, where code is going to live, what differentiates our MVP from our reach goal, and some (very) preliminary task management regarding who will be doing what. To be more specific:
We've defined that map creation/definition, the search algorithm, and Neato movement are all going to live in different places.
We've defined that our MVP is going to hard-code the division of the space into square sections, then lawnmower each section to fully search the area (a sketch of this follows below).
In particular, our hope is that by doing this we can define the workflow + state machine that differentiates searching for the target from finding it.
Our reach goal is going to use graphs, so we're hoping to spend as little time as possible making the hard-coded search system pretty + organized.
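To make the MVP concrete, here's a minimal sketch of the kind of space division + lawnmower path generation we have in mind. The function names, the equal-strip split, and all of the numbers are illustrative placeholders, not our final code.

```python
from typing import List, Tuple

def lawnmower_waypoints(x_min: float, y_min: float, x_max: float, y_max: float,
                        spacing: float) -> List[Tuple[float, float]]:
    """Generate a lawnmower (boustrophedon) path over one rectangular section.

    The robot sweeps back and forth along x, stepping by `spacing` in y,
    so the whole rectangle gets covered.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

def split_into_sections(x_min, y_min, x_max, y_max, n_robots):
    """Hard-coded MVP split: divide the map into equal vertical strips."""
    width = (x_max - x_min) / n_robots
    return [(x_min + i * width, y_min, x_min + (i + 1) * width, y_max)
            for i in range(n_robots)]

# One lawnmower path per Neato over its strip of a hypothetical 4m x 4m map.
for section in split_into_sections(0.0, 0.0, 4.0, 4.0, n_robots=2):
    print(lawnmower_waypoints(*section, spacing=0.5)[:4], "...")
```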
Milestone two was roughly 3 weeks into the project. At this point, our goal was to have our MVP completed, or at least in the debugging stage.
To be more specific, our MVP was a system that automatically divided up a known map and sent commands to each individual Neato to lawnmower its own space.
Multi-robot control via a launch file. We no longer have to open 4 terminals to talk to 4 Neatos -- we can launch them all from one place.
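As a rough illustration, a launch file along these lines is what makes that possible, assuming a ROS 2 setup where each robot's driver runs under its own namespace. The package, executable, and parameter names below are placeholders for whatever driver actually gets launched per Neato.

```python
# launch/multi_neato.launch.py -- sketch, not our actual launch file
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    robots = ['neato1', 'neato2', 'neato3', 'neato4']
    return LaunchDescription([
        Node(
            package='neato_driver_pkg',   # placeholder package name
            executable='neato_node',      # placeholder executable name
            namespace=name,               # keeps each robot's topics separate
            parameters=[{'host': f'{name}.local'}],  # placeholder parameter
        )
        for name in robots
    ])
```

The key idea is the per-robot namespace: each Neato then publishes and subscribes on topics like /neato1/cmd_vel instead of colliding on a shared /cmd_vel.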
Map processing + occupancy field generation complete. We now have a representation of the map in code that our Neatos can interact with and reference.
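For illustration, here is a minimal sketch of what such a map representation might look like, assuming the usual ROS occupancy-grid conventions (a 2D grid plus a resolution and origin). The class and method names here are ours for the example, not the actual implementation.

```python
import numpy as np

class OccupancyField:
    """Minimal map representation: a 2D grid where True means 'occupied'.

    `resolution` is meters per cell; `origin` is the world position of
    cell (0, 0), following the usual ROS occupancy-grid conventions.
    """
    def __init__(self, grid: np.ndarray, resolution: float, origin=(0.0, 0.0)):
        self.grid = grid
        self.resolution = resolution
        self.origin = origin

    def world_to_cell(self, x: float, y: float):
        col = int((x - self.origin[0]) / self.resolution)
        row = int((y - self.origin[1]) / self.resolution)
        return row, col

    def is_occupied(self, x: float, y: float) -> bool:
        row, col = self.world_to_cell(x, y)
        if not (0 <= row < self.grid.shape[0] and 0 <= col < self.grid.shape[1]):
            return True  # treat out-of-bounds as an obstacle
        return bool(self.grid[row, col])

# Hypothetical 4m x 4m map at 0.05 m/cell with one obstacle block.
grid = np.zeros((80, 80), dtype=bool)
grid[30:40, 30:40] = True
field = OccupancyField(grid, resolution=0.05)
print(field.is_occupied(1.75, 1.75))  # True: inside the obstacle
print(field.is_occupied(0.5, 0.5))    # False: free space
```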
Main processing (mostly) complete. The Brain node is capable of splitting up the space, and we're really close to being able to pass odometry between it and the Agent node, which handles sending the commands that make the Neatos move.
Movement proof-of-concept complete. We have an example interface class demonstrating that two robots can move in different directions at the same time.
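As a hedged sketch of the idea (not our actual interface class), here is how driving two robots in different directions at once can look with rclpy, assuming each Neato's cmd_vel topic lives under its own namespace as in the launch file above:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class DualDrive(Node):
    """Drive two namespaced Neatos in different directions at once."""

    def __init__(self):
        super().__init__('dual_drive')
        # Topic names assume each robot runs under its own namespace.
        self.pub1 = self.create_publisher(Twist, '/neato1/cmd_vel', 10)
        self.pub2 = self.create_publisher(Twist, '/neato2/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.send_commands)

    def send_commands(self):
        forward, backward = Twist(), Twist()
        forward.linear.x = 0.1    # neato1 drives forward...
        backward.linear.x = -0.1  # ...while neato2 drives backward
        self.pub1.publish(forward)
        self.pub2.publish(backward)

def main():
    rclpy.init()
    rclpy.spin(DualDrive())

if __name__ == '__main__':
    main()
```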
Although we haven't quite reached our MVP yet, the finish line is in sight, and it doesn't seem like we'll face too many roadblocks in the remaining implementation.