Multi-agent robotics has the potential to address large-scale problems more efficiently and effectively than human teams can. Our particular project -- a search-and-rescue system -- tackles a task that is highly dangerous for human responders, especially in disaster areas. Human eyes can easily miss signs of life under rubble and debris, and human bodies can be seriously and sometimes irreparably injured in unstable environments. Swarms of robots, by contrast, can be mass-produced and sent into spaces too dangerous for a person.
We acknowledge that “search and rescue” could easily be repurposed as “search and attack” or something more malicious. We will not implement any behavior that executes once the target has been found, but we recognize that our code base could still be put to malicious use. Like many projects with potentially monumental benefits for society, ours is a double-edged sword. Although we have not implemented code-based obstacles to misuse, here are our intended Terms of Use:
Our system will not be used to physically harm the target of its search.
Our system will not be used to search for somebody who explicitly does not wish to be found.
Should our system be used to search for somebody who makes it known that they do not wish to be found, it should be turned off immediately.
The robots in our multi-agent system will never be equipped with tools that are explicitly designed for harm.
We will not save any data from a search session except with your explicit permission or at your explicit request.
By accessing and using the codebase, you agree to abide by these terms.