We present a set of repositories containing the essential components of a general semantic active perception system. The tools cover: neural networks and deep learning, sensor bridges, semantic mapping, informative path planning, the AirSim simulator, and deployment on a TurtleBot robot.
This repository contains all the supplementary material associated with the paper titled "Physics-Informed Multi-Agent Reinforcement Learning for Distributed Multi-Robot Problems". Please check out our project website for more details.
We present a novel neural network that combines self-attention with a convex optimization algorithm to learn to identify the graph structure of a multi-robot or multi-agent task. Check the repository if you want to develop your own neural network or replicate our experiments.
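The core idea of inferring a graph structure with self-attention can be sketched in a few lines: attention scores between agent states act as a soft adjacency matrix. The snippet below is a minimal, self-contained illustration with random (untrained) projections, not the architecture from the paper or the repository.

```python
import numpy as np

def soft_adjacency(states, d_k=None, seed=0):
    """Estimate a soft adjacency matrix among agents via scaled
    dot-product self-attention. Illustrative only: the query/key
    projections are random stand-ins for learned weights."""
    n, d = states.shape
    d_k = d_k or d
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K = states @ Wq, states @ Wk
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each row is a distribution over potential neighbors.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

states = np.random.default_rng(1).standard_normal((4, 3))  # 4 agents, 3-D states
A = soft_adjacency(states)
print(A.shape)  # (4, 4) pairwise attention weights
```

In practice the projections would be trained end-to-end, and a convex optimization step (as in the paper) would refine the soft weights into a task-consistent graph.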
We present LEMURS (not the animal): a general framework for learning distributed multi-robot interactions. Check the repository if you want to train your own multi-robot policies from demonstrations or replicate the experiments from our papers.
CineMPC is ready to run on a drone platform using ROS. The infrastructure to test it in the photorealistic AirSim simulator is provided as an example of use.
https://github.com/ppueyor/CineMPC_ros
CinemAirsim is a plugin for the AirSim simulator that adds cinematographic cameras onboard the robots, allowing real-time control of their intrinsic parameters.
If you want to try to herd your own group of evaders, check our repository. It contains the code associated with different multi-robot policies based on Implicit Control, as well as the simulations and experiments from our multi-robot herding papers.
https://github.com/EduardoSebastianRodriguez/Multi-Robot-Implicit-Control-Herd