SLAM (Simultaneous Localization and Mapping) is a fundamental problem in robotics: a robot moving through an unknown environment must build a map of its surroundings while simultaneously estimating its own pose within that map. The two tasks are coupled, since accurate mapping requires knowing the robot's pose and accurate localization requires a map. SLAM is crucial for autonomous systems, enabling them to understand their environment and navigate effectively.
FastSLAM - A particle filter-based approach that represents the belief with a set of particles, each maintaining its own hypothesis of the robot's trajectory together with its own copy of the map (in the original formulation, every landmark is tracked by a small, independent EKF inside each particle).
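To make the idea concrete, here is a minimal, illustrative sketch of the particle-filter core, assuming a planar robot with a unicycle motion model and point landmarks observed directly in world coordinates; the names (Particle, motion_update, measurement_update, resample) and the noise values are hypothetical, and a full FastSLAM implementation would additionally track each landmark with its own EKF inside every particle.

```python
import numpy as np

rng = np.random.default_rng(0)

class Particle:
    """One hypothesis: a robot pose plus this particle's own landmark map."""
    def __init__(self, pose):
        self.pose = np.array(pose, dtype=float)   # [x, y, heading]
        self.landmarks = {}                       # landmark id -> estimated 2-D position
        self.weight = 1.0

def motion_update(particles, v, w, dt, noise=(0.05, 0.02)):
    """Propagate every particle through a noisy unicycle motion model."""
    for p in particles:
        nv, nw = v + rng.normal(0, noise[0]), w + rng.normal(0, noise[1])
        x, y, th = p.pose
        p.pose = np.array([x + nv * dt * np.cos(th),
                           y + nv * dt * np.sin(th),
                           th + nw * dt])

def measurement_update(particles, landmark_id, observed_xy, sigma=0.1):
    """Weight each particle by how well the observation agrees with its own map."""
    for p in particles:
        if landmark_id not in p.landmarks:
            p.landmarks[landmark_id] = np.array(observed_xy, dtype=float)  # first sighting
            continue
        err = np.linalg.norm(p.landmarks[landmark_id] - observed_xy)
        p.weight *= np.exp(-0.5 * (err / sigma) ** 2)

def resample(particles):
    """Draw a fresh particle set with probability proportional to the weights."""
    w = np.array([p.weight for p in particles])
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    new = []
    for i in idx:
        q = Particle(particles[i].pose.copy())
        q.landmarks = {k: v.copy() for k, v in particles[i].landmarks.items()}
        new.append(q)
    return new

particles = [Particle([0.0, 0.0, 0.0]) for _ in range(100)]
motion_update(particles, v=1.0, w=0.1, dt=0.1)
measurement_update(particles, landmark_id=7, observed_xy=[2.0, 1.0])
particles = resample(particles)
```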
Graph-Based SLAM (Pose Graph SLAM) - Formulates the SLAM problem as a graph whose nodes represent robot poses and map features and whose edges encode spatial constraints between them; the trajectory and map are then recovered by optimizing the graph, typically with nonlinear least squares.
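As a toy illustration of the graph formulation, the sketch below optimizes four 1-D poses from three odometry edges and one loop-closure edge via linear least squares; real pose-graph back ends (e.g., g2o, GTSAM, Ceres) solve the nonlinear 2-D/3-D version iteratively, and the specific numbers here are invented.

```python
import numpy as np

# Nodes: four 1-D robot poses; edges: (i, j, measured displacement from pose i to j).
# The last edge is a loop closure that slightly contradicts the accumulated odometry.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 2.8)]
n_poses = 4

A = np.zeros((len(edges) + 1, n_poses))
b = np.zeros(len(edges) + 1)
for row, (i, j, meas) in enumerate(edges):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, meas   # constraint: x_j - x_i = meas
A[-1, 0], b[-1] = 1.0, 0.0                           # anchor pose 0 at the origin

poses, *_ = np.linalg.lstsq(A, b, rcond=None)
print("optimised poses:", poses)   # the loop closure pulls the odometry-only estimate back
```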
Visual SLAM - Relies on cameras (monocular, stereo, or multi-camera) to estimate the robot's pose and the map, typically by tracking image features or pixel intensities across frames.
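The sketch below illustrates the two-view geometry at the heart of many visual SLAM front ends, using OpenCV's essential-matrix estimation on synthetic, already-matched image points; the intrinsics and camera poses are invented for the example, and a real system would first detect and match features and handle scale separately.

```python
import numpy as np
import cv2

rng = np.random.default_rng(1)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed pinhole intrinsics

# Synthetic scene: 3-D points viewed from two camera poses (ground truth below).
pts3d = rng.uniform([-2, -2, 4], [2, 2, 8], size=(100, 3))
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))   # small yaw between frames
t_true = np.array([[0.5], [0.0], [0.0]])

def project(points, R, t):
    """Project world points into pixel coordinates for a camera at (R, t)."""
    cam = (R @ points.T + t).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))
pts2 = project(pts3d, R_true, t_true)

# The visual-SLAM front end in miniature: essential matrix, then relative pose.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)
print("recovered rotation:\n", R_est)
print("recovered translation direction:", t_est.ravel())   # scale is unobservable
```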
Lidar SLAM - Uses lidar range measurements to build a 2D or 3D map of the environment and estimate the robot's pose, commonly via scan matching; because lidar is an active sensor, it remains reliable in poor or changing lighting.
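A common building block of lidar SLAM is scan matching; the following is a minimal point-to-point ICP sketch in 2D on synthetic scans, with brute-force nearest-neighbour association and a closed-form (SVD) alignment step. It is a toy version: production systems use k-d trees, outlier rejection, and point-to-plane or NDT variants.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate nearest neighbours, then
    compute the closed-form rigid transform (SVD / Kabsch) aligning the pairs."""
    dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[dists.argmin(axis=1)]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(2)
reference = rng.uniform(-5, 5, size=(200, 2))                 # previous lidar scan
theta = 0.1                                                   # true motion between scans
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
current = reference @ R_true.T + np.array([0.3, -0.2])        # scan after the robot moved

aligned = current.copy()
for _ in range(20):                                           # iterate until converged
    R, t = icp_step(aligned, reference)
    aligned = aligned @ R.T + t
print("mean residual after ICP:", np.linalg.norm(aligned - reference, axis=1).mean())
```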
RGB-D SLAM - Combines color and depth information from sensors like Microsoft Kinect to enhance the accuracy of mapping and localization.
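As an illustration of how depth simplifies the geometry, the sketch below back-projects an aligned depth + colour frame into a coloured point cloud using pinhole intrinsics; the Kinect-style focal lengths and the synthetic frame are assumptions made for the example.

```python
import numpy as np

fx = fy = 525.0                      # assumed Kinect-style focal lengths (pixels)
cx, cy = 319.5, 239.5                # assumed principal point

def backproject(depth_m, rgb):
    """Turn an aligned depth (metres) + colour frame into a coloured point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                                    # drop pixels with missing depth
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colours = rgb[valid]
    return points, colours

# Synthetic frame standing in for real sensor data: a flat wall 2 m away.
depth = np.full((480, 640), 2.0)
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
points, colours = backproject(depth, rgb)
print(points.shape, colours.shape)                   # (307200, 3) (307200, 3)
```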
Direct SLAM - Directly minimizes the photometric error between consecutive images, avoiding explicit feature extraction and matching, and can produce dense or semi-dense maps.
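To show what "minimizing photometric error" means, here is a toy direct-alignment sketch that searches over integer pixel shifts for the warp that best explains the new image; actual direct methods (e.g., LSD-SLAM, DSO) optimize a full 6-DoF camera pose with gradient-based methods rather than brute-force search, so treat this only as an illustration.

```python
import numpy as np

def photometric_error(img_ref, img_cur, shift):
    """Mean squared intensity difference after warping img_cur by an integer
    pixel shift (a stand-in for the 6-DoF warp used by real direct methods)."""
    dx, dy = shift
    warped = np.roll(np.roll(img_cur, dy, axis=0), dx, axis=1)
    diff = img_ref.astype(float) - warped.astype(float)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))       # "camera motion": 2 rows down, 3 columns left

# Direct alignment in miniature: pick the warp that minimises the photometric error.
best_err, best_shift = min((photometric_error(ref, cur, (dx, dy)), (dx, dy))
                           for dx in range(-5, 6) for dy in range(-5, 6))
print("estimated shift:", best_shift, "error:", best_err)     # expect (3, -2), error 0.0
```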
ORB-SLAM - An open-source SLAM system built around ORB (Oriented FAST and Rotated BRIEF) features, an efficient binary keypoint detector and descriptor, which it uses for tracking, mapping, relocalization, and loop closing.
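The snippet below shows only the ORB feature extraction and matching step using OpenCV's standard API; the file names are placeholders for two consecutive frames, and ORB-SLAM itself layers much more on top (keyframe selection, a covisibility graph, bundle adjustment, and DBoW2-based loop closing).

```python
import cv2

# Placeholder file names: substitute two consecutive grayscale frames from a camera.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)             # FAST keypoints + rotated BRIEF descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# ORB descriptors are binary, so they are compared with Hamming distance;
# cross-checking keeps only mutually-best matches as a simple filter.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best Hamming distance: {matches[0].distance if matches else 'n/a'}")
```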
Loop Closure Detection - Recognizes previously visited locations and turns each recognition into an additional constraint, allowing drift that has accumulated in the robot's pose estimate and map to be corrected.
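A minimal sketch of the detection step, assuming each keyframe is summarized by a global descriptor and a revisit is declared when the current descriptor is very similar to an old one; the intensity-histogram descriptor, the threshold, and the toy frames are illustrative stand-ins for the bag-of-visual-words or learned descriptors used in practice.

```python
import numpy as np

def global_descriptor(image):
    """Crude whole-image place signature: a normalised intensity histogram.
    Real systems use bag-of-visual-words (e.g. DBoW2) or learned descriptors."""
    hist, _ = np.histogram(image, bins=32, range=(0, 256))
    hist = hist.astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def detect_loop(current_image, keyframe_descriptors, threshold=0.97, min_gap=10):
    """Return the index of an earlier keyframe that looks like the current place,
    ignoring the most recent keyframes (which are trivially similar)."""
    d = global_descriptor(current_image)
    candidates = keyframe_descriptors[:-min_gap] if len(keyframe_descriptors) > min_gap else []
    for idx, kf in enumerate(candidates):
        if float(d @ kf) > threshold:            # cosine similarity of unit vectors
            return idx
    return None

# Toy "frames": distinct constant images, with the robot revisiting place 3 at the end.
frames = [np.full((120, 160), 8 * i, dtype=np.uint8) for i in range(30)]
revisit = frames[3].copy()
keyframes = [global_descriptor(f) for f in frames]
print("loop closure with keyframe:", detect_loop(revisit, keyframes))   # prints 3
```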
Semantic SLAM - Incorporates semantic information about the environment, recognizing and mapping different types of objects or structures.
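One simple way to picture the idea is a map whose landmarks carry object labels in addition to geometry; the small data-structure sketch below is purely illustrative (the class names and fields are invented), and real semantic SLAM systems obtain the labels from an object detector or segmentation network and also exploit them during data association.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SemanticLandmark:
    """A mapped object carrying geometry plus a class label and detector confidence."""
    position: np.ndarray          # 3-D position in the map frame
    label: str                    # e.g. "door", "chair" from an object detector
    confidence: float

@dataclass
class SemanticMap:
    landmarks: list = field(default_factory=list)

    def query(self, label):
        """Retrieve every mapped instance of one object class."""
        return [lm for lm in self.landmarks if lm.label == label]

world = SemanticMap()
world.landmarks.append(SemanticLandmark(np.array([1.0, 0.5, 0.0]), "door", 0.9))
world.landmarks.append(SemanticLandmark(np.array([3.2, -1.0, 0.0]), "chair", 0.8))
print([lm.position for lm in world.query("door")])
```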
Visual-Inertial SLAM (VI-SLAM) - Fuses camera images with IMU measurements (gyroscope and accelerometer) to make pose estimation more robust, particularly during fast motion or brief loss of visual tracking, and to recover metric scale with a monocular camera.
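The sketch below shows the inertial half of the idea: dead-reckoning the pose from IMU samples between two camera frames, which a VI-SLAM back end would then fuse with the visual estimate. It is a simplified planar model with invented rates and noise-free measurements; real systems use 3-D preintegration with bias estimation.

```python
import numpy as np

def integrate_imu(pos, vel, heading, gyro_z, accel_body, dt):
    """Propagate a planar (x, y, heading) state with one IMU sample.
    In VI-SLAM, short chains of such integrations bridge the gaps between
    camera frames and are fused with the visual pose estimate."""
    heading = heading + gyro_z * dt
    c, s = np.cos(heading), np.sin(heading)
    accel_world = np.array([[c, -s], [s, c]]) @ accel_body   # rotate into the world frame
    vel = vel + accel_world * dt
    pos = pos + vel * dt
    return pos, vel, heading

# 200 IMU samples at 200 Hz between two camera frames: gentle turn, constant forward accel.
pos, vel, heading = np.zeros(2), np.zeros(2), 0.0
for _ in range(200):
    pos, vel, heading = integrate_imu(pos, vel, heading, gyro_z=0.1,
                                      accel_body=np.array([0.5, 0.0]), dt=0.005)
print("IMU-predicted pose between frames:", pos, heading)
```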
Multi-Robot SLAM - Extends SLAM to teams of robots that explore an environment together and merge their individual observations into a single consistent map.
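A core operation in multi-robot SLAM is map merging once the relative transform between two robots' map frames is known (for example, from a mutual observation or a shared place recognition); the sketch below applies such a transform to fuse two toy 2-D landmark maps, with all names and numbers invented for illustration.

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 2-D rigid transform (rotation by theta, then translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def merge_maps(map_a, map_b, T_a_b):
    """Express robot B's landmarks in robot A's map frame and concatenate the maps.
    T_a_b is the relative transform between the two map origins, e.g. obtained when
    the robots observe each other or recognise the same place."""
    homogeneous_b = np.hstack([map_b, np.ones((len(map_b), 1))])
    b_in_a = (T_a_b @ homogeneous_b.T).T[:, :2]
    return np.vstack([map_a, b_in_a])

map_a = np.array([[0.0, 0.0], [1.0, 0.0]])     # robot A's landmarks (A's map frame)
map_b = np.array([[0.0, 1.0], [2.0, 1.0]])     # robot B's landmarks (B's own frame)
merged = merge_maps(map_a, map_b, se2(np.pi / 2, 5.0, 0.0))
print(merged)
```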