Reinforcement Learning Assisted Robotic Construction
In the rapidly advancing world of construction technology, I am particularly drawn to the innovative possibilities that reinforcement learning offers in automating and improving building processes. The intersection of robotics and machine learning opens up new avenues for efficiency and precision in construction. I focus on developing intelligent systems capable of learning and adapting to complex construction tasks, from material handling and placement to assembling intricate structures. By leveraging reinforcement learning, I aim to push the boundaries of what robotic construction can achieve, making it more adaptive, efficient, and responsive to real-world challenges.
This research explores the effectiveness of deep reinforcement learning (DRL) algorithms in controlling a robot arm. The most common manipulation tasks are tested, including picking, placing, and fetching blocks. The algorithms evaluated include proximal policy optimization (PPO) and hindsight experience replay (HER). The results indicate that DRL is robust when applied to robot arm control. Additional tasks, such as block stacking and block moving, are under investigation.
Training curves: success rate vs. training steps; reward vs. training steps
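To illustrate the core idea behind HER mentioned above, the sketch below shows goal relabeling on a stored episode: failed transitions are reused by replacing the intended goal with a goal actually achieved later in the same episode (the "future" strategy), so sparse-reward tasks like block picking still yield useful learning signal. This is a minimal illustrative sketch, not the implementation used in this research; the `her_relabel` function, its transition dictionary layout, and `reward_fn` are assumptions made for demonstration.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with hindsight goals (HER "future" strategy).

    episode:   list of transitions, each a dict with keys
               'obs', 'action', 'achieved_goal', 'goal'.
    reward_fn: maps (achieved_goal, goal) -> reward under that goal.
    k:         number of future achieved goals sampled per transition.
    """
    relabeled = []
    for t, tr in enumerate(episode):
        # Keep the original transition, scored against the intended goal.
        relabeled.append({**tr, "reward": reward_fn(tr["achieved_goal"], tr["goal"])})
        # Relabel with goals achieved later in the same episode.
        future = episode[t:]
        for fut in random.sample(future, min(k, len(future))):
            g = fut["achieved_goal"]
            relabeled.append({**tr, "goal": g,
                              "reward": reward_fn(tr["achieved_goal"], g)})
    return relabeled
```

With a sparse reward (0 on success, -1 otherwise), a relabeled transition whose achieved goal matches the substituted goal receives reward 0, which is what lets the agent learn from episodes that never reached the original goal.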
This research aims to develop advanced algorithms, with a focus on imitation learning, to train robots for complex construction tasks. By enabling robots to observe and learn from expert actions, the objective is to have them accurately replicate these behaviors in real-world construction settings. The approach begins with simple reinforcement learning challenges, such as the cartpole example, and progresses to more complex tasks like masonry block stacking in actual construction environments. Below is an illustration of behavior cloning (a method of imitation learning) applied to the cartpole example (Left: with behavior cloning; Right: without behavior cloning).
Custom environment for masonry block stacking
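The behavior cloning step described above reduces to supervised learning on expert (state, action) pairs. The sketch below fits a simple logistic-regression policy to synthetic binary-action demonstrations of the kind a cartpole expert would produce; it is a minimal illustration under assumed names (`behavior_cloning`, the expert rule used to generate data), not the actual training code behind the figures.

```python
import numpy as np

def behavior_cloning(states, actions, lr=0.5, epochs=200):
    """Fit a logistic-regression policy to expert demonstrations.

    states:  (N, d) array of observations.
    actions: (N,) array of binary expert actions (e.g. 0 = push left, 1 = push right).
    Returns a policy: state -> action.
    """
    w = np.zeros(states.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(states @ w + b)))  # predicted P(action = 1)
        grad = p - actions                           # cross-entropy gradient
        w -= lr * states.T @ grad / len(actions)
        b -= lr * grad.mean()
    return lambda s: int((s @ w + b) > 0)
```

For a cartpole-like expert that pushes toward the side the pole leans, the cloned policy recovers the expert's decision rule from demonstrations alone, which is the behavior shown in the left panel of the illustration above.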