Functions:
- Bring up the D435i and T265
- Publish local odometry
- Teleoperation by joystick or keyboard
# Bringup D435i, T265, and local odom (add LiDAR, IMU, GPS in future)
roslaunch usv_docking bringup.launch
Teleoperation
roslaunch usv_docking teleop_usv_keyboard.launch
roslaunch usv_docking teleop_usv_joystick.launch
Visualization
roslaunch usv_docking rviz_usv.launch
roslaunch usv_description display.launch
Data Collection
rosrun usv_docking collect_ros_real.py
Train
rosrun usv_docking train.py
Rollout
rosrun usv_docking rollout_ros_real.py
Self-supervised data collection: collect TEB and image pairs.
Train a regression model.
Use the trained model to estimate the TEB from image feedback, then send it to the position controller.
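The pipeline above can be sketched as an image-to-TEB regressor. This is a minimal illustration, not the actual network in train.py; the architecture, the class name, and the 3-dimensional (x, y, yaw) TEB encoding are all assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the image -> TEB regression used at rollout time.
# The real architecture and TEB encoding live in train.py / rollout_ros_real.py.
class TEBRegressor(nn.Module):
    def __init__(self, teb_dim=3):  # assumed (x, y, yaw) target
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, teb_dim)

    def forward(self, img):  # img: (B, 3, H, W)
        return self.head(self.backbone(img))

model = TEBRegressor()
pred = model(torch.zeros(1, 3, 240, 320))  # dummy camera frame
print(pred.shape)  # torch.Size([1, 3])
```

At rollout, `pred` would be de-normalised and forwarded to the position controller as the docking target.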
Please see the ROS tutorials and the VRX Classic wiki.
Install python packages:
pip install torchvision # must
pip install torch # must
pip install pyquaternion
pip install pyyaml
pip install rospkg # must
pip install pexpect
pip install opencv-python # must
pip install matplotlib # must
pip install einops
pip install packaging
pip install h5py
pip install ipython
pip install spatialmath-python # must
python -m pip install numpy-quaternion # must; will be removed in future
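A quick way to verify the installs is to check that each package is importable. Note that several import names differ from the pip names (opencv-python → cv2, pyyaml → yaml, numpy-quaternion → quaternion, spatialmath-python → spatialmath):

```python
import importlib.util

# Sanity check after pip install: list any package that is not importable.
required = ["torch", "torchvision", "pyquaternion", "yaml", "rospkg", "pexpect",
            "cv2", "matplotlib", "einops", "packaging", "h5py", "IPython",
            "spatialmath", "quaternion"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("missing packages:", missing)
```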
For the real USV, install the RealSense SDK and the ROS driver: sudo apt-get install ros-$ROS_DISTRO-realsense2-camera
Test the RealSense driver: realsense-viewer (reference)
Test the T265: roslaunch realsense2_camera demo_t265.launch
Test the D435i: roslaunch realsense2_camera rs_camera.launch, then view with rqt_image_view
Install Livox SDK 2 (not SDK 1) and the ROS 1 driver (tutorial).
Test the Livox Mid-360: roslaunch livox_ros_driver2 rviz_MID360.launch
Host IP: 192.168.1.1 (netmask 255.255.255.0); LiDAR IP: 192.168.1.108; default point cloud frequency: 10 Hz.
Install fast_lio (tutorial).
roslaunch livox_ros_driver2 msg_MID360.launch, then roslaunch fast_lio mapping_mid360.launch
Install the RPLIDAR ROS driver (link).
Test: roslaunch rplidar_ros view_rplidar_a1.launch
Set the CPU governor to powersave with sudo cpufreq-set -g powersave; check it with cpufreq-info.
roslaunch vrx_gazebo vrx.launch # main simulation environment
roslaunch vrx_gazebo usv_keydrive.launch # enable `cmd_vel` control
rqt_image_view # workaround for a bug: keeps `rospy.wait_for_message` able to read images
python lvd_scripts/collect_data.py
~/Desktop/lvd_data # default dataset path
.
├── DockingSim
│ ├── camera # saved images
│ └── states.csv # saved TEB
└── model_best_lvd_sim.pth # Model weight
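The layout above can be read back into (image path, TEB) pairs. This is a hypothetical loader; the actual column names in states.csv are defined by collect_data.py and may differ:

```python
import csv
import os
import tempfile

def load_dataset(root):
    """Pair each saved camera frame with its TEB label from states.csv.

    Assumed columns: image, x, y, yaw (check collect_data.py for the real ones).
    """
    pairs = []
    with open(os.path.join(root, "states.csv")) as f:
        for row in csv.DictReader(f):
            img = os.path.join(root, "camera", row["image"])
            pairs.append((img, (float(row["x"]), float(row["y"]), float(row["yaw"]))))
    return pairs

# Demo with a throwaway dataset that mimics the tree above.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "camera"))
with open(os.path.join(root, "states.csv"), "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["image", "x", "y", "yaw"])
    w.writeheader()
    w.writerow({"image": "0000.png", "x": 1.0, "y": 2.0, "yaw": 0.5})

pairs = load_dataset(root)
print(pairs[0][1])  # (1.0, 2.0, 0.5)
```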
Change the simulation environments and repeat the steps above to collect diverse data.
TODO: Data Augmentation/Randomization
TODO: use HDF5 format.
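One way the HDF5 TODO could look: pack frames and TEB labels into a single file instead of loose images plus states.csv. The dataset keys and shapes here are assumptions, not the repo's format:

```python
import os
import tempfile

import h5py
import numpy as np

# Sketch of the HDF5 TODO above; "images" and "teb" are assumed key names.
def write_h5(path, images, tebs):
    with h5py.File(path, "w") as f:
        f.create_dataset("images", data=images, compression="gzip")
        f.create_dataset("teb", data=tebs)

images = np.zeros((4, 240, 320, 3), dtype=np.uint8)  # four dummy frames
tebs = np.zeros((4, 3), dtype=np.float32)            # (x, y, yaw) labels
path = os.path.join(tempfile.gettempdir(), "lvd_demo.h5")
write_h5(path, images, tebs)

with h5py.File(path, "r") as f:
    shapes = (f["images"].shape, f["teb"].shape)
print(shapes)  # ((4, 240, 320, 3), (4, 3))
```

Compressed chunked datasets also make it cheap to append frames during long collection runs.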
Check whether CUDA is available; otherwise fall back to the CPU.
Start training and watch the training and validation losses.
python lvd_scripts/train.py
TODO: the labels and images are not normalised.
TODO: use advanced learning model.
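One way to address the normalisation TODO above: scale images to [0, 1] and standardise the TEB labels with dataset statistics. All names here are assumptions, not code from train.py:

```python
import numpy as np

# Sketch: images to [0, 1], labels standardised per dimension.
def normalise(images, labels):
    images = images.astype(np.float32) / 255.0
    mean, std = labels.mean(axis=0), labels.std(axis=0) + 1e-8
    return images, (labels - mean) / std, (mean, std)

imgs = np.random.randint(0, 256, size=(8, 60, 80, 3), dtype=np.uint8)
labels = np.arange(24, dtype=np.float32).reshape(8, 3)  # dummy TEB labels
n_imgs, n_labels, stats = normalise(imgs, labels)
print(n_imgs.max() <= 1.0, abs(float(n_labels.mean())) < 1e-4)
```

The saved (mean, std) pair must be reused at rollout time to de-normalise the model's TEB predictions before sending them to the controller.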
(Optional) Restart VRX simulation (step 0).
(Optional) Start the data recorder.
Move the USV around the dock, then run rollout scripts:
python lvd_scripts/rollout_ros.py
Read data topics (default, simulation)
from sensor_msgs.msg import Image
from gazebo_msgs.msg import ModelStates
# USV ground truth states (x,y,yaw)
msg = rospy.wait_for_message('/gazebo/model_states', ModelStates)
states = msg.pose[17]  # index of the WAM-V in model_states; the order can vary between runs
# USV Camera
img = rospy.wait_for_message("/wamv/sensors/cameras/front_left_camera/image_raw", Image)
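The ground-truth pose from `/gazebo/model_states` carries a quaternion, while the states above are (x, y, yaw). A small helper for the conversion, written with plain math so it runs outside ROS too (with pyquaternion or numpy-quaternion the same conversion is a one-liner):

```python
import math

# Standard ZYX quaternion-to-yaw formula for a Pose-like orientation.
def pose_to_xyyaw(x, y, qx, qy, qz, qw):
    yaw = math.atan2(2.0 * (qw * qz + qx * qy),
                     1.0 - 2.0 * (qy * qy + qz * qz))
    return x, y, yaw

# A quaternion for a 90-degree yaw: (0, 0, sin(pi/4), cos(pi/4))
x, y, yaw = pose_to_xyyaw(1.0, 2.0, 0.0, 0.0,
                          math.sin(math.pi / 4), math.cos(math.pi / 4))
print(round(yaw, 4))  # 1.5708
```

With the snippet above it would be called as `pose_to_xyyaw(p.position.x, p.position.y, p.orientation.x, p.orientation.y, p.orientation.z, p.orientation.w)` where `p = msg.pose[17]`.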
Publish velocity control
from geometry_msgs.msg import Twist
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
vel_cmd = Twist()
vel_cmd.linear.x = 1.0
vel_cmd.linear.y = 0.0
vel_cmd.angular.z = 0.0
pub.publish(vel_cmd)
Fog [reference]
Add the following block to /catkin_ws/src/vrx/vrx_gazebo/worlds/sydneyregatta.xacro, then recompile the workspace with catkin_make.
<fog>
<type>linear</type>
<color>1 1 1 1</color>
<start>1</start>
<end>100</end>
<density>0.2</density>
</fog>
Wind
Wave
The code repo is coming soon.