Abstract:
Objectives Unmanned surface vehicles (USVs) have emerged as indispensable intelligent platforms in marine operations, widely used for tasks such as ocean monitoring, environmental surveys, and resource exploration. However, their operational capabilities are limited by size and payload, requiring deployment and recovery via motherships. Traditional manual remote-controlled recovery methods face critical limitations: they are ill-suited for high-speed and rough sea conditions, demand highly skilled operators, exhibit low efficiency, and pose significant safety risks. These shortcomings severely impair the operational efficiency of USVs in complex marine environments. Moreover, existing vision-based positioning technologies for autonomous recovery are heavily dependent on artificial markers and vulnerable to environmental disturbances such as strong light reflection, high winds, waves, and obstructions, leading to reduced robustness and potential detection failures. To address these challenges—specifically the real-time, accurate acquisition of relative pose (position and attitude) between the mothership and USV during recovery, and the over-reliance on artificial markers—this study proposes a LiDAR-based autonomous recovery guidance method for USVs. The primary goal is to enhance the environmental adaptability, positioning accuracy, and operational safety of USV recovery systems, providing a reliable technical solution for autonomous USV recovery.
Methods The proposed method adopts a three-stage technical framework to achieve precise and stable autonomous recovery. In the first stage (3D object detection), the PointPillars deep learning algorithm processes real-time point cloud data acquired by a LiDAR mounted on the mothership. PointPillars converts irregular point clouds into regular 2D feature maps using a feature encoding network: it discretizes the x-y plane into uniform grids (pillars), augments point features to a fixed dimension, and generates pseudo-images via max-pooling and spatial mapping. A backbone network then extracts multi-scale features through convolution and deconvolution operations, while an SSD (single shot multibox detector)-based detection head outputs the USV's 7-dimensional pose information: center coordinates (x, y, z), dimensions (width w, length l, height h), and heading angle (φ). In the second stage (target tracking), to address the inevitable false and missed detections in 3D object detection (which could cause incorrect guidance or target loss), a tracking framework integrating Kalman filtering and the Hungarian algorithm is developed. A constant-velocity motion model predicts the USV's state in the next frame via Kalman filtering. The Hungarian algorithm, combined with 3D-IoU (intersection over union), constructs a cost matrix to establish temporal associations between predicted states and detection results. Kalman filtering then updates the target state using matched detections to filter out noise, while lifecycle management is applied to unmatched detections and tracks to ensure continuous, stable tracking. In the third stage (motion control), the line-of-sight (LOS) guidance algorithm calculates the desired heading angle, which is fed into a PID (proportional-integral-derivative) controller. The controller converts the heading deviation into propeller control signals, adjusting the USV's course to approach and align with the stern slide.
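The pillarization step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the learned per-point feature augmentation is replaced by a simple max over point height per grid cell, and the grid parameters are arbitrary, but it shows how scattered 3D points become a dense 2D pseudo-image that a convolutional backbone can consume.

```python
from collections import defaultdict

def pillarize(points, x_min, y_min, pillar_size, nx, ny):
    """Toy stand-in for PointPillars' encoder: bin points into
    x-y grid cells (pillars), max-pool one feature per pillar
    (here: point height z), and scatter the sparse pillars into
    a dense ny-by-nx pseudo-image."""
    grid = defaultdict(lambda: float("-inf"))
    for x, y, z in points:
        ix = int((x - x_min) // pillar_size)
        iy = int((y - y_min) // pillar_size)
        if 0 <= ix < nx and 0 <= iy < ny:
            grid[(ix, iy)] = max(grid[(ix, iy)], z)  # max-pooling
    image = [[0.0] * nx for _ in range(ny)]  # empty pillars stay 0
    for (ix, iy), v in grid.items():
        image[iy][ix] = v
    return image
```

In the real network, each pillar holds a learned feature vector rather than a scalar, but the scatter-to-image structure is the same.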
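The detection-to-track association step can be sketched as follows. As a simplification, the rotated 3D-IoU is replaced by an axis-aligned bird's-eye-view IoU, and a brute-force search over permutations stands in for the Hungarian algorithm (it yields the same minimum-cost assignment for the small target counts of a recovery scenario); the Kalman predict/update steps and lifecycle management are omitted. Box format and threshold are assumptions for illustration.

```python
import itertools

def iou_bev_aligned(a, b):
    """Axis-aligned bird's-eye-view IoU of boxes (x, y, w, l);
    a simplification of the rotated 3D-IoU used in the paper."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2]*a[3] + b[2]*b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_threshold=0.1):
    """Minimum-cost matching between predicted track boxes and
    detections with cost = 1 - IoU; unmatched indices feed the
    lifecycle management (track birth/death) stage."""
    n, m = len(tracks), len(detections)
    if n == 0 or m == 0:
        return [], list(range(n)), list(range(m))
    if n <= m:  # enumerate all one-to-one assignments
        candidates = [list(zip(range(n), perm))
                      for perm in itertools.permutations(range(m), n)]
    else:
        candidates = [list(zip(perm, range(m)))
                      for perm in itertools.permutations(range(n), m)]
    best = min(candidates, key=lambda pairs: sum(
        1 - iou_bev_aligned(tracks[i], detections[j]) for i, j in pairs))
    # Reject low-overlap pairs so implausible matches stay unmatched.
    matches = [(i, j) for i, j in best
               if iou_bev_aligned(tracks[i], detections[j]) >= iou_threshold]
    mt, md = {i for i, _ in matches}, {j for _, j in matches}
    return (matches,
            [i for i in range(n) if i not in mt],
            [j for j in range(m) if j not in md])
```

A production tracker would use `scipy.optimize.linear_sum_assignment` for the matching and feed the matched detections into the Kalman update.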
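The control stage reduces to two small pieces that can be sketched directly: an LOS-style desired heading taken as the bearing from the USV to the recovery point, and a PID loop on the heading error. Gains, time step, and the exact LOS variant (e.g. lookahead distance) are unspecified in the abstract, so the values below are illustrative assumptions.

```python
import math

def los_desired_heading(usv_pos, target_pos):
    """Desired heading as the bearing of the line of sight from the
    USV to the recovery point (a simplified LOS law; real LOS
    guidance typically adds a lookahead point on the desired path)."""
    return math.atan2(target_pos[1] - usv_pos[1],
                      target_pos[0] - usv_pos[0])

class PIDHeadingController:
    """Converts heading deviation into a propeller/rudder command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, actual):
        # Wrap the heading error into [-pi, pi] so the controller
        # always commands the shorter turn.
        error = (desired - actual + math.pi) % (2 * math.pi) - math.pi
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The angle wrapping matters in practice: without it, a desired heading of 179° against an actual heading of -179° would command a near-full turn instead of a 2° correction.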
Results Two sets of experiments were conducted to validate the performance of the proposed method. In the adaptability evaluation of the 3D object detection algorithm, a dedicated USV point cloud dataset was constructed, comprising 3,291 frames collected from Yuji Lake in Wuhan. Evaluation metrics showed an average precision (AP) of 83.69%, a position detection error of only 0.0712 m, and a heading angle detection error of 1.518°. These results confirm the high accuracy and stability of PointPillars in USV pose estimation. In the USV autonomous recovery simulation experiment, real-time kinematic (RTK) technology was used to provide centimeter-level true position and heading data for comparison. Eight sets of experiments were performed, with the USV starting from different initial positions and headings. The recovery success rate reached 100%, with all centering errors less than 0.6 m (half the width of the USV). Additionally, the average processing time per frame of point cloud data was approximately 40 ms, meeting real-time operational requirements.
Conclusions The experiments comprehensively validate the feasibility and effectiveness of the LiDAR-only autonomous recovery guidance method for USVs. Compared with traditional vision-based methods, this approach eliminates reliance on artificial markers and exhibits superior environmental adaptability, overcoming challenges such as light interference and obstructions. The integration of PointPillars, Kalman filtering, and the Hungarian algorithm ensures high-precision real-time pose estimation and stable target tracking, while the LOS-PID control system guarantees accurate course adjustments and reliable docking. By meeting all practical requirements for accuracy and speed, the developed method provides a novel technical pathway for autonomous USV recovery and lays a foundation for the broader application of LiDAR in marine intelligent systems. This approach offers significant value in enhancing the operational efficiency and safety of USVs in complex environments.