LiDAR-based autonomous recovery guidance method for unmanned surface vehicles

  • Abstract:
    Objectives To address the problem of acquiring, in real time, the relative pose between the mothership and the unmanned surface vehicle (USV) during autonomous recovery, a LiDAR-based autonomous recovery guidance method for USVs is proposed.
    Methods First, the PointPillars deep learning algorithm performs real-time 3D object detection of the USV to obtain the relative pose between the mothership and the USV. Next, a target tracking framework built on Kalman filtering and the Hungarian algorithm filters noise from the pose information and temporally associates successive detection results, ensuring stable output of the motion state of the USV to be recovered. Finally, the line-of-sight (LOS) guidance algorithm computes the heading deviation used to drive the USV through the recovery maneuver.
    Results In the adaptability evaluation of the 3D object detection algorithm, the position detection error of the USV was 0.071 2 m and the heading detection error was 1.518°; in the autonomous recovery simulation experiments, all centering errors were below 0.6 m.
    Conclusions The study verifies the feasibility of the recovery guidance method based on a single LiDAR and provides a new approach to autonomous recovery guidance for USVs.
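The pseudo-image step of the PointPillars detector used above can be illustrated with a minimal sketch: point features inside each pillar are max-pooled, and the resulting pillar vectors are scattered back onto the discretized x-y grid for a 2D CNN backbone. This is an illustrative simplification (dense Python loop, made-up names and shapes), not the paper's implementation:

```python
import numpy as np

def encode_pillar(point_feats):
    """Max-pool the augmented point features within one pillar
    (PointNet-style symmetric aggregation), yielding one vector per pillar."""
    return point_feats.max(axis=0)

def pillars_to_pseudo_image(pillar_features, pillar_xy_idx, grid_w, grid_h):
    """Scatter per-pillar feature vectors onto the discretized x-y grid,
    producing a (C, H, W) pseudo-image; empty pillars stay zero."""
    C = pillar_features.shape[1]
    canvas = np.zeros((C, grid_h, grid_w), dtype=pillar_features.dtype)
    for feat, (ix, iy) in zip(pillar_features, pillar_xy_idx):
        canvas[:, iy, ix] = feat
    return canvas
```

In the actual network the scatter is a batched tensor operation and the pooled features come from a learned per-point linear layer; the sketch only shows the data flow.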

     

    Abstract:
    Objectives  Unmanned surface vehicles (USVs) have emerged as indispensable intelligent platforms in marine operations, widely used for tasks such as ocean monitoring, environmental surveys, and resource exploration. However, their operational capabilities are limited by size and payload, requiring deployment and recovery via motherships. Traditional manual remote-controlled recovery methods face critical limitations: they are ill-suited for high-speed and rough sea conditions, demand highly skilled operators, exhibit low efficiency, and pose significant safety risks. These shortcomings severely impair the operational efficiency of USVs in complex marine environments. Moreover, existing vision-based positioning technologies for autonomous recovery are heavily dependent on artificial markers and vulnerable to environmental disturbances such as strong light reflection, high winds, waves, and obstructions, leading to reduced robustness and potential detection failures. To address these challenges—specifically the real-time, accurate acquisition of relative pose (position and attitude) between the mothership and USV during recovery, and the over-reliance on artificial markers—this study proposes a LiDAR-based autonomous recovery guidance method for USVs. The primary goal is to enhance the environmental adaptability, positioning accuracy, and operational safety of USV recovery systems, providing a reliable technical solution for autonomous USV recovery.
    Methods  The proposed method adopts a three-stage technical framework to achieve precise and stable autonomous recovery. In the first stage (3D object detection), the PointPillars deep learning algorithm is employed to process real-time point cloud data acquired by LiDAR mounted on the mothership. PointPillars converts irregular point cloud data into regular 2D feature maps using a feature encoding network: discretizing the xy plane into uniform grids (pillars), augmenting point features to a fixed dimension, and generating pseudo-images via max-pooling and spatial mapping. A backbone network then extracts multi-scale features through convolution and deconvolution operations, while an SSD (single shot multibox detector)-based detection head outputs the USV's 7-dimensional pose information: center coordinates (x, y, z), dimensions (width w, length l, height h), and heading angle (φ). In the second stage (target tracking), to address inevitable false detections and missed detections in 3D object detection (which could cause incorrect guidance or target loss), a target tracking framework integrating Kalman filtering and the Hungarian algorithm is developed. A constant-velocity motion model is used to predict the USV's state in the next frame via Kalman filtering. The Hungarian algorithm, combined with 3D-IoU (intersection over union), constructs a cost matrix to establish temporal associations between predicted states and detection results. Kalman filtering then updates the target state using matched detection results to filter out noise, while lifecycle management is applied to unmatched detections and tracks to ensure continuous and stable tracking. In the third stage (control), the line-of-sight (LOS) guidance algorithm calculates the desired heading angle, which is then fed into a PID (proportional-integral-derivative) controller. The controller converts the heading deviation into propeller control signals, adjusting the USV's course to approach and align with the stern slide.
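The predict-associate step of the second stage can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a 2D constant-velocity state, an axis-aligned bird's-eye-view IoU in place of rotated 3D-IoU, SciPy's `linear_sum_assignment` as the Hungarian solver, and an illustrative association threshold:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cv_predict(state, dt=0.1):
    """Constant-velocity prediction step; state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)

def bev_iou(a, b):
    """Axis-aligned bird's-eye-view IoU for boxes (x, y, w, l) —
    a simplification of the rotated 3D-IoU used in the paper."""
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_threshold=0.1):
    """Hungarian assignment on a (1 - IoU) cost matrix; pairs whose IoU
    falls below the threshold are rejected and handed to lifecycle
    management as unmatched tracks/detections."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.array([[1.0 - bev_iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches, um_t, um_d = [], set(range(len(tracks))), set(range(len(detections)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_threshold:
            matches.append((r, c))
            um_t.discard(r)
            um_d.discard(c)
    return matches, sorted(um_t), sorted(um_d)
```

Matched pairs would then feed the Kalman update, while unmatched detections spawn tentative tracks and unmatched tracks age toward deletion.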
    Results  Two sets of experiments were conducted to validate the performance of the proposed method. In the adaptability evaluation of the 3D object detection algorithm, a dedicated USV point cloud dataset was constructed, comprising 3 291 frames collected from Yuji Lake in Wuhan. Evaluation metrics showed an average precision (AP) of 83.69%, a position detection error of only 0.071 2 m, and a heading angle detection error of 1.518°. These results confirm the high accuracy and stability of PointPillars in USV pose estimation. In the USV autonomous recovery simulation experiment, real-time kinematic (RTK) technology was used to provide centimeter-level true position and heading data for comparison. Eight sets of experiments were performed, with the USV starting from different initial positions and headings. The recovery success rate reached 100%, with all centering errors less than 0.6 m (half the width of the USV). Additionally, the average processing time per frame of point cloud data was approximately 40 ms, meeting real-time operational requirements.
    Conclusions  The experiments comprehensively validate the feasibility and effectiveness of the LiDAR-only autonomous recovery guidance method for USVs. Compared with traditional vision-based methods, this approach eliminates reliance on artificial markers and exhibits superior environmental adaptability, overcoming challenges such as light interference and obstructions. The integration of PointPillars, Kalman filtering, and the Hungarian algorithm ensures high-precision real-time pose estimation and stable target tracking, while the LOS-PID control system guarantees accurate course adjustments and reliable docking. By meeting all practical requirements for accuracy and speed, the developed method provides a novel technical pathway for autonomous USV recovery and lays a foundation for the broader application of LiDAR in marine intelligent systems. This approach offers significant value in enhancing the operational efficiency and safety of USVs in complex environments.
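The LOS-PID control loop summarized above can be sketched as follows. This is a minimal sketch assuming a straight approach path toward the stern slide and a fixed lookahead distance; the gains, the lookahead value, and the function names are illustrative assumptions, and heading wrap-around handling is omitted for brevity:

```python
import math

def los_heading(usv_x, usv_y, path_x, path_y, path_angle, lookahead=5.0):
    """LOS guidance: desired heading toward a point a fixed lookahead
    distance ahead of the USV's projection onto the approach path."""
    # Signed cross-track error relative to the path through (path_x, path_y)
    dx, dy = usv_x - path_x, usv_y - path_y
    e = -dx * math.sin(path_angle) + dy * math.cos(path_angle)
    # Desired heading = path angle corrected by the LOS angle
    return path_angle + math.atan2(-e, lookahead)

def pid_step(heading_err, state, kp=1.2, ki=0.05, kd=0.4, dt=0.1):
    """One PID update converting heading deviation into a steering
    command; state carries (integral, previous error) between calls."""
    integral, prev_err = state
    integral += heading_err * dt
    deriv = (heading_err - prev_err) / dt
    u = kp * heading_err + ki * integral + kd * deriv
    return u, (integral, heading_err)
```

At each cycle the tracker's pose estimate yields the heading deviation (desired minus measured heading), and the PID output is mapped to differential propeller commands.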
