Abstract:
Objective To address the problem of accurate, real-time pose acquisition during the autonomous recovery of an unmanned surface vehicle (USV), a STag marker-based visual guidance method for USV docking and recovery is proposed.
Methods Because STag markers provide stable attitude (pose) estimates, they are selected as the fiducial markers for visual guidance in this work. STag markers are detected in the video stream captured by the camera on the USV and, combined with the camera's intrinsic parameters and the known marker size, the EPnP and direct linear transformation (DLT) algorithms are fused to compute the relative pose between the recovery device and the USV. Amplitude-limiting filtering and first-order low-pass filtering are then applied to obtain the lateral offset and heading deviation required for line-of-sight (LOS) docking guidance.
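A minimal sketch of the pose-estimation and filtering steps described above is given below; it is not the authors' implementation. It assumes OpenCV's EPnP solver, a square marker of known edge length, and placeholder camera intrinsics and filter constants (MARKER_SIZE, K, DIST, max_step, alpha are all illustrative values).

```python
# Minimal sketch (assumed, not the paper's code): EPnP pose estimation from a
# detected fiducial marker, followed by amplitude-limiting and first-order
# low-pass filtering of a measured quantity (e.g. lateral offset or heading
# deviation). Marker size, intrinsics, and filter constants are placeholders.
import cv2
import numpy as np

MARKER_SIZE = 0.30                    # assumed marker edge length in metres
K = np.array([[800.0,   0.0, 320.0],  # assumed camera intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)                    # assume negligible lens distortion

# 3-D corner coordinates of a square marker centred at the origin (marker frame)
half = MARKER_SIZE / 2.0
OBJ_PTS = np.array([[-half,  half, 0.0],
                    [ half,  half, 0.0],
                    [ half, -half, 0.0],
                    [-half, -half, 0.0]])

def marker_pose(img_corners):
    """Estimate the marker's pose relative to the camera from its four image
    corners (pixel coordinates, same order as OBJ_PTS) using EPnP."""
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, img_corners, K, DIST,
                                  flags=cv2.SOLVEPNP_EPNP)
    return (rvec, tvec) if ok else None

def amplitude_limit(value, prev, max_step):
    """Amplitude-limiting filter: reject measurements that jump more than
    max_step from the previous accepted value."""
    if prev is not None and abs(value - prev) > max_step:
        return prev
    return value

def low_pass(value, prev, alpha=0.2):
    """First-order low-pass filter: y_k = alpha*x_k + (1 - alpha)*y_{k-1}."""
    return value if prev is None else alpha * value + (1.0 - alpha) * prev
```

In practice, the filtered lateral offset and heading deviation computed from the estimated pose would then be fed to the LOS docking guidance law.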
Results In the static performance test, the average angular error of target detection is 6.85° and the average distance error is 0.056 m. In the lake test of guided autonomous recovery, both static and dynamic docking accuracies are within ±0.5 m.
Conclusion Compared with traditional USV docking and recovery methods, the STag marker-based visual guidance method improves the terminal accuracy of autonomous USV docking and raises the overall success rate of docking and recovery.