
Ship crack detection based on lightweight fast convolution and bidirectional weighted feature fusion network

WANG Chong, ZHU Yuhui

Citation: WANG C, ZHU Y H. Ship crack detection based on lightweight fast convolution and bidirectional weighted feature fusion network[J]. Chinese Journal of Ship Research, 2023, 19(X): 1–13. doi: 10.19693/j.issn.1673-3185.03401


doi: 10.19693/j.issn.1673-3185.03401
Funding: National Natural Science Foundation of China (52101369)
Author information:

    WANG Chong, male, born in 1988, Ph.D., professor. Research interests: advanced ship manufacturing technology. E-mail: chriswang@whut.edu.cn

    ZHU Yuhui, male, born in 1995, master's degree candidate. Research interests: intelligent ship manufacturing, artificial intelligence. E-mail: 319061@whut.edu.cn

    Corresponding author: WANG Chong

  • CLC number: U672.7

Ship crack detection based on lightweight fast convolution and bidirectional weighted feature fusion network

Creative Commons License
"Ship crack detection based on lightweight fast convolution and bidirectional weighted feature fusion network" by WANG Chong et al. is licensed under a Creative Commons Attribution 4.0 International License.
  • Abstract:  [Objectives]  Manual visual inspection and ultrasonic methods for ship crack detection are inefficient, costly and hazardous, so a deep learning-based ship crack detection method is proposed.  [Methods]  First, the standard convolutions in the YOLOv5s backbone are replaced with a lightweight convolution structure (GSConv) combined with an attention mechanism, reducing the backbone's parameter count and computational cost while improving its ability to extract crack features. Second, the C3 module in the network neck is replaced with C3_Faster, built on PConv, to increase the model's image-processing speed. Finally, a simplified bidirectional weighted feature fusion network (BiFFN) is designed to improve the feature aggregation network of the original YOLOv5s, enhancing the fusion of crack semantic and location information as well as the model's recognition accuracy and localization precision.  [Results]  Trained on original and augmented ship crack data, the proposed method achieves 94.11% detection precision and 93.50% recall, while the model's computational cost is reduced by 17.93% and its parameter count by 15.81%.  [Conclusions]  The results show that the proposed ship crack detection method based on lightweight fast convolution and a bidirectional weighted feature fusion network (MLF-YOLO) achieves a lightweight model together with high detection precision and recall, providing a reference for the development of autonomous UAV-based ship inspection.
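The simplified BiFFN described above follows the fast normalized fusion idea of EfficientDet [12]: each input feature map at a fusion node carries a learnable non-negative weight, and the weighted sum is normalized by the weight total. A minimal NumPy sketch (feature shapes and weight values are illustrative, not taken from the paper):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse equally-shaped feature maps with learnable scalar weights.

    Weights are constrained non-negative via ReLU, then the weighted sum
    is normalized by the weight total, so the output stays on the same
    scale as the inputs regardless of how the weights grow in training.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused / (w.sum() + eps)

# Example: fuse a top-down path feature with a same-level input feature.
p_td = np.ones((4, 4))    # hypothetical top-down feature map
p_in = np.zeros((4, 4))   # hypothetical same-level input feature map
fused = fast_normalized_fusion([p_td, p_in], [2.0, 1.0])
```

Here `fused` is approximately `2/3` everywhere: the first input dominates in proportion to its weight, which is the mechanism that lets the network learn how much each resolution level contributes.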
  • Figure 1. The structure of CNN

    Figure 2. GSConv structure[16]

    Figure 3. CBAM attention mechanism

    Figure 4. Channel attention module

    Figure 5. Spatial attention module

    Figure 6. At-GSConv structure

    Figure 7. PConv structure

    Figure 8. Bidirectional weighted feature fusion network

    Figure 9. MLF-YOLO model structure

    Figure 10. Data-augmented images

    Figure 11. IoU (left) and CIoU (right)

    Figure 12. Comparison of detection results between YOLOv5s and MLF-YOLO

    Figure 13. Model performance comparison

    Figure 14. Detection results of the model on the original images

    Figure 15. Model performance comparison

    Figure 16. Feature heat maps

    Figure 17. Computation of each layer of the model

    Figure 18. Image processing time of the model
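Figure 11 contrasts plain IoU with CIoU [20], which augments the overlap term with a penalty on the normalized distance between box centers and a penalty on aspect-ratio inconsistency. A minimal pure-Python sketch of the CIoU metric (box format `(x1, y1, x2, y2)` and the sample values are illustrative):

```python
import math

def ciou(box_a, box_b):
    """CIoU between two boxes, following Zheng et al. [20]:
    CIoU = IoU - rho^2/c^2 - alpha*v, where rho is the center distance,
    c the diagonal of the smallest enclosing box, and v measures
    aspect-ratio mismatch."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared center distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1)) - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / ((1 - iou) + v) if v > 0 else 0.0
    return iou - rho2 / c2 - alpha * v
```

Unlike plain IoU, which is zero for any pair of disjoint boxes, CIoU still provides a gradient signal: identical boxes score 1, while disjoint boxes score below zero in proportion to how far apart their centers are.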

    Table 1. Results of ablation experiments

    | Test | SConv | GSConv | C3_Faster | BiFPN-S | CBAM | P/%  | R/%   | GFLOPs | Best.pt (MB) |
    |------|-------|--------|-----------|---------|------|------|-------|--------|--------------|
    | 1    |       |        |           |         |      | 87.1 | 89.7  | 16.6   | 14.3         |
    | 2    |       |        |           |         |      | 88.1 | 89.9  | 15.4   | 13.5         |
    | 3    |       |        |           |         |      | 88.2 | 88.8  | 13.4   | 12.2         |
    | 4    |       |        |           |         |      | 90.5 | 86.8  | 16.6   | 14.6         |
    | 5    |       |        |           |         |      | 90.9 | 90.4  | 16.6   | 14.4         |
    | 6    |       |        |           |         |      | 87.4 | 90.0  | 15.5   | 13.6         |
    | 7    |       |        |           |         |      | 88.2 | 87.1  | 12.9   | 12.0         |
    | 8    |       |        |           |         |      | 91.7 | 92.2  | 16.6   | 14.0         |
    | 9    |       |        |           |         |      | 87.5 | 89.5  | 13.0   | 11.7         |
    | 10   |       |        |           |         |      | 88.6 | 88.1  | 13.4   | 12.2         |
    | 11   |       |        |           |         |      | 88.5 | 90.1  | 14.0   | 12.1         |
    | 12   |       |        |           |         |      | 88.3 | 89.9  | 15.4   | 13.5         |
    | 13   |       |        |           |         |      | 88.8 | 88.06 | 14.9   | 13.6         |
    | 14   |       |        |           |         |      | 87.8 | 88.7  | 13.1   | 12.9         |
    | 15   |       |        |           |         |      | 86.6 | 87.2  | 13.5   | 12.3         |
    | 16   |       |        |           |         |      | 94.1 | 93.5  | 13.6   | 12.2         |
  • [1] WANG H L, YIN C Y, LU L Y, et al. Cooperative path following control of UAV and USV cluster for maritime search and rescue[J]. Chinese Journal of Ship Research, 2022, 17(5): 157–165 (in Chinese). doi: 10.19693/j.issn.1673-3185.02916
    [2] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436–444.
    [3] LI Y T, TENG F B, XIAN J H, et al. Underwater crack pixel-wise identification and quantification for dams via lightweight semantic segmentation and transfer learning[J]. Automation in Construction, 2022, 144: 104600.
    [4] DUNG C V, ANH L D. Autonomous concrete crack detection using deep fully convolutional neural network[J]. Automation in Construction, 2019, 99: 52–58.
    [5] REN Q B, LI M C, SHEN Y, et al. Pixel-level shape segmentation and feature quantification of hydraulic concrete cracks based on digital images[J]. Journal of Hydroelectric Engineering, 2021, 40(2): 234–246 (in Chinese). doi: 10.11660/slfdxb.20210224
    [6] ZHAO X F, LI S Y. A method of crack detection based on convolutional neural networks[C]//Proceedings of the 11th International Workshop on Structural Health Monitoring. Stanford, CA, USA: DEStech Publications, 2017.
    [7] LI L F, MA W F, LI L, et al. Research on detection algorithm for bridge cracks based on deep learning[J]. Acta Automatica Sinica, 2019, 45(9): 1727–1742.
    [8] CHA Y J, CHOI W, SUH G, et al. Autonomous structural visual inspection using region‐based deep learning for detecting multiple damage types[J]. Computer‐Aided Civil and Infrastructure Engineering, 2018, 33(9): 731–747. doi: 10.1111/mice.12334
    [9] YU J Y, LI F, XUE X K, et al. Intelligent identification of bridge structural cracks based on unmanned aerial vehicle and Mask R-CNN[J]. China Journal of Highway and Transport, 2021, 34(12): 80–90 (in Chinese). doi: 10.3969/j.issn.1001-7372.2021.12.007
    [10] LIN T Y, DOLLAR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.
    [11] LIU S, QI L, QIN H, et al. Path aggregation network for instance segmentation[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 8759−8768.
    [12] TAN M, PANG R, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 10781−10790.
    [13] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[J/OL]. arXiv preprint. (2017-04-17)[2023-06-07]. https://arxiv.org/pdf/1704.04861.pdf.
    [14] ZHANG X Y, ZHOU X Y, LIN M X, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 6848−6856.
    [15] HAN K, WANG Y H, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 1580−1589.
    [16] LI H L, LI J, WEN H B, et al. Slim-neck by GSConv: a better design paradigm of detector architectures for autonomous vehicles[J/OL]. arXiv preprint. (2022-06-06)[2023-06-07]. https://arxiv.org/ftp/arxiv/papers/2206/2206.02424.pdf.
    [17] CHEN J R, KAO S H, HE H, et al. Run, don't walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, BC, Canada: IEEE, 2023.
    [18] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[J/OL]. arXiv preprint. (2015-06-08)[2023-06-07]. https://arxiv.org/pdf/1506.02640.pdf.
    [19] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV). [S.l.]: Springer, 2018: 3−19.
    [20] ZHENG Z H, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI Press, 2020, 34(7): 12993−13000.
Publication history
  • Received: 2023-06-07
  • Revised: 2023-09-30
  • Published online: 2023-10-07
