Computer Science (计算机科学) ›› 2025, Vol. 52 ›› Issue (6A): 240400189-7. doi: 10.11896/jsjkx.240400189

• Image Processing & Multimedia Technology •


Aircraft Landing Gear Safety Pin Detection Algorithm Based on Improved YOLOv5s

CHEN Shijia1, YE Jianyuan2, GONG Xuan1, ZENG Kang2, NI Pengcheng2   

  1 Beihang Hangzhou Innovation Research Institute, Beihang University, Hangzhou 310000, China
    2 Zhejiang Changlong Technology Aviation Maintenance High-tech Enterprise Research and Development Center, Hangzhou 311200, China
  • Online: 2025-06-16  Published: 2025-06-12
  • Corresponding author: GONG Xuan (gongxuan9497@163.com)
  • About author: CHEN Shijia, born in 1997, master (chenshijiacigar@outlook.com). His main research interests include artificial intelligence.
    GONG Xuan, born in 1973, Ph.D, associate researcher. His main research interests include multi-source heterogeneous object perception, machine learning and computer vision.
  • Supported by:
    National Natural Science Foundation of China (62122011, U21A20514) and the “Pioneer” and “Leading Goose” R&D Program of Zhejiang (2023C01030).


Abstract: The aircraft landing gear safety pin is a safety protection device that must be pulled out before takeoff to ensure flight safety. The traditional inspection method relies on manual patrols, which is inefficient and prone to safety hazards caused by human error. To address this problem, a deep learning-based object detection algorithm is applied to aircraft landing gear safety pin inspection for the first time, and the model is optimized for both light weight and detection performance so that the inspection task can be met within the available computing resources, storage resources and accuracy requirements. The improvements are built on the industrial-grade object detection model YOLOv5. For model light-weighting, MobileNetV3 is introduced as the backbone network for feature extraction, which greatly reduces the number of parameters and the computational cost while maintaining accuracy. For detection performance, a lightweight coordinate attention module is inserted to help the network locate targets more precisely and improve detection accuracy. Experimental results show that the improved YOLOv5 model performs the landing gear safety pin detection task effectively. Compared with the baseline model, mAP increases by 2.5%, F1 score increases by 1.4%, the number of parameters decreases by 50%, and GFLOPs decrease by 61%. The algorithm can serve as a reference for automatic detection of aircraft landing gear safety pins.

Key words: Landing gear safety pin, Deep learning, Target detection, YOLOv5, Coordinate attention

CLC number: TP391.41
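
The two modifications described in the abstract (a MobileNetV3 backbone for lightweight feature extraction and a coordinate attention block for better target localization) can be illustrated with a short PyTorch sketch. The class below is a minimal coordinate attention block in the spirit of the module the paper adopts, and the usage example feeds it a MobileNetV3-Small feature map from torchvision as a stand-in for the modified YOLOv5 backbone. The reduction ratio, layer choices and backbone variant are illustrative assumptions, not the authors' released code.

# A minimal sketch, assuming PyTorch and torchvision are available.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: 1-D pooling along height and width preserves positional cues."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)             # reduced channel width (assumed ratio)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        x_h = self.pool_h(x)                            # pool over width  -> (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # pool over height -> (N, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # attention along width
        return x * a_h * a_w                            # reweight the input feature map


if __name__ == "__main__":
    # Illustrative usage only: a MobileNetV3-Small feature extractor stands in for
    # the lightweight YOLOv5 backbone described in the abstract.
    from torchvision.models import mobilenet_v3_small
    backbone = mobilenet_v3_small(weights=None).features
    feat = backbone(torch.randn(1, 3, 640, 640))        # (1, 576, 20, 20)
    print(CoordinateAttention(feat.shape[1])(feat).shape)

In the paper's setting the attention block is inserted inside the detection network rather than applied to a standalone backbone output; the sketch only demonstrates the mechanism of factorized per-axis pooling followed by per-axis attention weights.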