Computer Science ›› 2025, Vol. 52 ›› Issue (6A): 240400189-7. doi: 10.11896/jsjkx.240400189
CHEN Shijia1, YE Jianyuan2, GONG Xuan1, ZENG Kang2, NI Pengcheng2
Abstract: The aircraft landing gear safety pin is a flight safety device; before takeoff, the pin must be confirmed removed to guarantee safe flight. Traditional inspection of landing gear safety pins relies on manual patrols, which is inefficient and prone to human error that creates safety hazards. To address this problem, this work is the first to apply a deep-learning-based object detection algorithm to landing gear safety pin detection, optimizing the model for both light weight and detection performance so that it better satisfies the task's demands on computing resources, storage, and accuracy. The method improves on the industrial-grade object detection model YOLOv5. For model lightweighting, MobileNetV3 is introduced as the backbone for feature extraction, greatly reducing the parameter count and computational cost while preserving accuracy; for detection performance, a lightweight coordinate attention module is introduced to help the network localize targets more accurately and improve detection precision. Experimental results show that the improved YOLOv5 model performs the landing gear safety pin detection task effectively: compared with the baseline, mAP increases by 2.5%, the F1 score increases by 1.4%, the parameter count drops by 50%, and the computational cost drops by 61%. The algorithm can serve as a reference for automatic detection of aircraft landing gear safety pins.
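The abstract names the coordinate attention module as the key performance addition but gives no code. Below is a minimal PyTorch sketch of a coordinate attention block following the design of Hou et al. (CVPR 2021): the class name CoordAttention, the reduction ratio of 32, and the h-swish activation are assumptions taken from that paper, not this article's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HSwish(nn.Module):
    """h-swish activation used in MobileNetV3-style networks."""
    def forward(self, x):
        return x * F.relu6(x + 3) / 6


class CoordAttention(nn.Module):
    """Coordinate attention sketch (after Hou et al., CVPR 2021).

    Factorizes channel attention into two 1-D encodings, one along
    height and one along width, so the block captures position
    information at low cost. Hyperparameters here are assumptions,
    not the article's reported configuration.
    """
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = HSwish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.size()
        x_h = self.pool_h(x)                      # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        # Joint 1x1 transform over the concatenated directional encodings.
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)             # back to (N, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))     # height attention map
        a_w = torch.sigmoid(self.conv_w(y_w))     # width attention map
        return x * a_h * a_w                      # broadcast over W and H
```

In a YOLOv5-style detector such a block would typically be inserted after backbone or neck stages; the abstract does not specify the authors' exact placement, so where to attach it is left to the full paper.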