Computer Science, 2024, Vol. 51, Issue (11A): 240100162-7. doi: 10.11896/jsjkx.240100162
张峰
ZHANG Feng
Abstract: In the inspection of liquid crystal displays (LCDs) on industrial instruments, pixel defects are difficult to detect because the display pixels are physically small. Traditional computer vision methods are sensitive to environmental changes and require manually tuned parameters. To address these problems, a deep-learning-based LCD defect detection algorithm is designed that can identify pixel-level defects on LCD panels under limited computing resources. The main contributions are as follows: (1) to address the small number of positive samples matched for small objects during positive/negative sample assignment, an adaptive method is proposed that augments the number of positive samples according to object size; (2) to address the training difficulty caused by the low IoU of positive samples for small objects, an adaptive IoU compensation weighting method for positive samples is proposed; (3) to address the sensitivity of small datasets to hyperparameters, a classification loss function with imbalanced weights on the positive and negative cross-entropy terms is designed; (4) to address the difficulty of extracting fine-grained features of small objects, frequency-domain channel attention is introduced into the backbone network to strengthen detail feature extraction for small objects. Experimental results show that, compared with the YOLOv8 baseline, the proposed algorithm achieves an mAP_s of 63.3% on small objects, an improvement of 3.7%. Specifically, the mAP_s for small pixel defects reaches 78.8%, up 4.5%; the mAP_s for dust and impurity targets reaches 47.8%, up 3%; and the recall of pixel defects reaches 99.8%. These results validate the effectiveness of the algorithm.
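Contribution (3) describes a classification loss that weights the positive and negative cross-entropy terms unequally, but the abstract does not give its exact form. The sketch below shows one common way such an imbalance-weighted loss can be written, assuming sigmoid-based binary classification in PyTorch; the function name imbalanced_bce_loss and the weights w_pos and w_neg are illustrative placeholders, not parameters reported in the paper.

```python
import torch

def imbalanced_bce_loss(pred_logits: torch.Tensor,
                        targets: torch.Tensor,
                        w_pos: float = 2.0,
                        w_neg: float = 0.5) -> torch.Tensor:
    """Binary cross-entropy with separate weights on the positive and
    negative terms; w_pos/w_neg are illustrative, not values from the paper."""
    eps = 1e-7
    prob = torch.sigmoid(pred_logits)
    # Weight the positive (defect) term differently from the negative
    # (background) term to counter the positive/negative imbalance.
    pos_term = -w_pos * targets * torch.log(prob.clamp(min=eps))
    neg_term = -w_neg * (1.0 - targets) * torch.log((1.0 - prob).clamp(min=eps))
    return (pos_term + neg_term).mean()


# Usage example with random logits and binary targets.
logits = torch.randn(8, 3)                    # e.g. 8 anchors, 3 classes
labels = torch.randint(0, 2, (8, 3)).float()  # 1 = positive sample
loss = imbalanced_bce_loss(logits, labels)
```

In practice the two weights would be chosen (or adapted) so that the scarce positive samples of small defects are not overwhelmed by the abundant background samples; the fixed values above are only for illustration.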