Computer Science ›› 2023, Vol. 50 ›› Issue (11A): 230200089-6. doi: 10.11896/jsjkx.230200089

• Image Processing & Multimedia Technology •


Hue Augmentation Method for Industrial Product Surface Defect Images

LUO Yuetong1,2, LI Chao1, DUAN Chang1, ZHOU Bo1,2   

  1 School of Computer and Information, Hefei University of Technology, Hefei 230601, China
  2 Engineering Research Center of Safety Critical Industrial Measurement and Control Technology, Ministry of Education, Hefei 230009, China
  • Published: 2023-11-09
  • Corresponding author: ZHOU Bo (zhoubo810707@hfut.edu.cn)
  • About author: LUO Yuetong (ytluo@hfut.edu.cn), born in 1978, Ph.D, professor, is a member of China Computer Federation. His main research interests include image processing and scientific visualization.
    ZHOU Bo, born in 1981, Ph.D, associate professor. His main research interests include deep learning, image processing, and artificial intelligence.
  • Supported by:
    National Natural Science Foundation of China (61602146) and National Basic Research Program of China (2017YFB1402200).


Abstract: The hue distribution of industrial sampling data and the spatial distribution of defects often differ from those of the test data, which leads to poor performance of deep-learning-based defect detection models. Data augmentation based on generative adversarial networks (GAN) is a common remedy. Two GANs (HC-GAN and T-GAN) are designed to perform hue augmentation and defect location augmentation, respectively. By constructing a content consistency module and a hue control module, HC-GAN can achieve hue augmentation based on reference data without changing defect characteristics. By pairing the input and output data, T-GAN realizes defect location transfer. In addition, the two GANs can be used in tandem to achieve both hue augmentation and position transfer. Finally, hue distribution statistics and object detection tests are carried out on the generated data. The results show that the data generated by the proposed method achieve hue augmentation and position augmentation and improve the accuracy of surface defect detection for industrial products.
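The abstract names the building blocks (a content consistency module and a hue control module in HC-GAN, paired training data for T-GAN, and tandem use of the two networks) but gives no implementation detail, so the following is only a minimal PyTorch sketch of how such a cascade could be wired up. TinyGenerator, hue_statistics, and the stand-in feature extractor are hypothetical placeholders, not the paper's architecture, and the adversarial (discriminator) terms are omitted for brevity.

# Hypothetical sketch (not the paper's implementation): cascade a hue-augmentation
# generator ("HC-GAN" role) and a location-transfer generator ("T-GAN" role) as the
# abstract describes. Generators are toy conv stacks; discriminators and adversarial
# losses are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Placeholder encoder-decoder; the paper's HC-GAN/T-GAN are more elaborate."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def hue_statistics(img):
    """Crude colour statistic (per-channel means) used here as a hue proxy."""
    return img.mean(dim=(2, 3))

def content_consistency_loss(fake, real, features):
    """Match deep features so defect structure is preserved while colour changes."""
    return F.l1_loss(features(fake), features(real))

def hue_control_loss(fake, reference):
    """Pull the generated image's colour statistics toward the reference image."""
    return F.l1_loss(hue_statistics(fake), hue_statistics(reference))

# Toy usage: hue augmentation first, then defect-location transfer in tandem.
hc_gan = TinyGenerator(in_ch=6)            # source + reference image, concatenated
t_gan = TinyGenerator(in_ch=3)             # would be trained on location-shifted pairs
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())  # stand-in backbone

src = torch.rand(1, 3, 64, 64)             # sampled defect image
ref = torch.rand(1, 3, 64, 64)             # reference image with the target hue

recolored = hc_gan(torch.cat([src, ref], dim=1))
augmented = t_gan(recolored)               # tandem use of the two generators

loss = content_consistency_loss(recolored, src, features) + hue_control_loss(recolored, ref)
loss.backward()
print(augmented.shape, float(loss))

In the paper's setting, the content features would presumably come from a pretrained backbone and T-GAN would be trained on paired images that differ only in defect location, as the abstract states; the concrete modules, losses, and shapes above are assumptions made for illustration.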

Key words: GAN, Deep learning, Data augmentation, Defect detection

CLC Number: TP391.41