Computer Science ›› 2023, Vol. 50 ›› Issue (12): 148-155.doi: 10.11896/jsjkx.230500217

• Computer Graphics & Multimedia •

Prior-guided Blind Iris Image Restoration Algorithm

WANG Jia1, XIANG Liuyu1, HUANG Yubo2, XIA Yufeng3, TIAN Qing4, HE Zhaofeng1   

  1 School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
    2 School of Integrated Circuits, Beijing University of Posts and Telecommunications, Beijing 100876, China
    3 School of Modern Post (School of Automation), Beijing University of Posts and Telecommunications, Beijing 100876, China
    4 School of Information, North China University of Technology, Beijing 100144, China
  • Received: 2023-05-29  Revised: 2023-09-13  Online: 2023-12-15  Published: 2023-12-07
  • About author: WANG Jia, born in 1996, Ph.D candidate, is a member of China Computer Federation. His main research interests include computer vision, image restoration, and iris recognition.
    HE Zhaofeng, born in 1982, Ph.D, professor, is a member of China Computer Federation. His main research interests include computer vision, biometrics, and reinforcement learning.
  • Supported by:
    National Natural Science Foundation of China (62176025, 62106015, U21B2045).

Abstract: As one of the most promising biometric technologies, iris recognition has been widely adopted across industries. However, existing iris recognition systems are easily disturbed by external factors during image acquisition, so the captured iris images often suffer from insufficient resolution and blur. To address these challenges, a prior-guided blind iris image restoration method is proposed, which combines a generative adversarial network with iris priors to recover iris images degraded by unknown mixtures of factors such as low resolution, motion blur, and out-of-focus blur. The network consists of a degradation removal sub-network, a prior estimation sub-network, and a prior fusion sub-network. The prior estimation sub-network models the distribution of the input's style information as prior knowledge to guide the generative network, while the prior fusion sub-network integrates multi-level style features through an attentive fusion mechanism, improving the utilization of the prior information. Experimental results show that the proposed method outperforms other methods on both qualitative and quantitative metrics, achieves blind restoration of degraded iris images, and improves the robustness of iris recognition.
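The abstract describes a three-stage generator: degradation removal, style-based prior estimation, and attentive prior fusion. The sketch below is a minimal, hypothetical PyTorch rendering of that layout and is not the authors' implementation: every module name, channel width, the grayscale 128x128 input, and the channel-attention form of the fusion are assumptions made purely for illustration.

```python
# Minimal sketch of the three-sub-network layout described in the abstract.
# All sizes, names, and the attention formulation are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    """Plain conv + norm + activation shared by the sub-networks below."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class DegradationRemoval(nn.Module):
    """Maps the unknown-degraded iris image to a coarse clean feature map."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, ch), conv_block(ch, ch), conv_block(ch, ch))

    def forward(self, x):
        return self.net(x)


class PriorEstimation(nn.Module):
    """Summarises the input's style statistics (channel-wise mean/std) as a prior vector."""
    def __init__(self, ch=64, prior_dim=128):
        super().__init__()
        self.proj = nn.Linear(2 * ch, prior_dim)

    def forward(self, feat):
        mu = feat.mean(dim=(2, 3))                        # per-channel mean
        sigma = feat.std(dim=(2, 3))                      # per-channel std
        return self.proj(torch.cat([mu, sigma], dim=1))   # style prior


class AttentiveFusion(nn.Module):
    """Re-weights feature channels with attention derived from the style prior."""
    def __init__(self, ch=64, prior_dim=128):
        super().__init__()
        self.to_attn = nn.Sequential(nn.Linear(prior_dim, ch), nn.Sigmoid())
        self.out = nn.Sequential(conv_block(ch, ch), nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh())

    def forward(self, feat, prior):
        attn = self.to_attn(prior).unsqueeze(-1).unsqueeze(-1)  # B x C x 1 x 1
        return self.out(feat * attn)


class PriorGuidedRestorer(nn.Module):
    """Degradation removal -> prior estimation -> attentive prior fusion."""
    def __init__(self):
        super().__init__()
        self.remove = DegradationRemoval()
        self.prior = PriorEstimation()
        self.fuse = AttentiveFusion()

    def forward(self, lq_iris):
        feat = self.remove(lq_iris)
        prior = self.prior(feat)
        return self.fuse(feat, prior)


if __name__ == "__main__":
    model = PriorGuidedRestorer()
    restored = model(torch.randn(2, 1, 128, 128))  # batch of degraded iris crops
    print(restored.shape)                          # torch.Size([2, 1, 128, 128])
```

Summarising the style prior as channel-wise mean/std statistics mirrors how style information is commonly captured in style-based generators; the paper's actual prior estimation and multi-level attentive fusion are richer than this single-level, single-scale sketch.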

Key words: Iris restoration, Iris recognition, Iris segmentation, Style information, Attentive fusion

CLC Number: TP391