Computer Science ›› 2021, Vol. 48 ›› Issue (11A): 278-282. doi: 10.11896/jsjkx.210300111

• Image Processing & Multimedia Technology •

Low-light Image Enhancement Method Based on U-net++ Network

LI Hua-ji, CHENG Jiang-hua, LIU Tong, CHENG Bang, ZHAO Kang-cheng   

  1. College of Electronic Science, National University of Defense Technology, Changsha 410073, China
  • Online: 2021-11-10  Published: 2021-11-12
  • About author: LI Hua-ji, born in 1996, postgraduate. His main research interests include computer vision and intelligent information processing.
    CHENG Jiang-hua, born in 1979, Ph.D, professor, master supervisor. His main research interests include computer vision and intelligent information processing.
  • Supported by:
    Natural Science Foundation of Hunan Province (2020JJ4670).

Abstract: Low-light image enhancement is one of the most challenging tasks in computer vision. Existing algorithms suffer from problems such as uneven brightness, low contrast, color distortion and severe noise. This paper proposes a more natural low-light enhancement framework based on an improved U-net++ network. First, the low-light image is fed into the improved U-net++ network, whose dense connections between layers strengthen the correlation among image features at different levels. Second, the image features of all levels are fused and passed to a convolutional layer for detail reconstruction. Experimental results show that the proposed method not only raises image brightness but also restores the details of low-light images well, and the recovered colors are closer to those of naturally lit images. Tests on the PASCAL VOC test set show that two key indicators, structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), reach 0.87 and 26.36, which are 18.6% and 11.4% higher, respectively, than the best comparable algorithms.
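
The paper itself includes no code; the following PyTorch sketch is only an illustration, under assumed layer widths, a three-level nesting depth and a Sigmoid output, of the ingredients the abstract names: a U-net++-style backbone whose nested dense skip connections couple features across levels, fusion of the same-resolution decoder outputs, and a small convolutional head for detail reconstruction.

```python
# Minimal sketch (not the authors' released code) of the pipeline the abstract describes:
# improved U-net++ backbone with dense skip connections, per-level feature fusion,
# and a convolutional detail-reconstruction head. Widths and depth are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the usual U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNetPP(nn.Module):
    """Three-level U-Net++ with dense skip connections and a fusion/detail head."""

    def __init__(self, ch=(32, 64, 128)):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

        # Encoder column (nodes X_{i,0})
        self.x00 = conv_block(3, ch[0])
        self.x10 = conv_block(ch[0], ch[1])
        self.x20 = conv_block(ch[1], ch[2])

        # Nested decoder nodes: each sees all earlier nodes at its resolution (dense links)
        self.x01 = conv_block(ch[0] + ch[1], ch[0])
        self.x11 = conv_block(ch[1] + ch[2], ch[1])
        self.x02 = conv_block(ch[0] * 2 + ch[1], ch[0])

        # Fuse the two full-resolution outputs, then reconstruct details
        self.detail = nn.Sequential(
            nn.Conv2d(ch[0] * 2, ch[0], 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch[0], 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))

        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))

        return self.detail(torch.cat([x01, x02], dim=1))


if __name__ == "__main__":
    net = TinyUNetPP()
    out = net(torch.rand(1, 3, 128, 128))  # stand-in for a low-light image
    print(out.shape)                        # torch.Size([1, 3, 128, 128])
```

Compared with a plain U-Net, the extra nodes x01 and x02 receive every earlier feature map at their resolution, which is the dense-connection behaviour the abstract credits for correlating features across levels; the authors' actual network is presumably deeper and differs in its fusion and reconstruction details.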

Key words: Dense connection, Detail reconstruction, Low-light enhancement, U-net++ network
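
The abstract evaluates results with SSIM and PSNR on a PASCAL VOC test set. The paper's evaluation script is not given here, so the snippet below is a hedged sketch: PSNR computed directly from the mean squared error and SSIM taken from scikit-image, both on images scaled to [0, 1]. Averaging the two over pairs of enhanced and reference images would yield scores on the same footing as the 0.87 / 26.36 reported above.

```python
# Hedged sketch of SSIM/PSNR evaluation for image pairs in [0, 1] (assumed scaling).
import numpy as np
from skimage.metrics import structural_similarity as ssim


def psnr(reference: np.ndarray, enhanced: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))


# Random stand-ins for a ground-truth image and an enhanced output
ref = np.random.rand(256, 256, 3)
out = np.clip(ref + 0.02 * np.random.randn(256, 256, 3), 0.0, 1.0)
print("PSNR:", psnr(ref, out))
print("SSIM:", ssim(ref, out, channel_axis=-1, data_range=1.0))
```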

CLC Number: TP391