Computer Science ›› 2024, Vol. 51 ›› Issue (5): 92-99. doi: 10.11896/jsjkx.231100067

• Computer Graphics & Multimedia •

  • Corresponding author: SONG Jianfeng(jfsong@mail.xidian.edu.cn)

Multi-stage Intelligent Color Restoration Algorithm for Black-and-White Movies

SONG Jianfeng, ZHANG Wenying, HAN Lu, HU Guozheng, MIAO Qiguang   

  1. School of Computer Science and Technology,Xidian University,Xi'an 710071,China
  • Received:2023-11-10 Revised:2024-03-25 Online:2024-05-15 Published:2024-05-08
  • About author:SONG Jianfeng,born in 1978,associate professor.His main research interests include computer vision and deep learning.
  • Supported by:
    Continuing Education Teaching Reform Research Program of Xidian University(JA2301) and National Natural Science Foundation of China(62272364).



Abstract: In colorizing black-and-white movies, existing automatic colorization models produce only a single result, while reference-based colorization methods require users to specify reference images, whose stringent quality requirements demand substantial human effort. To address these problems, this paper proposes a multi-stage intelligent color restoration algorithm for black-and-white movies (MSICRA). First, the movie is split into multiple scene segments using a VGG19 network. Second, each scene segment is cut frame by frame, and the edge intensity and grayscale difference of each frame are used as clarity criteria to select the images whose clarity falls in the interval [0.95,1] within each scene. Next, the first of the selected images is colorized with different render factor values; the results are evaluated by saturation, and an appropriate render factor is chosen to colorize the selected images. Finally, the mean squared error between the pre- and post-colorization images is used to select the best-colorized image as the reference for the scene segment. Experimental results show that the proposed algorithm improves PSNR by 1.32% and 2.15%, and SSIM by 1.84% and 1.04%, on the black-and-white films Lei Feng and The Eternal Wave, respectively. The algorithm not only enables fully automatic colorization but also produces realistic colors that align with human perception.
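Read as a pipeline, the frame-filtering and reference-selection stages summarized above can be sketched in plain Python. The abstract only names the ingredients (edge intensity, grayscale difference, the [0.95,1] interval, MSE), so the exact clarity formula, the per-scene normalization, and all function names here (`clarity_score`, `filter_scene_frames`) are illustrative assumptions, not the paper's implementation:

```python
import math

def clarity_score(gray):
    """Clarity proxy for one grayscale frame: mean edge intensity
    (forward-difference gradient magnitude) scaled by the grayscale
    spread (max - min). The product combination is an assumption."""
    h, w = len(gray), len(gray[0])
    total = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = gray[y][x + 1] - gray[y][x]   # horizontal difference
            gy = gray[y + 1][x] - gray[y][x]   # vertical difference
            total += math.hypot(gx, gy)
    edge_intensity = total / ((h - 1) * (w - 1))
    spread = max(map(max, gray)) - min(map(min, gray))
    return edge_intensity * spread

def filter_scene_frames(frames, lo=0.95, hi=1.0):
    """Keep frames whose clarity, normalized by the sharpest frame in
    the scene, falls in [lo, hi] -- the [0.95,1] interval from the
    paper. Assumes the scene contains at least one non-flat frame."""
    scores = [clarity_score(f) for f in frames]
    best = max(scores)
    return [f for f, s in zip(frames, scores) if lo <= s / best <= hi]

def mse(a, b):
    """Mean squared error between two equally sized grayscale frames,
    as used in the last stage to pick the reference frame."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n
```

For example, a high-contrast frame passes the filter while a uniform (blurred-out) frame from the same scene is rejected, and `mse` can then rank the colorized candidates against their grayscale originals.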

Key words: Deep learning, Auto-colorization, Scene segmentation, Clarity

CLC number: TP315

References
[1]LEI C Y,CHEN Q F.Fully Automatic Video Colorization with Self-Regularization and Diversity[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.Long Beach,USA,2019:3753-3761.
[2]LIU Y,ZHAO H,CHAN K C K,et al.Temporally consistent video colorization with deep feature propagation and self-regularization learning[J].arXiv:2110.04562,2021.
[3]ZHAO Y,PO L M,YU W Y,et al.VCGAN:video colorization with hybrid generative adversarial network[J].IEEE Transactions on Multimedia,2022,25:3017-3032.
[4]ZHAO Y,PO L M,CHEUNG K W,et al.SCGAN:Saliency Map-guided Colorization with Generative Adversarial Network[J].IEEE Transactions on Circuits and Systems for Video Technology,2020,31(8):3062-3077.
[5]AKIMOTO N,HAYAKAWA A,SHIN A,et al.Reference-based video colorization with spatiotemporal correspondence[J].arXiv:2011.12528,2020.
[6]ZHANG B,HE M M,LIAO J,et al.Deep Exemplar-based Video Colorization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.Long Beach,USA,2019:8052-8061.
[7]SHI M,ZHANG J Q,CHEN S Y,et al.Reference-based deep line art video colorization[J].IEEE Transactions on Visualization & Computer Graphics,2023,29(6):2965-2979.
[8]HE M M,CHEN D D,LIAO J,et al.Deep exemplar-based colorization[J].ACM Transactions on Graphics(TOG),2018,37(4):1-16.
[9]WAN Z Y,ZHANG B,CHEN D D,et al.Bringing Old Films Back to Life[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.New Orleans,USA,2022:17694-17703.
[10]JI X,JIANG B,LUO D,et al.ColorFormer:Image colorization via color memory assisted hybrid-attention transformer[C]//European Conference on Computer Vision.Tel-Aviv,Israel,2022:20-36.
[11]ANTIC J.DeOldify[EB/OL].(2018-10-31)[2021-12-10].https://github.com/jantic/DeOldify.
[12]SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[J].arXiv:1409.1556,2014.
[13]WANG X M,ZHANG S Y,ZHANG J,et al.Contour Recon-struction Method for Noisy Image Based on Depth Residual Learning[J].Journal of Xidian University,2020,47(3):66-71.
[14]VASWANI A,SHAZEER N,PARMAR N,et al.Attention Is All You Need[C]//Proceedings of the Thirty-first Conference on Neural Information Processing Systems.Long Beach,USA,2017:1-11.
[15]RONNEBERGER O,FISCHER P,BROX T.U-net:Convolutional networks for biomedical image segmentation[C]//Medical Image Computing and Computer-Assisted Intervention(MICCAI 2015).Munich,Germany,2015:234-241.
[16]HE K M,ZHANG X Y,REN S Q,et al.Deep Residual Learning for Image Recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Las Vegas,USA,2016:770-778.
[17]JIANG G Y,HUANG D J,WANG X,et al.Overview on Image Quality Assessment Methods[J].Journal of Electronics & Information Technology,2010,32(1):219-226.