Computer Science ›› 2024, Vol. 51 ›› Issue (6A): 230700125-5. doi: 10.11896/jsjkx.230700125

• Image Processing & Multimedia Technology •

Camera Model Identification Based on a Gaussian Enhancement Module

HUANG Yuanhang1,2, BIAN Shan1,2,3, WANG Chuntao1,2

  1 College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China
    2 Key Laboratory of Smart Agricultural Technology in Tropical South China, Ministry of Agriculture and Rural Affairs, Guangzhou 510642, China
    3 Guangdong Provincial Key Laboratory of Intelligent Information Processing & Shenzhen Key Laboratory of Media Security, Shenzhen, Guangdong 518060, China
  • Published: 2024-06-06
  • Corresponding author: BIAN Shan (bianshan@scau.edu.cn)
  • About author: (806834034@qq.com)
  • Supported by:
    Guangdong Provincial Key Laboratory of Intelligent Information Processing (2023B1212060076); National Natural Science Foundation of China (62172165); Natural Science Foundation of Guangdong Province, China (2022A1515010325); Guangzhou Basic and Applied Basic Research Project (202201010742)

Gaussian Enhancement Module for Reinforcing High-frequency Details in Camera Model Identification

HUANG Yuanhang1,2, BIAN Shan1,2,3, WANG Chuntao1,2   

  1 College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, China
    2 Key Laboratory of Smart Agricultural Technology in Tropical South China, Ministry of Agriculture and Rural Affairs, Guangzhou 510642, China
    3 Guangdong Provincial Key Laboratory of Intelligent Information Processing & Shenzhen Key Laboratory of Media Security, Shenzhen, Guangdong 518060, China
  • Published: 2024-06-06
  • About author: HUANG Yuanhang, born in 1997, postgraduate, is a member of CCF (No.C0385G). His main research interests include camera model identification and image forgery localization.
    BIAN Shan, born in 1986, Ph.D, associate professor, is a member of CCF (No.21153M). Her main research interests include video forensics and tampering detection.
  • Supported by:
    Guangdong Provincial Key Laboratory of Intelligent Information Processing (2023B1212060076), National Natural Science Foundation of China (62172165), Natural Science Foundation of Guangdong Province, China (2022A1515010325) and Guangzhou Basic and Applied Basic Research Project (202201010742).

Abstract: In multimedia forensics, the high-pass filter is one of the pre-processing layers commonly used in convolutional neural networks to suppress the influence of image content and emphasize only high-frequency features. At the same time, however, other useful information that carries forensic traces is discarded indiscriminately. To address this problem, this paper proposes a simple yet efficient Gaussian Enhancement Module (GEM) that extracts “extended” high-frequency features, i.e., enhances high-frequency detail while maintaining the original feature strength. The GEM consists of two successive one-dimensional low-pass Gaussian filters, which yield a blurred version of the feature map and, from it, the corresponding extended high-frequency residual. Using this high-frequency residual as a spatial mask, the module adaptively reinforces fragile and subtle low-level forensic features and prevents them from attenuating as they propagate through the network. Experiments on a camera model identification dataset, in which the module is inserted into several mainstream backbone networks, show that the GEM adds only a very slight amount of model complexity while significantly improving network performance and robustness, indicating that the module is “plug-and-play” and independent of any specific network architecture.

Key words: Camera model identification, Deep learning, Image forensics, High-pass filter, Gaussian enhancement

Abstract: In multimedia forensics, a high-pass filter is one of the pre-processing layers commonly used in convolutional neural networks to suppress the impact of image content and highlight only high-frequency features. However, other useful information containing forensic traces is removed indiscriminately at the same time. To address this issue, a simple yet effective Gaussian enhancement module (GEM) is proposed to extract “extended” high-frequency features, namely, to reinforce high-frequency details while maintaining the original feature strength. The GEM comprises two successive one-dimensional low-pass Gaussian filters that produce a blurred version of the feature map, from which the corresponding extended high-frequency residual is obtained. Using this residual as a spatial mask, the module adaptively strengthens fragile and subtle low-level forensic features and prevents their attenuation during feature propagation. Experiments on a camera model identification dataset, in which the module is plugged into several mainstream backbone networks, show that the GEM supports “plug and play” and is not tied to any specific network architecture: it brings a significant improvement in both the performance and the robustness of the networks at only a slight increase in model complexity.
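The abstract describes the GEM only at a high level. The following is a minimal PyTorch sketch of such a module, written to illustrate the idea rather than reproduce the authors' implementation: the depthwise separable 1-D Gaussian blur, the kernel size and sigma, and the masking rule x + x·sigmoid(x − blur(x)) are all assumptions, since the paper's exact design is not given here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel_1d(kernel_size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Normalized 1-D Gaussian kernel (kernel size and sigma are assumed values)."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    kernel = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    return kernel / kernel.sum()


class GEM(nn.Module):
    """Illustrative Gaussian Enhancement Module: blur, take the residual, use it as a mask."""

    def __init__(self, channels: int, kernel_size: int = 5, sigma: float = 1.0):
        super().__init__()
        k = gaussian_kernel_1d(kernel_size, sigma)
        # Two successive 1-D low-pass Gaussian filters (horizontal, then vertical),
        # applied depthwise so that channels are not mixed.
        self.register_buffer("k_h", k.view(1, 1, 1, -1).repeat(channels, 1, 1, 1))
        self.register_buffer("k_v", k.view(1, 1, -1, 1).repeat(channels, 1, 1, 1))
        self.channels = channels
        self.pad = kernel_size // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Blurred version of the feature map.
        blurred = F.conv2d(x, self.k_h, padding=(0, self.pad), groups=self.channels)
        blurred = F.conv2d(blurred, self.k_v, padding=(self.pad, 0), groups=self.channels)
        # "Extended" high-frequency residual.
        residual = x - blurred
        # Assumed masking rule: reinforce the input with its own high-frequency
        # residual while keeping the original feature strength.
        return x + x * torch.sigmoid(residual)


if __name__ == "__main__":
    gem = GEM(channels=64)
    feat = torch.randn(2, 64, 32, 32)
    print(gem(feat).shape)  # torch.Size([2, 64, 32, 32])
```

Because the residual passes through a sigmoid, this sketch only rescales each position to between 1x and 2x of its original value, which is one simple way to keep the original feature strength while emphasizing high-frequency detail; the published GEM may use a different masking rule.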

Key words: Camera model identification, Deep learning, Image forensics, High-pass filter, Gaussian enhancement

CLC Number:

  • TP391.41