Computer Science ›› 2023, Vol. 50 ›› Issue (8): 280-285. doi: 10.11896/jsjkx.221100124
ZHOU Fengfan1, LING Hefei1, ZHANG Jinyuan2, XIA Ziwei1, SHI Yuxuan1, LI Ping1
Abstract: A Facial Physical Adversarial Attack (FPAA) is an attack in which the attacker attaches or wears a physical adversarial example, such as printed eyeglasses or paper patches, so that under a camera the face is recognized as a specific target, or so that the face recognition system fails to recognize the face at all. Existing FPAA performance evaluation is affected by many environmental factors and requires multiple manual steps, making it very inefficient. To reduce the workload of evaluating facial physical adversarial examples, a Multimodal Feature Fusion Prediction algorithm (MFFP) is proposed, which exploits the multimodal relationship between digital images and environmental factors. Specifically, separate networks extract features from the attacker's face image, the victim's face image, and the digital facial adversarial example image; an environment-feature network extracts features from the environmental factors; and a multimodal feature fusion network then fuses these features. The output of the fusion network is the predicted cosine similarity between the facial physical adversarial example image and the victim image. In experiments with unseen environments and unseen FPAA algorithms, MFFP achieves a regression mean-squared error of 0.003, outperforming the compared algorithms. This verifies the accuracy of MFFP in predicting FPAA performance and shows that it enables fast FPAA performance evaluation while greatly reducing manual work.
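The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the random-projection "feature extractors", the 4-dimensional environment vector, all layer sizes, and the toy measured/predicted similarity values are assumptions invented for illustration; a real system would use face-recognition networks for the image branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, proj):
    """Stand-in feature extractor: flatten the image and apply a fixed
    random projection (a real system would use a face-recognition CNN)."""
    return np.tanh(image.ravel() @ proj)

def fusion_head(features, w1, b1, w2, b2):
    """Two-layer MLP fusion network; the final tanh keeps the predicted
    cosine similarity in [-1, 1]."""
    h = np.maximum(features @ w1 + b1, 0.0)  # ReLU hidden layer
    return float(np.tanh(h @ w2 + b2))

# Hypothetical inputs: attacker face, victim face, and digital adversarial
# example (8x8 grayscale toys), plus a 4-dim environment vector standing in
# for factors such as illumination, camera distance, and angle.
attacker = rng.random((8, 8))
victim = rng.random((8, 8))
adv_example = rng.random((8, 8))
env = rng.random(4)

d = 16  # feature dimension per modality (assumed)
projs = [rng.standard_normal((64, d)) for _ in range(3)]
env_proj = rng.standard_normal((4, d))

# One feature branch per modality, then concatenation for fusion.
fused_in = np.concatenate([
    extract_features(attacker, projs[0]),
    extract_features(victim, projs[1]),
    extract_features(adv_example, projs[2]),
    np.tanh(env @ env_proj),  # environment-feature branch
])

w1 = rng.standard_normal((4 * d, 32)) * 0.1
b1 = np.zeros(32)
w2 = rng.standard_normal(32) * 0.1
b2 = 0.0

pred_cos = fusion_head(fused_in, w1, b1, w2, b2)

# Evaluation as in the abstract: regression mean-squared error between
# predicted and physically measured cosine similarities (toy values).
measured = np.array([0.41, 0.55, 0.32])
predicted = np.array([0.40, 0.52, 0.35])
mse = float(np.mean((predicted - measured) ** 2))
print(round(mse, 6))  # a small MSE indicates accurate FPAA performance prediction
```

With trained weights in place of the random ones, the fusion head's scalar output replaces a full physical re-capture of the attack, which is what removes the manual evaluation steps.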