Computer Science ›› 2021, Vol. 48 ›› Issue (8): 145-149. doi: 10.11896/jsjkx.200800207

• Computer Graphics & Multimedia •


Multi-Shared Attention with Global and Local Pathways for Video Question Answering

WANG Lei-quan1, HOU Wen-yan2, YUAN Shao-zu1, ZHAO Xin2, LIN Yao2, WU Chun-lei1   

  1. College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, Shandong 266555, China;
    2. College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao, Shandong 266555, China
  • Received: 2020-08-29 Revised: 2020-09-30 Published: 2021-08-10
  • Corresponding author: WANG Lei-quan (richiewlq@gmail.com)
  • About author: WANG Lei-quan, born in 1981, Ph.D, senior experimenter, is a member of China Computer Federation. His main research interests include cross-media analysis and action recognition.
  • Supported by:
    National Key Research and Development Program (2018YFC1406204) and the Fundamental Research Funds for the Central Universities (19CX05003A-11).


Abstract: Video question answering (VideoQA) is an important and challenging task toward visual understanding. However, current visual question answering (VQA) methods mainly focus on a single static image, which is distinct from the sequential visual data we face in the real world. In addition, due to the diversity of textual questions, the VideoQA task has to deal with various visual features to obtain high-quality answers. This paper presents a multi-shared attention network that exploits local and global frame-level visual information for VideoQA. Specifically, a two-pathway model is proposed to capture global and local frame-level features by sampling frames at different frame rates, and these frame-level features are used to model the temporal dynamics of the video. The two pathways are then fused by the multi-shared attention, which models the correlation between the global and local visual features through a single shared attention function, and the fused representation is combined with the textual question to infer the answer. Extensive experiments are conducted on the Tianchi VideoQA dataset to validate the effectiveness of the proposed method.
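
To make the fusion step concrete, the following Python (PyTorch-style) sketch shows one plausible reading of the multi-shared attention described above: a single question-guided attention module whose weights are reused across the global and local pathways. Everything here is an illustrative assumption rather than the paper's exact architecture: the names (SharedAttention, TwoPathwayVideoQA), the dimensions (feat_dim, q_dim, hid_dim), the concatenation-based fusion, and the linear classifier head are all hypothetical, and both pathways are assumed to emit frame features of the same dimensionality.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAttention(nn.Module):
    # One attention function reused for both pathways (the "shared" part).
    def __init__(self, feat_dim, q_dim, hid_dim=512):
        super().__init__()
        self.proj_v = nn.Linear(feat_dim, hid_dim)  # frame features -> hidden
        self.proj_q = nn.Linear(q_dim, hid_dim)     # question vector -> hidden
        self.score = nn.Linear(hid_dim, 1)          # scalar relevance per frame

    def forward(self, frames, question):
        # frames: (B, T, feat_dim); question: (B, q_dim)
        h = torch.tanh(self.proj_v(frames) + self.proj_q(question).unsqueeze(1))
        alpha = F.softmax(self.score(h), dim=1)     # (B, T, 1) weights over frames
        return (alpha * frames).sum(dim=1)          # question-attended video vector

class TwoPathwayVideoQA(nn.Module):
    def __init__(self, feat_dim=2048, q_dim=1024, num_answers=1000):
        super().__init__()
        # A single attention module serves both pathways, so global
        # (sparsely sampled) and local (densely sampled) frame features
        # are scored by the same question-conditioned function.
        self.attn = SharedAttention(feat_dim, q_dim)
        self.classifier = nn.Linear(2 * feat_dim + q_dim, num_answers)

    def forward(self, global_frames, local_frames, question):
        g = self.attn(global_frames, question)       # attend over global pathway
        l = self.attn(local_frames, question)        # attend over local pathway (same weights)
        fused = torch.cat([g, l, question], dim=-1)  # fuse both views with the question
        return self.classifier(fused)                # answer logits

# Hypothetical usage: 8 sparsely sampled vs. 32 densely sampled frames.
model = TwoPathwayVideoQA()
global_frames = torch.randn(2, 8, 2048)   # global pathway, low frame rate
local_frames = torch.randn(2, 32, 2048)   # local pathway, high frame rate
question = torch.randn(2, 1024)           # encoded question (e.g. GloVe + GRU)
logits = model(global_frames, local_frames, question)  # (2, 1000)

Because the two pathways share one attention function, frames from both streams are weighted on the same question-conditioned scale, which is one way to model the correlation between the global and local features before the answer is inferred.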

Key words: Global and local pathways, Shared attention mechanism, Video question answering

CLC Number: TP391