Computer Science ›› 2018, Vol. 45 ›› Issue (8): 50-53.doi: 10.11896/j.issn.1002-137X.2018.08.009

• ChinaMM 2017 •

Two-stage Method for Video Caption Detection and Extraction

WANG Zhi-hui1, LI Jia-tong2, XIE Si-yan2, ZHOU Jia2, LI Hao-jie1, FAN Xin1   

  1. Department of International Information and Software Technology, Dalian University of Technology, Dalian, Liaoning 116621, China;
  2. Department of Software Technology, Dalian University of Technology, Dalian, Liaoning 116621, China
  Received: 2017-10-24; Online: 2018-08-29; Published: 2018-08-29

Abstract: Video caption detection and extraction is one of the key technologies for video understanding. This paper proposed a two-stage approach that splits the task into caption frame detection and caption area detection, improving both the efficiency and the accuracy of caption detection. In the first stage, caption frames are detected and extracted. First, motion detection is performed with the gray correlation frame difference, captions are judged preliminarily, and a new binary image sequence is obtained. Then, according to the dynamic characteristics of ordinary captions and scrolling captions, the new sequence is screened twice to obtain the caption frames. In the second stage, caption areas are detected and extracted. First, the Sobel edge detection algorithm is used to locate candidate caption regions, and the background is eliminated by a height constraint. Then, horizontal and vertical captions are distinguished by their aspect ratios, so that all captions in a caption frame, including static, ordinary and scrolling captions, can be obtained. The method reduces the number of frames that need to be examined and improves caption detection efficiency by 11%. Experimental results show that the proposed method improves the F-score by about 9% compared with using the gray correlation frame difference or edge detection alone.
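The abstract describes the pipeline only at a high level. Below is a minimal sketch, in Python with OpenCV, of how such a two-stage pipeline could look; it is not the authors' implementation. The thresholds (DIFF_THRESH, MOTION_RATIO, MIN_HEIGHT, MAX_HEIGHT), the structuring-element size, and the single screening pass are illustrative assumptions, and the paper's second screening step that separates ordinary captions from scrolling captions is omitted.

    # Sketch of the two-stage idea described in the abstract (OpenCV 4 API).
    # All numeric parameters are illustrative assumptions, not values from the paper.
    import cv2

    DIFF_THRESH = 25      # assumed gray-difference threshold for motion detection
    MOTION_RATIO = 0.01   # assumed fraction of changed pixels that flags a caption frame
    MIN_HEIGHT = 10       # assumed lower bound on caption height (pixels)
    MAX_HEIGHT = 60       # assumed upper bound used to suppress background edges

    def is_caption_frame(prev_gray, curr_gray):
        """Stage 1 (sketch): gray-level frame difference as a caption-frame test."""
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, binary = cv2.threshold(diff, DIFF_THRESH, 255, cv2.THRESH_BINARY)
        changed = cv2.countNonZero(binary) / binary.size
        return changed > MOTION_RATIO, binary

    def detect_caption_areas(gray):
        """Stage 2 (sketch): Sobel edges, height constraint, aspect-ratio split."""
        # Horizontal and vertical gradients highlight the dense strokes of caption text.
        gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
        edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                                cv2.convertScaleAbs(gy), 0.5, 0)
        _, edge_bin = cv2.threshold(edges, 0, 255,
                                    cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        # Dilate horizontally so the characters of one caption line merge into a block.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
        merged = cv2.dilate(edge_bin, kernel)
        contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        horizontal, vertical = [], []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if not (MIN_HEIGHT <= h <= MAX_HEIGHT):  # height constraint removes background
                continue
            # Aspect ratio distinguishes horizontal captions from vertical ones.
            (horizontal if w >= h else vertical).append((x, y, w, h))
        return horizontal, vertical

In this sketch, Stage 1 would be run over consecutive grayscale frames so that only frames flagged by is_caption_frame are passed to detect_caption_areas, which is the source of the claimed reduction in frames that need to be examined.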

Key words: Detection and extraction, Dynamic characteristics, Gray correlation frame difference, Sobel edge detection, Video caption

CLC Number: TP391