Computer Science ›› 2020, Vol. 47 ›› Issue (6A): 6-11.doi: 10.11896/JsJkx.191000007

• Artificial Intelligence •

Automatic Summarization Method Based on Primary and Secondary Relation Feature

ZHANG Ying, ZHANG Yi-fei, WANG Zhong-qing and WANG Hong-ling   

  1. School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
  • Published: 2020-07-07
  • About author: ZHANG Ying, born in 1994, postgraduate, is a member of China Computer Federation. Her main research interests include natural language processing.
    WANG Hong-ling, born in 1975, Ph.D, associate professor, is a member of China Computer Federation. Her main research interests include natural language processing.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61806137,61976146) and Natural Science Foundation of the Jiangsu Higher Education Institutions of China (18KJB520043).

Abstract: Automatic summarization compresses and refines a source text into a concise description while retaining the document's core ideas. Traditional methods usually consider only shallow textual semantics and neglect the guidance that structural information, such as primary-secondary relations, can provide for extracting core sentences. This paper therefore proposes an automatic summarization method based on the primary-secondary relation feature. The method uses neural networks to build a single-document extractive summarization model: a Bi-LSTM encodes both the sentence content and the primary-secondary relation information, and an LSTM then aggregates the encoded representations. Experimental results show that, compared with current mainstream single-document extractive summarization methods, the proposed method achieves significant improvements in accuracy, stability, and the ROUGE evaluation metrics.
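The abstract reports results under the ROUGE evaluation metric, which scores a candidate summary by its n-gram overlap with a reference summary. As a concrete illustration, the following is a minimal ROUGE-N sketch in plain Python; the function name `rouge_n` and the token-list interface are our own illustrative choices, not the paper's code or the official ROUGE toolkit.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Toy ROUGE-N: n-gram overlap statistics between two token lists.

    Returns (recall, precision, F1). ROUGE as originally proposed is
    recall-oriented; precision and F1 are included for completeness.
    """
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    # Counter intersection keeps the minimum count of each shared n-gram.
    overlap = sum((cand & ref).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1

# Example: 3 of the reference's 4 unigrams appear in the candidate.
rec, prec, f1 = rouge_n("the cat sat on the mat".split(),
                        "the cat sat down".split(), n=1)
# → recall = 0.75, precision = 0.5, F1 = 0.6
```

An extractive summarizer like the one described above is typically tuned to maximize such overlap scores against human-written reference summaries.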

Key words: Natural language processing, Extractive summarization, Primary and secondary relation, Neural network, LSTM

CLC Number: TP391