Computer Science ›› 2020, Vol. 47 ›› Issue (4): 178-183.doi: 10.11896/jsjkx.190600149

• Artificial Intelligence •

Truncated Gaussian Distance-based Self-attention Mechanism for Natural Language Inference

ZHANG Peng-fei, LI Guan-yu, JIA Cai-yan   

  1. School of Computer and Information Technology,Beijing Jiaotong University,Beijing 100044,China
  2. Beijing Key Lab of Traffic Data Analysis and Mining,Beijing Jiaotong University,Beijing 100044,China
  • Received:2019-06-26 Online:2020-04-15 Published:2020-04-15
  • Contact: JIA Cai-yan,born in 1976,professor.Her main research interests include data mining,social computing and natural language processing.
  • About author:ZHANG Peng-fei,born in 1995,postgraduate.His main research interests include natural language inference and rumor detection.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61876016) and the Fundamental Research Funds for the Central Universities (2019JBZ110).

Abstract: In natural language inference, attention mechanisms have attracted considerable interest because they effectively capture the importance of words in context and thereby improve inference performance. Transformer, a deep feedforward model built solely on attention mechanisms, not only achieves state-of-the-art machine translation performance with far fewer parameters and much less training time, but also yields remarkable results on tasks such as natural language inference (Gaussian-Transformer) and word representation learning (BERT), and Gaussian-Transformer has become one of the strongest methods for natural language inference. However, the Gaussian prior distribution used in Gaussian-Transformer to weight the positional importance of words, while greatly emphasizing adjacent words, drives the weights of non-neighboring words rapidly toward zero, so the influence of distant words that play an important role in the current word's representation vanishes as the distance grows. Therefore, this paper proposes a position weighting method for natural language inference based on a self-attention mechanism with a clipped (truncated) Gaussian distance distribution. The method not only highlights the importance of neighboring words but also preserves the contribution of non-neighboring words that are important to the current word's representation. Experimental results on the natural language inference benchmark datasets SNLI and MultiNLI confirm the effectiveness of the clipped Gaussian distance distribution in the self-attention mechanism for capturing the relative positional information of words in sentences.
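The abstract does not spell out the exact formulation, so the following is only a minimal sketch of the idea, not the authors' implementation: it contrasts a plain Gaussian distance prior, whose weights decay toward zero for distant word pairs, with a clipped (truncated) variant that keeps a non-zero floor, inside a single-head scaled dot-product self-attention layer. All names (gaussian_mask, clipped_gaussian_mask, self_attention), the floor value, and the choice to apply the prior multiplicatively to the attention weights are assumptions made for illustration.

```python
# Illustrative sketch only: a plain Gaussian distance prior vs. a clipped
# (truncated) variant inside a minimal single-head self-attention layer.
# Function names and the multiplicative use of the prior are assumptions,
# not the formulation from the paper.
import numpy as np

def gaussian_mask(n, sigma=1.0):
    """Gaussian prior over relative distances |i - j|: the weight of
    far-apart positions decays rapidly toward zero."""
    pos = np.arange(n)
    dist = np.abs(pos[:, None] - pos[None, :])
    return np.exp(-dist ** 2 / (2.0 * sigma ** 2))

def clipped_gaussian_mask(n, sigma=1.0, floor=0.1):
    """Clipped/truncated variant: the Gaussian weight is not allowed to
    fall below a floor, so non-neighboring words keep non-zero weight."""
    return np.maximum(gaussian_mask(n, sigma), floor)

def self_attention(x, mask):
    """Single-head scaled dot-product self-attention (queries = keys =
    values = x) with a positional prior applied to the attention weights."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    weights = weights * mask                      # apply the distance prior
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(6, 8))                   # 6 tokens, embedding dim 8
    plain = self_attention(x, gaussian_mask(6, sigma=1.0))
    clipped = self_attention(x, clipped_gaussian_mask(6, sigma=1.0, floor=0.1))
    print(plain.shape, clipped.shape)             # both (6, 8)
```

With the plain mask, a word five or six positions away contributes almost nothing to the attended representation; with the clipped mask its contribution is bounded below, which is the behavior the abstract argues for. The sigma and floor values here are arbitrary placeholders, not tuned settings from the paper.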

Key words: Clipped Gaussian distance mask, Distance mask, Natural language inference, Self-attention mechanism

CLC Number: TP181

[1]HERMANN K M,KOCISKY T,GREFENSTETTE E,et al.Teaching machines to read and comprehend[C]//Neural Information Processing Systems.2015:1693-1701.
[2]DU X,SHAO J,CARDIE C,et al.Learning to Ask:Neural Question Generation for Reading Comprehension[C]//Meeting of the Association for Computational Linguistics.2017:1342-1352.
[3]LAN W,XU W.Neural Network Models for Paraphrase Identification,Semantic Textual Similarity,Natural Language Inference,and Question Answering[C]//International Conference on Computational Linguistics.2018:3890-3902.
[4]VASWANI A,SHAZEER N,PARMAR N,et al.Attention is All you Need[C]//Neural Information Processing Systems.2017:5998-6008.
[5]HE K,ZHANG X,REN S,et al.Deep Residual Learning for Image Recognition[C]//Computer Vision and Pattern Recognition.2016:770-778.
[6]DEVLIN J,CHANG M,LEE K,et al.BERT:Pre-training of Deep Bidirectional Transformers for Language Understanding[C]//North American Chapter of the Association for Computational Linguistics.2019:4171-4186.
[7]SHEN T,ZHOU T,LONG G,et al.DiSAN:Directional Self-Attention Network for RNN/CNN-Free Language Understanding[C]//National Conference on Artificial Intelligence.2018:5446-5455.
[8]BOWMAN S R,ANGELI G,POTTS C,et al.A large annotated corpus for learning natural language inference[C]//Empirical Methods in Natural Language Processing.2015:632-642.
[9]IM J,CHO S.Distance-based Self-Attention Network for Natural Language Inference[J].arXiv:1712.02047,2017.
[10]GUO M,ZHANG Y,LIU T,et al.Gaussian Transformer:a Lightweight Approach for Natural Language Inference[C]//National Conference on Artificial Intelligence.2019:6489-6496.
[11]KLAMBAUER G,UNTERTHINER T,MAYR A,et al.Self-Normalizing Neural Networks[C]//Neural Information Processing Systems.2017:971-980.
[12]PENNINGTON J,SOCHER R,MANNING C D,et al.Glove:Global Vectors for Word Representation[C]//Empirical Methods in Natural Language Processing.2014:1532-1543.
[13]MIKOLOV T,CHEN K,CORRADO G S,et al.Efficient Estimation of Word Representations in Vector Space[J].arXiv:1301.3781,2013.
[14]BOJANOWSKI P,GRAVE E,JOULIN A,et al.Enriching Word Vectors with Subword Information[J].Transactions of the Association for Computational Linguistics,2017,5(1):135-146.
[15]CHEN Q,ZHU X,LING Z,et al.Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference[C]//Workshop on Evaluating Vector Space Representations for NLP.2017:36-40.
[16]WILLIAMS A,NANGIA N,BOWMAN S R,et al.A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference[C]//North American Chapter of the Association for Computational Linguistics.2018:1112-1122.
[17]KINGMA D P,BA J.Adam:A Method for Stochastic Optimization[J].arXiv:1412.6980,2014.
[18]ZEILER M D.ADADELTA:An Adaptive Learning Rate Method[J].arXiv:1212.5701,2012.
[19]SRIVASTAVA N,HINTON G E,KRIZHEVSKY A,et al.Dropout:a simple way to prevent neural networks from overfitting[J].Journal of Machine Learning Research,2014,15(1):1929-1958.
[20]ABADI M,AGARWAL A,BARHAM P,et al.TensorFlow:Large-Scale Machine Learning on Heterogeneous Distributed Systems[J].arXiv:1603.04467,2016.