Computer Science ›› 2024, Vol. 51 ›› Issue (5): 193-199. doi: 10.11896/jsjkx.230300193

• Artificial Intelligence •

  • Corresponding author: LI Yingfang (liyf@swufe.edu.cn)
  • Author email: 302941597@qq.com

New Graph Reduction Representation and Graph Neural Network Model for Premise Selection

LAN Yongqi1, HE Xingxing1, LI Yingfang2, LI Tianrui3   

  1 School of Mathematics, Southwest Jiaotong University, Chengdu 611756, China
    2 School of Computer and Artificial Intelligence, Southwestern University of Finance and Economics, Chengdu 611130, China
    3 School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China
  • Received: 2023-03-24  Revised: 2023-08-10  Online: 2024-05-15  Published: 2024-05-08
  • About author: LAN Yongqi, born in 1998, postgraduate. Her main research interests include graph neural networks for premise selection.
    LI Yingfang, born in 1985, Ph.D, associate professor. Her main research interests include intelligent information processing, intelligent decision making and control, reasoning and problem solving, and so on.
  • Supported by:
    Fundamental Research Funds for the Central Universities of Ministry of Education of China (2682020ZT107), National Natural Science Foundation of China (62106206), Humanities and Social Sciences Project of the Ministry of Education (19YJCZH048, 20XJCZH016) and Science and Technology Program of Sichuan Province (2023YFH0066).


Abstract: The search space of an automatic theorem prover usually grows explosively as it proves problems. Premise selection offers a new approach to this problem. To address the difficulty that the logical formula graphs and graph neural network models used in existing premise selection methods have in capturing the latent information of formula graphs, this paper proposes a simplified logical formula graph representation based on removing repeated quantifiers, together with a term-walk graph neural network with an attention mechanism, which makes full use of the syntactic and semantic information of logical formulas to improve the classification accuracy of premise selection. First, first-order logic conjecture and premise formulas are transformed into simplified first-order logic formula graphs with repeated quantifiers removed. Second, a message-passing graph neural network aggregates and updates the nodes and their term-walk feature information, and an attention mechanism then assigns weights to the nodes of the graph to adjust the node embeddings. Finally, the premise graph vector and the conjecture graph vector are concatenated and fed into a binary classifier to perform premise classification. Experimental results show that the proposed method achieves accuracies of 88.61% on the MPTP dataset and 84.74% on the CNF dataset, surpassing the best existing premise selection methods.
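The final stage described in the abstract (attention-weighted pooling of node embeddings into graph vectors, concatenation of the premise and conjecture vectors, and binary classification) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the embedding dimension, the single learned attention vector, and the simple logistic readout are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # node-embedding dimension (assumed)

def attention_pool(node_embeddings: np.ndarray, attn_vector: np.ndarray) -> np.ndarray:
    """Weight each node by a softmax attention score, then sum into one graph vector."""
    scores = node_embeddings @ attn_vector          # one score per node
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over the graph's nodes
    return weights @ node_embeddings                # attention-weighted sum, shape (D,)

def classify(premise_nodes: np.ndarray, conjecture_nodes: np.ndarray,
             attn_vector: np.ndarray, w: np.ndarray, b: float) -> float:
    """Concatenate the two graph vectors and score the pair with a logistic layer."""
    g_premise = attention_pool(premise_nodes, attn_vector)
    g_conjecture = attention_pool(conjecture_nodes, attn_vector)
    pair = np.concatenate([g_premise, g_conjecture])  # shape (2*D,)
    return 1.0 / (1.0 + np.exp(-(pair @ w + b)))      # P(premise is relevant)

# Toy inputs standing in for GNN outputs on two formula graphs.
premise_nodes = rng.normal(size=(5, D))     # 5 nodes in the premise graph
conjecture_nodes = rng.normal(size=(3, D))  # 3 nodes in the conjecture graph
attn_vector = rng.normal(size=D)
w = rng.normal(size=2 * D)
p = classify(premise_nodes, conjecture_nodes, attn_vector, w, 0.0)
print(0.0 < p < 1.0)
```

In the paper's pipeline the node embeddings would come from the term-walk message-passing network rather than random draws, and the classifier would be trained; the sketch only shows how the pooled graph vectors are combined for the binary decision.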

Key words: Graph neural network, Premise selection, Attention mechanism, First-order logic formula, Graph reduction representation

CLC Number: TP391