Computer Science, 2015, Vol. 42, Issue (1): 220-226. doi: 10.11896/j.issn.1002-137X.2015.01.049

• Artificial Intelligence •

A Method for Learning Action Models Described in Action Language B by Combining ILP and ASP

LIU Zhen, ZHANG Zhi-zheng

  1. School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
  • Online: 2018-11-14  Published: 2018-11-14
  • Fund support:
    This work was supported by the National Natural Science Foundation of China (60803061), the Natural Science Foundation of Jiangsu Province (BK2008293), and the Science and Technology Foundation of Southeast University (XJ2008315).

Learning Action Models Described in Action Language B by Combining ILP and ASP

LIU Zhen and ZHANG Zhi-zheng   

  • Online:2018-11-14 Published:2018-11-14

Abstract: Action model learning enables an agent to adapt proactively to changes in a dynamic environment, thereby improving its autonomy. It also provides a preliminary model for modeling a dynamic domain, laying the groundwork for later refinement and revision of the model. By combining Inductive Logic Programming (ILP) and Answer Set Programming (ASP), we designed an algorithm for learning action models described in action language B. The algorithm can learn in dynamic domains of mixed scale, and its effectiveness was verified on classical planning instances.
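For readers unfamiliar with the formalism, the following is a minimal sketch (an illustration, not taken from the paper) of what an action description in action language B can express, for a hypothetical one-switch domain, together with a naive Python rendering of its direct-effect semantics. The fluent and action names are illustrative assumptions, and static laws are omitted for brevity.

# A minimal sketch of a B-style action description for a hypothetical
# one-switch domain, plus a naive rendering of its direct-effect semantics.
# Fluent/action names are assumptions; static laws are omitted.

# Dynamic causal laws, each read as "action causes effect if preconditions";
# a literal written "-f" stands for the negation of fluent f.
DYNAMIC_LAWS = [
    ("toggle", "on",  ["-on"]),   # toggle causes on  if -on
    ("toggle", "-on", ["on"]),    # toggle causes -on if on
]

def holds(literal, state):
    """'f' holds iff f is in the state; '-f' holds iff f is not in the state."""
    if literal.startswith("-"):
        return literal[1:] not in state
    return literal in state

def successor(state, action):
    """Apply direct effects of `action`; unaffected fluents persist (inertia)."""
    new_state = set(state)
    for act, effect, preconditions in DYNAMIC_LAWS:
        if act == action and all(holds(p, state) for p in preconditions):
            if effect.startswith("-"):
                new_state.discard(effect[1:])
            else:
                new_state.add(effect)
    return frozenset(new_state)

if __name__ == "__main__":
    s0 = frozenset()                  # the light is off
    s1 = successor(s0, "toggle")      # frozenset({'on'})
    s2 = successor(s1, "toggle")      # frozenset()
    print(s1, s2)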

Key words: Action model learning, Action language B, Inductive logic programming, Answer set programming

Abstract: Action model learning is beneficial to autonomous and automated systems. If an agent can update its action model according to changes that occur in the environment, it can adapt to the world better and operate more effectively. At the same time, action model learning provides dynamic-domain modeling with an initial rough model, which is the foundation for further improvement and modification. We designed an algorithm for learning action models described in language B by combining ILP and ASP. The algorithm can work on dynamic domains consisting of objects of different scales. In the experiments, we tested the learning algorithm on classical planning cases and verified its effectiveness.
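As an illustration of the ASP side of such a learner, the sketch below (assuming the clingo Python package is available; the rules and predicate names are our own illustrative choices, not the paper's encoding) renders the toy dynamic laws above as ASP rules and checks whether a candidate action model is consistent with one observed state transition.

# A minimal sketch of an ASP consistency check, assuming the clingo Python
# package (https://potassco.org/clingo/) is installed. The encoding below is
# illustrative, not the paper's: candidate dynamic laws plus inertia are
# tested against one observed transition by checking for an answer set.

import clingo

PROGRAM = """
time(0..1).
fluent(on).

% candidate rendering of "toggle causes on if -on" and "toggle causes -on if on"
holds(on, T+1)  :- occurs(toggle, T), not holds(on, T), time(T), time(T+1).
-holds(on, T+1) :- occurs(toggle, T), holds(on, T),     time(T), time(T+1).

% inertia: fluent values persist unless changed
holds(F, T+1)  :- holds(F, T),  not -holds(F, T+1), fluent(F), time(T), time(T+1).
-holds(F, T+1) :- -holds(F, T), not holds(F, T+1),  fluent(F), time(T), time(T+1).

% one observed transition: the light was off, toggle occurred, the light is on
-holds(on, 0).
occurs(toggle, 0).
:- not holds(on, 1).
"""

def candidate_is_consistent(program: str) -> bool:
    """Return True iff the ASP program has at least one answer set."""
    ctl = clingo.Control(["0"])
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    return ctl.solve().satisfiable

if __name__ == "__main__":
    print(candidate_is_consistent(PROGRAM))   # expected output: True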

Key words: Action model learning,Action language B,Inductive logic programming,Answer set programming

