Computer Science ›› 2023, Vol. 50 ›› Issue (9): 90-100. doi: 10.11896/jsjkx.221200053

• Computer Software •

  • Corresponding author: YANG Shuo (yangshuo11@nudt.edu.cn)
  • First author: XIAO Huaiyu (xiaohuaiyu20@nudt.edu.cn)

Active Observation Schemes and Software Implementation Architecture of Autonomous Robot

XIAO Huaiyu1,2, YANG Shuo3, MAO Xinjun1,2   

  1 College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
    2 Key Laboratory of Software Engineering for Complex Systems, Changsha 410073, China
    3 College of Systems Engineering, National University of Defense Technology, Changsha 410073, China
  • Received: 2022-12-08 Revised: 2023-04-20 Online: 2023-09-15 Published: 2023-09-01
  • About author: XIAO Huaiyu, born in 1999, postgraduate, is a student member of China Computer Federation. His main research interest is robot software engineering.
    YANG Shuo, born in 1992, Ph.D, lecturer, is a member of China Computer Federation. His main research interests include intelligent robot software engineering, software engineering theory and methods, and robot behavior modeling and task decision-making.
  • Supported by:
    National Natural Science Foundation of China(62172426).


Abstract: Autonomous robots operate in open environments, where their perception of environmental information is limited and complete, timely information about the environment is difficult to obtain. To complete tasks effectively, an autonomous robot needs to observe the environment actively, that is, to spontaneously decide on, schedule, and execute observation behaviors according to task requirements, so as to acquire the environmental information relevant to the task. The demand for active observation poses two challenges to the observation scheme of autonomous robots and the construction of their software systems. On the one hand, to support effective task implementation, active observation schemes should be designed that ensure, at the mechanism level, that autonomous robots can observe the required environmental information based on task requirements. On the other hand, active observation schemes make the functional abstraction and data interaction of software components such as observation and decision-making more complicated, so a software architecture suitable for implementing these upper-level mechanisms needs to be designed. To address these challenges, this paper divides the behaviors of autonomous robots into task behaviors and observation behaviors. Targeting two typical scenes with limited environmental information in open environments, the one-sided observation scene and the outdated observation scene, two active observation schemes are proposed to construct a collaborative mechanism between observation behaviors and task behaviors, and decision and scheduling algorithms for observation behaviors are designed based on these two schemes. In addition, an autonomous robot software architecture based on a multi-agent system is designed to implement the proposed active observation schemes. Finally, to verify the effectiveness of the proposed schemes, a typical task in an open environment, the book transfer task of a library service robot, is selected for experimental verification. In this task, the robot's information about the locations of books is limited, which easily leads to failure of the book transfer task. The reactive observation and accompanying observation schemes, currently mainstream in the field of autonomous robots, are selected as comparison methods, and the effectiveness of the proposed method is verified by comparing the behavior execution process, motion trajectory, and time consumption.
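The two limited-information scenes named in the abstract can be illustrated with a minimal decision sketch. This is not the paper's actual algorithm; the names `Belief`, `ActiveObserver`, and `ttl` are illustrative assumptions. The idea: before executing a task behavior, the robot schedules an observation behavior for each required piece of information that it has never observed (one-sided scene) or whose last observation is too old to trust (outdated scene).

```python
import time
from dataclasses import dataclass


@dataclass
class Belief:
    """A piece of environmental information held by the robot."""
    value: object
    timestamp: float  # when the information was last observed
    ttl: float        # seconds before the belief counts as outdated

    def outdated(self, now: float) -> bool:
        return now - self.timestamp > self.ttl


class ActiveObserver:
    """Decides which observation behaviors to run before a task behavior."""

    def __init__(self, beliefs=None):
        self.beliefs = dict(beliefs or {})

    def decide(self, required_keys, now=None):
        """Return the information keys that need an observation behavior."""
        now = time.time() if now is None else now
        to_observe = []
        for key in required_keys:
            b = self.beliefs.get(key)
            if b is None:
                to_observe.append(key)   # one-sided: never observed
            elif b.outdated(now):
                to_observe.append(key)   # outdated: observation too old
        return to_observe
```

For the book transfer scenario, a book's position would be one such belief: if it is missing or stale, an observation behavior (e.g. scanning the shelf) is scheduled before the transfer behavior proceeds.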

Key words: Autonomous robot, Open environment, Observation behavior, Active observation scheme, Autonomous robot software

CLC Number: TP311