Research on Multi-Robot Visual-Laser Simultaneous Localization and Mapping in Large-Scale Environments
Published: 2018-11-16 12:49
【Abstract】: Simultaneous Localization and Mapping (SLAM) is the core technology that enables a mobile robot to work in an unknown environment. With the rapid development of robotics, researchers have proposed many excellent SLAM algorithms, but most of them target a single robot; when the environment is large or structurally complex, a single robot cannot perform SLAM reliably. This thesis proposes a multi-robot SLAM algorithm based on laser and vision to overcome the low efficiency, limited task capacity, and weak robustness of single-robot systems. Although multi-robot SLAM can effectively address these problems, it also faces challenges that single-robot SLAM does not. First, the robots do not know their initial relative poses, so they cannot directly establish associations with one another, and there is no obvious strategy for fusing the individual maps into one complete and consistent map. Second, each robot's localization accumulates error; after map fusion these errors compound and corrupt the merged map, and eliminating their influence is another difficulty.

To address these problems, this thesis develops a multi-robot SLAM method that fuses laser and vision. Laser SLAM offers real-time performance and accurate mapping, but when the robot operates in structurally repetitive or complex environments, laser data alone easily produces false loop closures and degrades the map. Vision, in contrast, provides rich information and fast scene recognition, so a vision-assisted laser approach is adopted for loop-closure detection, eliminating the influence of each robot's accumulated error on mapping and localization. In addition, the robots exchange visual information over TCP/IP sockets and use it to establish associations with one another. When one robot travels along a trajectory that another robot has already covered, the visual scene-recognition algorithm adopted in this thesis establishes pose constraints between nodes of the robots' pose graphs. Using these nodes as bridges, random sample consensus (RANSAC) and least squares are applied to solve for the transformations between the robots' coordinate frames; these transformations map all pose graphs into a common frame, completing the pose-graph fusion. The fused pose graph is optimized with Gauss-Newton iteration to correct the accumulated error, and finally the laser data attached to each node is combined to generate a global occupancy grid map.

Visual scene recognition is used both for single-robot loop closure and for establishing inter-robot constraints. To improve its efficiency, the thesis adopts ORB features and a bag-of-visual-words model: the ORB features extracted from an image are represented by visual words, and query-table files are built from these words; because the query tables can be matched very quickly, matching efficiency improves and the scene recognition runs in real time. To validate the proposed algorithm, a multi-robot experimental platform based on Turtlebot was built and multiple experiments were carried out in different environments; the maps fused from multiple robots are consistent with the real environments, verifying the effectiveness and feasibility of the algorithm. The results can be widely applied to home service robots, logistics robots, UAVs, and related fields, and are of considerable significance for promoting robot applications in China.
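As an illustration of the scene-recognition step described in the abstract, the following is a minimal Python sketch of quantizing ORB descriptors against a visual vocabulary and comparing bag-of-words histograms. It assumes OpenCV and NumPy are available and that a binary visual vocabulary has already been trained offline; the function names and the histogram-intersection score are illustrative choices, not the thesis implementation, and the query-table retrieval layer mentioned in the abstract is omitted here.

```python
import numpy as np
import cv2  # OpenCV, assumed available for ORB extraction

def extract_orb(gray_image, n_features=500):
    """Detect ORB keypoints on a grayscale image and return binary descriptors (N x 32, uint8)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    _, descriptors = orb.detectAndCompute(gray_image, None)
    return descriptors if descriptors is not None else np.empty((0, 32), np.uint8)

def to_bow_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word (Hamming distance) and
    return an L1-normalized bag-of-words histogram.
    `vocabulary` is a (K, 32) uint8 array of visual words trained offline (an assumption)."""
    hist = np.zeros(len(vocabulary), dtype=np.float32)
    for d in descriptors:
        hamming = np.unpackbits(np.bitwise_xor(vocabulary, d), axis=1).sum(axis=1)
        hist[np.argmin(hamming)] += 1.0
    return hist / max(hist.sum(), 1.0)

def scene_similarity(h1, h2):
    """Histogram intersection in [0, 1]; a high score flags a loop-closure candidate."""
    return float(np.minimum(h1, h2).sum())
```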
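The abstract also states that the robots exchange visual information over TCP/IP sockets. Below is a minimal sketch of a length-prefixed exchange, assuming each robot sends (robot id, pose-graph node id, BoW histogram) records; the framing scheme and pickle serialization are illustrative assumptions, not the thesis protocol.

```python
import pickle
import socket
import struct

def send_record(sock: socket.socket, robot_id: int, node_id: int, bow_hist) -> None:
    """Serialize one (robot, node, BoW histogram) record and send it with a
    4-byte big-endian length prefix so the receiver can frame the TCP stream."""
    payload = pickle.dumps((robot_id, node_id, bow_hist))
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_record(sock: socket.socket):
    """Read exactly one length-prefixed record from a peer robot."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return pickle.loads(_recv_exact(sock, length))

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Keep reading until n bytes have arrived (TCP may deliver partial chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```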
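Once matching scene-recognition nodes link two robots' pose graphs, the abstract describes solving for the transformation between their coordinate frames with random sample consensus and least squares. The sketch below illustrates the 2D case: a closed-form least-squares rigid fit (SVD-based, without scale) wrapped in a simple RANSAC loop over matched node positions. The thresholds, iteration counts, and function names are illustrative assumptions; the fused pose graph would then be refined with Gauss-Newton iteration as the abstract states.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping src points onto dst points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_rigid_2d(src, dst, iters=200, inlier_thresh=0.3, seed=None):
    """Sample minimal 2-point sets, keep the transform with the most inliers,
    then refit on all inliers with least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        R, t = fit_rigid_2d(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid_2d(src[best_inliers], dst[best_inliers])
```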
【Degree-granting institution】: Harbin Institute of Technology
【Degree level】: Master's
【Year degree awarded】: 2017
【CLC number】: TP242
Article ID: 2335560
Article link: https://www.wllwen.com/kejilunwen/zidonghuakongzhilunwen/2335560.html