Research on Kinect-Based Visual SLAM for Mobile Robots (基于Kinect的移动机器人视觉SLAM研究)
[Abstract]: Intelligent mobile robots must navigate and localize autonomously in complex environments, and simultaneous localization and mapping (SLAM) is the prerequisite and key to fully autonomous motion. Visual SLAM has attracted wide attention because cameras are inexpensive, provide rich information, and yield features that are easy to extract; since the Kinect camera can conveniently and quickly capture RGB-D information about the environment, it is widely used in visual SLAM. Mainstream RGB-D visual SLAM systems consist of an image-processing front end and a pose-optimization back end. Addressing the real-time performance of visual SLAM, this thesis focuses on the front end, analyzes the steps that dominate the system's runtime, and proposes improvements. Because the efficiency of front-end image processing directly affects the real-time performance of the whole SLAM system, the thesis introduces an optical flow method to track the motion of feature points across images rapidly, compares it with traditional feature matching, and proposes a scheme that combines optical flow with feature matching. In motion estimation, optical flow is used to estimate the robot's motion in real time; to eliminate the error accumulated during motion estimation, loop-closure detection based on feature matching is used to add constraints between robot poses. In addition, to improve the efficiency of the subsequent pose optimization, the loop detection combines a local-loop and a random-loop search strategy. In the back end, the pose vertices and constraints obtained from motion estimation and loop detection form a pose graph, and the robot poses are globally optimized with g2o. Experiments on a benchmark dataset compare the optical flow method with feature matching: while preserving the localization accuracy of the SLAM system, the proposed approach improves runtime efficiency by 28.5% over traditional feature matching, effectively improving the real-time performance of the visual SLAM system. Finally, online experiments in a real indoor scene show that the system can estimate the robot's trajectory and build a three-dimensional map of the scene in real time.
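The thesis code is not reproduced on this page; as a minimal sketch of the kind of front-end feature tracking the abstract describes, the snippet below tracks sparse feature points between two consecutive frames with OpenCV's pyramidal Lucas-Kanade optical flow (cv2.calcOpticalFlowPyrLK). The synthetic frames, the Shi-Tomasi corner detector (cv2.goodFeaturesToTrack), and all parameter values are illustrative assumptions, not the thesis's exact choices.

```python
# Sketch of front-end feature tracking with pyramidal Lucas-Kanade optical flow.
# Two synthetic grayscale frames stand in for consecutive Kinect RGB images;
# the second frame is the first shifted by a few pixels (simulated camera motion).
import cv2
import numpy as np

prev_gray = np.zeros((480, 640), dtype=np.uint8)
for x, y in [(100, 100), (300, 200), (500, 350)]:
    cv2.rectangle(prev_gray, (x, y), (x + 40, y + 40), 255, -1)
curr_gray = np.roll(prev_gray, shift=(3, 5), axis=(0, 1))

# Detect Shi-Tomasi corners in the previous frame as the points to track.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)

# Track the same points into the current frame instead of re-detecting and
# re-matching descriptors in every frame.
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, curr_gray, prev_pts, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

# Keep only successfully tracked points; together with the Kinect depth image,
# these correspondences feed the motion (pose) estimation.
ok = status.flatten() == 1
good_prev, good_curr = prev_pts[ok], curr_pts[ok]
print(f"tracked {len(good_curr)} of {len(prev_pts)} feature points")
```

Reusing the previous frame's points in this way, rather than extracting and matching descriptors for every frame, is the general source of the runtime saving the abstract reports.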
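The combined local-loop and random-loop candidate strategy mentioned in the abstract can likewise be sketched as follows; the window size, number of random candidates, ORB-based verification, and match threshold are assumptions for illustration only.

```python
# Sketch of loop-closure candidate selection: a few recent keyframes ("local
# loops") plus a few randomly chosen earlier ones ("random loops"), with each
# candidate verified by ORB feature matching before a pose constraint would be
# added to the pose graph.
import random
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def loop_candidates(num_keyframes, n_local=3, n_random=3):
    """Indices of earlier keyframes to test against the newest keyframe."""
    last = num_keyframes - 1
    local = list(range(max(0, last - n_local), last))    # local loops
    older = list(range(0, max(0, last - n_local)))       # pool for random loops
    return local + random.sample(older, min(n_random, len(older)))

def is_loop(img_a, img_b, min_matches=40):
    """Accept a candidate pair if enough ORB descriptors match."""
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    return len(matcher.match(des_a, des_b)) >= min_matches

# For every verified pair (candidate, newest keyframe), an additional relative
# pose constraint between the two keyframe poses would be added to the g2o
# pose graph before global optimization.
```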
【Degree-granting institution】: Nanchang University (南昌大学)
【Degree level】: Master's
【Year of degree conferral】: 2017
【CLC number】: TP391.41; TP242