Research on Ranging Methods for Obstacle Avoidance in Panoramic-Vision Environments
[Abstract]: In autonomous obstacle-avoidance navigation for mobile robots, visual sensors have many advantages for acquiring information about the surrounding environment: the image information is rich, and multiple vision sensors interfere little with one another when working together. Compared with the narrow field of view of conventional vision, panoramic vision offers a wide field of view that makes up for this shortcoming, and it has therefore been widely applied in autonomous robot navigation, 3D reconstruction, video surveillance and other fields. Researchers at home and abroad have studied panoramic vision extensively, but many shortcomings remain. Most panoramic cameras currently used for obstacle avoidance are single-viewpoint devices whose images are often severely distorted; a multi-view panoramic camera can capture a 360-degree view simultaneously with little distortion, yet multi-view panoramic vision has seen few applications. It is therefore of great significance to find a stable, effective and convenient ranging method for multi-view panoramic cameras.

Our laboratory has also obtained some results on the Bug obstacle-avoidance algorithm, a simple method that requires the sensor to cover a 360-degree detection range. Based on a laser rangefinder, the laboratory previously implemented a Bug algorithm with a non-360-degree detection range that can follow a smooth path around an obstacle to the goal. However, because the environmental information is incomplete, the robot must turn frequently while avoiding obstacles, which lowers the efficiency of obstacle avoidance. To address these problems, this thesis replaces the laser rangefinder with a 360-degree vision sensor, proposes a panoramic ranging algorithm based on the Ladybug3 panoramic camera platform, and studies how to realize ranging for panoramic obstacle avoidance. The work comprises the following parts.

(1) The fundamentals of panoramic camera ranging are studied. Camera calibration: after comparing several calibration methods, Zhang Zhengyou's calibration method is adopted. Ranging-image preprocessing: histogram equalization and median filtering are used to enhance image contrast and remove noise. Stereo matching: several matching methods are analyzed, and the coordinates of feature pairs are extracted with an improved SURF matching method to improve matching robustness.

(2) The principle of mono-binocular fusion panoramic ranging is expounded. First, how to determine whether an obstacle lies in an overlapping or a non-overlapping region of the views is discussed. Second, a binocular ranging mechanism is adopted for the overlapping region and a monocular ranging mechanism for the non-overlapping region. Third, binocular ranging is realized using the perspective principle, and the ranging formula is derived in detail. Fourth, monocular ranging is realized by nonlinear regression modelling, and the procedure for building the nonlinear regression model is described.
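As a rough illustration of the preprocessing and matching steps described in part (1), the sketch below shows histogram equalization, median filtering and SURF-based feature matching with OpenCV. It is a minimal sketch under stated assumptions, not the thesis's actual implementation: the file names, median-filter kernel size, SURF threshold and ratio-test value are assumptions, the thesis's "improved SURF" modifications are not reproduced, and SURF itself requires an opencv-contrib build (cv2.xfeatures2d).

```python
# Minimal sketch (assumed parameters) of ranging-image preprocessing and
# SURF feature matching with OpenCV; not the thesis's improved SURF method.
import cv2

def preprocess(path):
    """Load a ranging image, enhance contrast, and suppress noise."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.equalizeHist(img)        # histogram equalization
    img = cv2.medianBlur(img, 5)       # median filter, 5x5 kernel (assumed size)
    return img

def match_surf(img_left, img_right, ratio=0.7):
    """Detect SURF keypoints in two views and keep matches passing the ratio test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(img_left, None)
    kp2, des2 = surf.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for cand in matcher.knnMatch(des1, des2, k=2):
        if len(cand) == 2 and cand[0].distance < ratio * cand[1].distance:
            m = cand[0]
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs                        # coordinates of matched feature pairs

if __name__ == "__main__":
    left = preprocess("left.png")       # hypothetical file names
    right = preprocess("right.png")
    print(len(match_surf(left, right)), "feature pairs")
```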
(3) Mono-binocular ranging experiments are completed. The school playground is chosen as the test site, and the calibration, ranging-image preprocessing, stereo-matching, binocular ranging and monocular ranging experiments are carried out; the results and errors of each are compared and analyzed. The experiments show that the binocular ranging error lies between 1.08% and 4.48%, with the maximum error of 4.48% occurring at 4 m. The monocular ranging error ranges from 0.27% to 12.57%, with the maximum error of 12.57% occurring at 1.4 m; this error varies randomly rather than growing with distance, but it is clearly larger than the binocular ranging error. These errors are within an acceptable range, so the method can be used for obstacle avoidance.
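For context, the sketch below illustrates the two ranging mechanisms in their simplest textbook form: the standard parallel-stereo depth-from-disparity relation Z = f·B/d for the overlapping (binocular) region, and a nonlinear regression fit mapping an image measurement to distance for the non-overlapping (monocular) region. The thesis derives its binocular formula from the perspective principle for the Ladybug3 geometry and builds its own regression model; the power-law model form, the sample data and all parameter values here are assumptions for illustration only.

```python
# Simplified sketch of the two ranging mechanisms; parameter values and the
# regression model form are assumptions, not taken from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def binocular_depth(focal_px, baseline_m, disparity_px):
    """Parallel-stereo relation Z = f * B / d (depth grows as disparity shrinks)."""
    return focal_px * baseline_m / disparity_px

def mono_model(pixel_offset, a, b, c):
    """Assumed nonlinear regression form mapping an image measurement to distance."""
    return a * np.power(pixel_offset, b) + c

# Hypothetical calibration samples: (image measurement in px, measured distance in m)
offsets = np.array([40.0, 60.0, 90.0, 140.0, 220.0])
dists   = np.array([5.0, 3.4, 2.3, 1.5, 1.0])

params, _ = curve_fit(mono_model, offsets, dists, p0=(100.0, -1.0, 0.0))

print("binocular Z:", binocular_depth(800.0, 0.12, 25.0), "m")   # assumed f, B, d
print("monocular Z at 120 px:", mono_model(120.0, *params), "m")
```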
[Degree-granting institution]: South China Agricultural University
[Degree level]: Master
[Year of degree conferral]: 2016
[CLC classification number]: TP391.41; TP242