
Point-Cloud-Based Recognition and Pose Estimation of Automobile Bumpers for a Painting Robot

[Abstract]: With the advent and steady improvement of depth cameras, acquiring the three-dimensional information of objects has become convenient and fast. Point clouds are an important representation of that 3D information, and point-cloud-based computer vision has developed rapidly in recent years. In many fields, 3D vision plays a role that planar information cannot replace, extending machine vision into new areas of application. This thesis studies the recognition and pose estimation of automobile bumpers by a painting robot. The robot's dependence on 3D information makes 3D vision necessary, and the use of point clouds makes it possible for the painting robot to recognize parts automatically, which is also of significance for the further development of computer vision. The thesis proposes a 3D point cloud recognition and pose estimation scheme consisting of three parts: point cloud processing and segmentation, point cloud recognition, and pose estimation. First, the acquisition device is chosen: a Kinect serves as the robot's vision hardware, and complete full-view point clouds of each bumper are acquired manually. For the point clouds obtained at each stage of the experiments, obvious noise points are removed with pass-through filtering and statistical outlier removal, and further sparsifying (downsampling) filters produce a point density suitable for subsequent processing. To meet the needs of the experiments, a feature-point-differentiated filtering method is also proposed, which keeps a higher point density around Thrift feature points and a sparser density away from them; comparative experiments verify its effect. Single-view acquisitions are then simulated from the full-view clouds, the Viewpoint Feature Histogram (VFH) descriptor of each simulated single-view cloud is computed, and these descriptors are used to train an SVM (Support Vector Machine) classifier. In the recognition and pose-estimation stage, the filtered single-view point cloud is segmented by minimum-Euclidean-distance clustering, the VFH descriptor of each resulting cluster is extracted, and the trained SVM classifier classifies these VFH features. Pose estimation is then performed and compared using two approaches: nearest-neighbour search in a kd-tree (k-dimensional tree) and a BP (Back Propagation) neural network. In both the recognition and pose-estimation parts, comparative experiments with and without PCA (Principal Component Analysis) dimensionality reduction are also carried out. The experimental results show that the point cloud preprocessing, segmentation, recognition, and pose estimation designed in this thesis are feasible, complete recognition and pose estimation more quickly, and are of considerable research value.
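The preprocessing pipeline summarized above (pass-through filtering, statistical outlier removal, then downsampling to a workable density) can be illustrated with a short sketch. The abstract does not name an implementation library, so this example uses Open3D as a stand-in; the file name, depth range, and filter parameters are placeholders rather than the thesis's actual settings.

```python
# Minimal preprocessing sketch: pass-through filter on depth, statistical
# outlier removal, then voxel-grid downsampling. Library choice (Open3D),
# file name, and all thresholds are illustrative assumptions.
import numpy as np
import open3d as o3d


def preprocess(path: str) -> o3d.geometry.PointCloud:
    pcd = o3d.io.read_point_cloud(path)  # e.g. a Kinect capture saved as .pcd/.ply

    # Pass-through filter: keep only points whose depth (z) lies in a working range.
    pts = np.asarray(pcd.points)
    keep = np.flatnonzero((pts[:, 2] > 0.3) & (pts[:, 2] < 2.0))  # metres, hypothetical limits
    pcd = pcd.select_by_index(keep.tolist())

    # Statistical outlier removal: drop points whose mean neighbour distance
    # deviates strongly from the global average.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Downsample to a density suitable for subsequent feature extraction.
    return pcd.voxel_down_sample(voxel_size=0.005)  # 5 mm voxels, placeholder


if __name__ == "__main__":
    print(preprocess("bumper_scan.pcd"))  # hypothetical input file
```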
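The recognition step trains an SVM on Viewpoint Feature Histogram descriptors and compares variants with and without PCA dimensionality reduction. The sketch below assumes the 308-dimensional VFH vectors of the simulated single-view clouds have already been extracted and saved to disk; the file names, PCA dimension, and SVM hyper-parameters are illustrative assumptions, not the thesis's configuration.

```python
# SVM training on precomputed VFH descriptors, with and without PCA, using
# scikit-learn. Input files, PCA dimension, and SVM hyper-parameters are
# hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("vfh_features.npy")  # (n_views, 308): one VFH descriptor per simulated view
y = np.load("labels.npy")        # (n_views,): bumper model label for each view

models = {
    "raw VFH + SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)),
    "PCA + SVM":     make_pipeline(StandardScaler(), PCA(n_components=64),
                                   SVC(kernel="rbf", C=10.0)),
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name}: training accuracy = {model.score(X, y):.3f}")
```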
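Finally, the two pose-estimation routes that the abstract compares, nearest-neighbour lookup in a kd-tree of template descriptors with known viewpoints and a BP (back-propagation) neural network, can be sketched as follows. The 6-vector pose representation, the file names, and the network size are assumptions made only for illustration.

```python
# Two pose-estimation routes over the same precomputed VFH descriptors:
# (1) kd-tree nearest neighbour returning the pose of the closest template view,
# (2) a back-propagation (multi-layer perceptron) regressor predicting the pose.
# Data files, pose parameterisation, and network size are hypothetical.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X_train = np.load("vfh_features.npy")  # (n_views, 308) template descriptors
poses   = np.load("view_poses.npy")    # (n_views, 6) translation + Euler angles per view
x_query = np.load("segment_vfh.npy")   # (308,) descriptor of one segmented cluster

# Route 1: kd-tree lookup -- adopt the pose of the nearest template view.
tree = cKDTree(X_train)
_, nearest = tree.query(x_query, k=1)
pose_kdtree = poses[nearest]

# Route 2: BP neural network -- regress the pose directly from the descriptor.
bp = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0))
bp.fit(X_train, poses)
pose_bp = bp.predict(x_query.reshape(1, -1))[0]

print("kd-tree estimate:", pose_kdtree)
print("BP network estimate:", pose_bp)
```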
[Degree-granting institution]: Harbin Institute of Technology
[Degree level]: Master's
[Year conferred]: 2017
[CLC classification]: TP391.41; TP242



