Application of Image Processing and Multi-Sensor Information Fusion in a Pipeline Intersecting-Line Weld Inspection Robot
Published: 2019-05-16 03:25
[Abstract]: In the field of nondestructive testing (NDT), defects in pipeline intersecting-line welds are difficult to detect and inspection efficiency is low. A weld inspection robot that adheres to the steel pipe wall with magnetic wheels must carry NDT equipment along the intersecting-line weld to inspect for defects, continuously adjusting the angle and distance between the probe and the weld during inspection. An automated NDT robot for intersecting-line welds must therefore be capable of autonomous path tracking and of locating weld defects. Based on a detailed analysis of the practical requirements and field conditions of intersecting-line weld NDT, and after comparing several path-tracking and localization schemes commonly used in robotics, this thesis adopts a scheme in which the robot is visually guided by automatically recognizing a magnetic color-code strip and is accurately localized through multi-sensor information fusion. For the visual-guidance part, an image processing algorithm is first proposed: building on color space conversion and color edge detection, it applies a Hough-transform line detection algorithm accelerated by an image pyramid to rapidly identify the magnetic color-code strip and control the robot's attitude and trajectory in real time; multi-threshold processing is then introduced into a region-growing algorithm to segment artificial positioning markers, and the markers are counted to assist localization. Next, using the constraint that the robot moves along the intersecting-line weld, the robot's trajectory and attitude are modeled, and the mathematical relationship between the robot's attitude change and its trajectory during motion is derived. An odometry localization model is built from a photoelectric encoder and the positioning-marker counts, and an IMU localization model is built from the IMU system. Finally, based on the derived relationship, a Kalman filter fuses the information of the two localization models to realize robot localization through multi-sensor information fusion. The odometry model, the IMU model, and the fusion model are simulated in MATLAB, and the results and errors of the three localization models are analyzed and compared. Experiments show that the proposed image processing algorithm enables the robot to recognize the magnetic color-code strip reliably, realizing automatic path tracking, and that it accurately segments the positioning markers to assist localization. The simulation results show that Kalman-filter-based multi-sensor information fusion achieves accurate self-localization, with accuracy significantly better than using the photoelectric encoder or the IMU system alone.
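The visual-guidance step described in the abstract (pyramid-accelerated Hough line detection on a color-segmented frame) can be illustrated with a minimal Python/OpenCV sketch. Everything below is an assumption made for illustration only: the hue band of the color-code strip, the Canny and Hough parameters, and the function name detect_strip_line are not taken from the thesis.

import cv2
import numpy as np

def detect_strip_line(bgr_frame):
    """Return the dominant strip line (x1, y1, x2, y2) in full-resolution pixels, or None."""
    # One Gaussian-pyramid downsampling step cuts the Hough workload roughly fourfold.
    small = cv2.pyrDown(bgr_frame)
    # Color space conversion, then a color threshold for the strip (hue band is assumed).
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
    # Edge detection on the color mask, followed by a probabilistic Hough transform.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    # Keep the longest segment and scale it back to the original resolution.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return 2 * x1, 2 * y1, 2 * x2, 2 * y2

In such a scheme, the angle of the returned segment relative to the image axis would drive the attitude correction, and its lateral offset would drive the trajectory correction.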
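The marker-segmentation step combines multi-threshold processing with region growing. The following sketch shows only the region-growing half, assuming a grey-level acceptance interval [low, high] has already been produced by a multi-threshold step (for example an Otsu-style split); seed selection and marker counting are omitted, and the function grow_region is an illustrative name rather than the thesis's implementation.

from collections import deque
import numpy as np

def grow_region(gray, seed, low, high):
    """Segment one positioning marker by region growing within a grey-level interval.

    gray      : single-channel image as a 2-D uint8 array.
    seed      : (row, col) pixel inside the marker.
    low, high : acceptance interval from the preceding multi-threshold step (assumed).
    """
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        # Visit 4-connected neighbours whose grey level falls inside the interval.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and low <= gray[nr, nc] <= high:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

Counting the markers segmented in successive frames is what supplies the discrete position references that assist the odometry-based localization.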
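The odometry/IMU fusion can likewise be sketched as a small linear Kalman filter. The two-dimensional state, the noise variances, and the assumption that the encoder increment drives the prediction while the IMU supplies a heading measurement are illustrative simplifications, not the thesis's actual model along the intersecting line.

import numpy as np

def fuse_step(x, P, u_odo, z_imu, dt, q=1e-3, r_imu=1e-2):
    """One predict/correct cycle fusing an encoder increment with an IMU heading.

    x     : state vector [s, yaw] -- arc length along the weld and robot heading.
    P     : 2x2 state covariance.
    u_odo : encoder-derived arc-length increment over the last period dt.
    z_imu : heading angle measured by the IMU.
    The noise variances q and r_imu are placeholder values, not thesis results.
    """
    F = np.eye(2)                       # state carried over between samples
    B = np.array([[1.0], [0.0]])        # the encoder increment only advances arc length
    H = np.array([[0.0, 1.0]])          # the IMU observes heading only
    Q = q * dt * np.eye(2)              # process noise grows with the sample period
    R = np.array([[r_imu]])             # IMU measurement noise

    # Prediction driven by the odometer.
    x = F @ x + B @ np.array([u_odo])
    P = F @ P @ F.T + Q
    # Correction from the IMU heading measurement.
    y = np.array([z_imu]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

A typical loop would initialize x = np.zeros(2) and P = 0.01 * np.eye(2), then call fuse_step once per sampling period, which mirrors how the fused estimate can outperform either sensor model used alone.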
[Degree-granting institution]: 深圳大学 (Shenzhen University)
[Degree level]: Master's
[Year conferred]: 2017
[CLC classification]: TP391.41; TP242
Document No.: 2477979
Article link: https://www.wllwen.com/kejilunwen/zidonghuakongzhilunwen/2477979.html