Adaptive Threshold Segmentation of Cotton Field Canopy Images with Complex Backgrounds Based on a Logistic Regression Algorithm
Published: 2018-03-30 00:38
Topic: algorithm  Focus: cotton  Source: 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering), 2017, Issue 12
【Abstract】: Canopy coverage is an important indicator for monitoring cotton growth in the field. To address the difficulty of accurately segmenting canopy images in the complex environment of cotton fields, this paper proposes an adaptive threshold segmentation method for cotton field canopy images with complex backgrounds, based on a logistic regression algorithm. First, the pixels of a cotton canopy image are divided into two classes, leaf canopy and ground background; the H-channel values of the two classes are extracted in HSV color space, and the green ratio (G/(G+R+B)) is extracted in RGB color space, as color features. A logistic regression algorithm is then used to determine the segmentation threshold for each color feature, and an initial segmentation is performed with the H-channel threshold. The low-brightness pixels in the initial result are further segmented with the excess-green threshold computed by the logistic regression algorithm, while the green-ratio threshold is applied to both high-brightness and low-brightness pixels to perform a secondary segmentation of the whole image; finally, morphological filtering is used to refine the segmentation result. To evaluate the method, 320 cotton field canopy images collected from cotton-growing areas in Xinjiang were used in experiments. The results show that the method can effectively segment the canopy region against the complex natural background of cotton fields, with an average relative target area error of only 5.46% and an overall average matching rate of 93.07%. It outperforms the excess-green OTSU segmentation method (average relative target area error 11.78%, overall average matching rate 76.43%), the four-component segmentation method (24.11%, 71.67%), and the saliency-based segmentation method (36.92%, 66.92%). The average processing time of the proposed method is 4.63 s, longer than the excess-green OTSU method (3.84 s) and the four-component method (2.56 s), but shorter than the saliency-based method (6.25 s). The results provide an effective approach for monitoring cotton coverage with machine vision in the complex natural environment of cotton fields.
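The pipeline described in the abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical reconstruction (using OpenCV and scikit-learn, which the paper does not name): a one-feature logistic regression is fitted on labelled canopy/background pixel samples and converted into a scalar cut-off at the 0.5-probability point, and the thresholds are then applied in the order the abstract outlines. The threshold directions, the brightness cut-off `v_low`, and all function names are assumptions for illustration, not values or code from the paper.

```python
# Minimal sketch of the thresholding idea, not the authors' implementation.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def lr_threshold(feature_values, labels):
    """Fit a 1-D logistic regression on labelled pixel samples and return the
    scalar decision threshold (where the predicted probability equals 0.5)."""
    X = np.asarray(feature_values, dtype=np.float64).reshape(-1, 1)
    y = np.asarray(labels)
    clf = LogisticRegression().fit(X, y)
    # Decision boundary of w*x + b = 0  =>  x = -b / w
    return float(-clf.intercept_[0] / clf.coef_[0][0])

def segment_canopy(bgr, t_h, t_exg, t_ratio, v_low=80):
    """Apply the thresholds in the order sketched in the abstract:
    H-channel first pass, excess-green for low-brightness pixels,
    green-ratio second pass, then morphological clean-up."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, v = hsv[..., 0].astype(np.float64), hsv[..., 2]
    b, g, r = [bgr[..., i].astype(np.float64) for i in range(3)]

    mask = h > t_h                            # first pass: hue threshold (direction assumed)
    low = v < v_low                           # low-brightness pixels (v_low is a guess)
    exg = 2 * g - r - b                       # excess-green feature
    mask[low] = exg[low] > t_exg              # re-decide dark pixels with the ExG threshold
    ratio = g / np.maximum(g + r + b, 1e-6)   # green ratio G/(G+R+B)
    mask &= ratio > t_ratio                   # secondary segmentation over the whole mask

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```

Collapsing the fitted classifier to the point where its probability crosses 0.5 turns the logistic regression into a simple per-feature threshold, which is one plausible reading of how "adaptive thresholds" could be derived here.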
【Author Affiliations】: School of Information Engineering, Ningxia University; College of Information Science and Technology, Shihezi University; Wujiaqu City Agriculture Bureau, Xinjiang
【Funding】: National Natural Science Foundation of China (31460317)
【Classification Codes】: S562; TP391.41
Document ID: 1683622
Link: https://www.wllwen.com/kejilunwen/ruanjiangongchenglunwen/1683622.html