Research on Automatic Semantic Annotation Methods for Remote Sensing Images
Topic: remote sensing images + automatic annotation; Source: Master's thesis, Shanghai Ocean University, 2017
【Abstract】: As remote sensing technology advances toward higher resolution and wider coverage, the volume of remote sensing image data keeps growing, and management and understanding capabilities that keep pace with the acquisition rate are urgently needed. Automatic semantic annotation is the key to managing and understanding large-scale remote sensing image data: deriving semantic words for images automatically helps users grasp image content intuitively and manage massive image collections efficiently. Existing annotation methods face three challenges. (1) Remote sensing images have complex spatial structure and rich geographic feature information, yet many studies rely on a single feature and therefore achieve low annotation accuracy. Feature fusion helps express image content accurately, but not every feature dimension is strongly correlated with annotation accuracy, and the weakly correlated dimensions degrade it. (2) The more features are fused, the higher the dimensionality of the feature data. As data volumes grow, traditional semantic annotation methods struggle to mine the regularities in massive high-dimensional feature data; low-level features cannot accurately reflect high-level semantic concepts, which constrains annotation accuracy. (3) Ocean remote sensing images, a typical class of remote sensing imagery, exhibit pronounced sparsity of target information: in large-scale ocean scenes the key information often occupies only a small fraction of the image, and the semantic concepts carried by the objects of interest change with the observation scale. Traditional annotation methods neither express the content of ocean remote sensing images accurately nor deliver acceptable annotation performance.

This thesis addresses the low accuracy of automatic semantic annotation caused by the complex structure of remote sensing images. The work consists of three parts. (1) Because a single feature cannot describe image content accurately, multiple features are fused to represent the image; since naive fusion ignores the effect of weakly correlated features on annotation accuracy, a weighted feature fusion annotation method is proposed. Without segmenting the image, color features are extracted with color moments in HSV space, texture features with the gray-level co-occurrence matrix, and shape features with the scale-invariant feature transform. The stability of each feature dimension is judged by its standard deviation over the images of each annotation word, and a corresponding weight coefficient is computed from that stability, so that color, texture, and shape are considered jointly and the roles of strongly and weakly correlated features in the representation are adjusted accordingly. Annotation experiments with a support vector machine on public remote sensing datasets show higher accuracy than single-feature methods. (2) As more features are fused, feature dimensionality rises and annotation accuracy falls, so a deep-learning annotation model that takes the high-dimensional fused features as input is constructed. Its first layer is an improved restricted Boltzmann machine adapted to the optimally weighted high-dimensional visual features, and the remaining layers are standard restricted Boltzmann machines; layer-by-layer transformations extract features from low level to high level and uncover the distributed structure of the data, raising annotation accuracy. Comparative experiments against a traditional neural network and the weighted-fusion method show that the multi-feature deep-learning method is more accurate. (3) Because the sparsity of target information in ocean remote sensing images limits annotation accuracy, a deep-learning, multi-instance annotation model for ocean remote sensing images is proposed. Exploiting the multi-scale nature of ocean imagery, the wavelet transform yields representations of the image at different resolutions, the background and object regions are partitioned at coarse granularity, and each scale layer is represented by multiple instances. Similarities between instances within the same scale are computed and thresholded to complete an adaptive segmentation; the instances of each layer are then fed into the deep-learning model to annotate new images, and the co-occurrence and opposition relations between annotation words are computed quantitatively to refine the annotations. Experiments verify that the proposed method markedly improves the semantic annotation accuracy of ocean remote sensing images. (Illustrative code sketches of the main steps follow this abstract.)
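A minimal Python sketch of the first contribution's weighting step, under the assumption that each dimension's weight is taken inversely proportional to its standard deviation within each annotation word (the abstract says the standard deviation measures stability but does not give the exact formula). `color_moments_hsv` illustrates only the color part of the fused descriptor; OpenCV and scikit-learn are assumed as the toolchain.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def color_moments_hsv(img_bgr):
    """First three colour moments (mean, std, skewness) per HSV channel -> 9-D vector."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    feats = []
    for c in range(3):
        ch = hsv[:, :, c].ravel()
        mean, std = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())   # signed cube root of the 3rd central moment
        feats += [mean, std, skew]
    return np.array(feats)

def stability_weights(X, y):
    """Weight each feature dimension by its stability: dimensions with a small
    standard deviation within the images of an annotation word count as stable
    and receive a larger weight (inverse-std rule, an illustrative assumption)."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])
    for label in np.unique(y):
        std = X[y == label].std(axis=0) + 1e-8   # avoid division by zero
        w += 1.0 / std
    return w / w.sum()                           # normalise so the weights sum to 1

# Hypothetical usage: X fuses the colour, texture (co-occurrence) and shape (SIFT)
# descriptors per image, y holds the annotation words.
# w = stability_weights(X, y)
# clf = SVC(kernel="rbf").fit(X * w, y)          # weighted fused features into an SVM
```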
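For the second contribution, the sketch below stacks restricted Boltzmann machines in front of a classifier. It is only an approximation of the thesis's model: scikit-learn provides a Bernoulli RBM that expects inputs in [0, 1], so min-max scaling stands in for the "improved" first-layer RBM adapted to real-valued fused features, and logistic regression stands in for the final annotation layer.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline

def build_rbm_annotator(n_hidden1=256, n_hidden2=128):
    """Stacked RBMs over the weighted fused features, ending in a classifier."""
    return Pipeline([
        ("scale", MinMaxScaler()),                            # BernoulliRBM expects inputs in [0, 1]
        ("rbm1", BernoulliRBM(n_components=n_hidden1,
                              learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=n_hidden2,
                              learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),           # predicts the annotation word
    ])

# Hypothetical usage with the weighted fused features X_w and annotation words y:
# model = build_rbm_annotator().fit(X_w, y)
# predicted_words = model.predict(X_w_new)
```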
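The third contribution's multi-scale, multi-instance representation can be sketched with PyWavelets: the approximation band at each wavelet level is tiled into instances, and tiles whose similarity exceeds a threshold are grouped, a stand-in for the adaptive segmentation into background and object regions. The tile size, similarity measure, and threshold below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
import pywt

def multiscale_instances(gray, levels=3, tile=16):
    """Tile the wavelet approximation band at each scale into flattened instances."""
    bags = {}
    current = np.asarray(gray, dtype=np.float64)
    for scale in range(1, levels + 1):
        current, _ = pywt.dwt2(current, "haar")      # keep only the approximation band
        h, w = current.shape
        bags[scale] = [current[i:i + tile, j:j + tile].ravel()
                       for i in range(0, h - tile + 1, tile)
                       for j in range(0, w - tile + 1, tile)]
    return bags

def group_instances(instances, threshold=0.95):
    """Greedily merge instances whose cosine similarity exceeds the threshold;
    each resulting group approximates one region (e.g. background vs. object)."""
    groups = []
    for inst in instances:
        for g in groups:
            ref = g[0]
            sim = inst @ ref / (np.linalg.norm(inst) * np.linalg.norm(ref) + 1e-12)
            if sim >= threshold:
                g.append(inst)
                break
        else:
            groups.append([inst])
    return groups
```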
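Finally, the abstract mentions quantitatively computing co-occurrence and opposition relations between annotation words to refine the labels. A hedged reading is sketched below: a row-normalised co-occurrence matrix built from the training annotations lends support to mutually compatible words, while words that never co-occur receive none; the mixing rule is an assumption, not the thesis's formula.

```python
import numpy as np

def cooccurrence_matrix(train_annotations, vocab):
    """Row-normalised counts of how often two annotation words label the same image."""
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for words in train_annotations:            # words: the set of labels of one image
        ids = [idx[w] for w in words]
        for a in ids:
            for b in ids:
                if a != b:
                    C[a, b] += 1
    return C / (C.sum(axis=1, keepdims=True) + 1e-12)

def refine_scores(scores, C, alpha=0.3):
    """Mix each word's raw score with the support it receives from co-occurring
    words; words that never co-occur with the confident ones get no support."""
    support = C.T @ scores
    return (1 - alpha) * scores + alpha * support
```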
【Degree-granting institution】: Shanghai Ocean University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP751