Automatic Image Annotation Fusing Deep Features and Semantic Neighborhoods
Published: 2018-05-03 12:44
Keywords: semantic neighborhood; image annotation. Source: Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》), 2017, Issue 03
[Abstract]: In traditional image annotation methods, hand-crafting features is time-consuming and labor-intensive, and conventional label propagation algorithms ignore semantic neighbors, so visually similar but semantically dissimilar images degrade annotation quality. To address these problems, this paper proposes an automatic image annotation method that fuses deep features with semantic neighborhoods. First, a unified, adaptive deep feature extraction framework based on a deep convolutional neural network is constructed. Then the training set is partitioned into semantic groups and a neighborhood image set is built for each image to be annotated. Finally, the contribution value of each label carried by the neighborhood images is computed from visual distance and ranked to obtain the annotation keywords. Experiments on benchmark datasets show that the proposed deep features have lower dimensionality and better performance than traditional hand-crafted composite features. The method alleviates the problem of visual similarity without semantic similarity in traditional visual nearest-neighbor annotation, effectively improving both precision and the total number of correctly predicted labels.
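The final step described above, ranking candidate labels by distance-weighted votes from the neighborhood images, can be illustrated with a minimal sketch. The function name `annotate_by_neighbors`, the Euclidean distance, the exponential-decay weighting, and the hyper-parameters `sigma` and `top_k` are illustrative assumptions; the paper's exact contribution formula and its semantic-group construction are not given in this abstract.

```python
import numpy as np
from collections import defaultdict

def annotate_by_neighbors(query_feat, neighbor_feats, neighbor_labels, sigma=1.0, top_k=5):
    """Rank candidate labels by distance-weighted votes of neighborhood images.

    query_feat      : 1-D deep feature vector of the image to annotate.
    neighbor_feats  : 2-D array, one row per neighborhood image.
    neighbor_labels : list of label sets, aligned with the rows of neighbor_feats.
    sigma, top_k    : illustrative hyper-parameters (not taken from the paper).
    """
    # Visual distance between the query and every neighborhood image.
    dists = np.linalg.norm(neighbor_feats - query_feat, axis=1)

    # Closer neighbors contribute more; exponential decay is one simple choice.
    weights = np.exp(-dists / sigma)

    # Accumulate each label's contribution over all neighbors that carry it.
    contrib = defaultdict(float)
    for w, labels in zip(weights, neighbor_labels):
        for label in labels:
            contrib[label] += w

    # Return the top-k labels by descending contribution value.
    return sorted(contrib, key=contrib.get, reverse=True)[:top_k]

# Toy usage with three neighborhood images and 4-D features (illustrative values only).
feats = np.array([[0.1, 0.2, 0.3, 0.4],
                  [0.9, 0.8, 0.7, 0.6],
                  [0.2, 0.1, 0.4, 0.3]])
labels = [{"sky", "sea"}, {"car"}, {"sky", "beach"}]
print(annotate_by_neighbors(np.array([0.15, 0.15, 0.35, 0.35]), feats, labels, top_k=3))
```

In practice the deep features would come from the penultimate layer of a convolutional network rather than the toy vectors above, and the neighborhood set would be drawn from the semantic group matched to the query image.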
[Author affiliation]: College of Mathematics and Computer Science, Fuzhou University; Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fuzhou University
[Funding]: Supported by the National Natural Science Foundation of China (No. 61502105), the Natural Science Foundation of Fujian Province (No. 2013J05088), and the Education and Scientific Research Project for Young and Middle-aged Teachers of Fujian Province (No. JA15075)
[CLC number]: TP391.41
Article ID: 1838533
Link: https://www.wllwen.com/kejilunwen/ruanjiangongchenglunwen/1838533.html