
Image Hashing Algorithms Based on Multidimensional Scaling and Wavelet Statistical Features

Published: 2018-04-19 15:27

Topic: image hashing + multidimensional scaling; Source: Master's thesis, Guangxi Normal University, 2017


【Abstract】: Image hashing is a frontier topic in digital media content security. An image hash algorithm maps an image of arbitrary size to a short string of characters or numbers, and is now widely used in image retrieval, watermark embedding, image tampering detection and image quality assessment. In practice, images often undergo normal digital processing such as JPEG compression, brightness adjustment and gamma correction; these operations change the image data but not its visual content. An image hashing algorithm should therefore map images with identical visual content to identical or similar hash sequences. This is the first basic property of image hashing, perceptual robustness. The other basic property is uniqueness, which requires that images with different visual content have different hash sequences. These two properties clearly constrain each other: improving perceptual robustness generally reduces uniqueness, and weakening robustness improves uniqueness, so balancing the two is an important criterion in image hashing research. Using multidimensional scaling (MDS), the log-polar transform (LPT), edge detection and multi-level wavelet decomposition, this thesis studies image hashing and obtains two meaningful results. The first is a robust image hashing algorithm based on MDS that effectively resists rotation by arbitrary angles and has good uniqueness. The second is an image hashing algorithm based on edge detection and wavelet statistical features that balances robustness and uniqueness well and can be applied to reduced-reference image quality assessment. The specific results are as follows.

1. A robust image hashing algorithm based on multidimensional scaling. MDS is an effective data analysis technique that has been applied successfully to data visualization, object retrieval and data clustering, but it has seldom been studied for image hashing. This thesis examines MDS theory in depth and proposes an image hashing algorithm that combines MDS and LPT. The algorithm first uses the LPT and the discrete Fourier transform (DFT) to extract a rotation-invariant feature matrix, then uses MDS to learn a compact representation from that matrix; hash similarity is measured with the correlation coefficient. The algorithm withstands rotation by arbitrary angles because the LPT converts rotation in Cartesian coordinates into translation in log-polar coordinates, and the translation invariance of the DFT magnitude then yields rotation-invariant features, theoretically guaranteeing resistance to rotation. Experiments show that the algorithm is robust to common digital processing operations such as speckle noise, salt-and-pepper noise, scaling, contrast adjustment, brightness adjustment, Gaussian low-pass filtering and rotation by arbitrary angles, and that it effectively distinguishes images with different visual content. Receiver operating characteristic (ROC) curve comparisons show that its classification performance, in terms of both robustness and uniqueness, is better than that of several hashing algorithms in the literature.

2. An image hashing algorithm based on edge detection and wavelet statistical features. The human eye is the terminal of the visual system, so applying image hashing to reduced-reference image quality assessment should take human visual characteristics fully into account. The wavelet transform converts an image from the spatial domain to the frequency domain, and different frequency bands carry different information: the low-frequency subband is a coarse representation of the image, while the high-frequency subbands reflect detail changes, mainly edges, contours and texture, which resembles the multi-channel way the human visual system (HVS) perceives images. This thesis therefore proposes a hashing algorithm based on edge detection and wavelet statistical features. The algorithm first preprocesses the input image to obtain a normalized image and extracts its edge information, then partitions the edge map into non-overlapping blocks and applies multi-level wavelet decomposition to each block. Because the human eye is not equally sensitive to changes in different frequency bands, each band is given its own weighting coefficient, and the weighted wavelet statistical features of the bands form the image hash. Hash similarity is judged with the Euclidean distance. ROC curve comparisons show that the algorithm's classification performance in robustness and uniqueness is clearly better than that of the other algorithms compared. Its performance in reduced-reference image quality assessment is also examined: with the LIVE image database from the Laboratory for Image and Video Engineering at the University of Texas at Austin as the distorted-image test set, nonlinear curve fitting shows that the algorithm's objective scores correlate well with the Differential Mean Opinion Scores (DMOS) provided with the database.
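To make the first result concrete, the following is a minimal Python sketch of the LPT + DFT + MDS pipeline described in the abstract, assuming scikit-image's `warp_polar`, NumPy's FFT and scikit-learn's `MDS`. The function names, normalization size, embedding dimension and other parameters are illustrative assumptions, not the thesis's actual settings, and the thesis's exact feature construction and quantization are not reproduced here.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize, warp_polar
from sklearn.manifold import MDS

def mds_lpt_hash(img, side=256, n_components=8, seed=0):
    """Rotation-robust hash sketch: resize -> log-polar -> |DFT| -> MDS embedding."""
    if img.ndim == 3:                              # work on the luminance channel
        img = rgb2gray(img)
    img = resize(img, (side, side), anti_aliasing=True)
    # Rotation in Cartesian coordinates becomes a circular shift along the
    # angle axis of the log-polar image...
    lp = warp_polar(img, scaling='log', output_shape=(side, side))
    # ...and the DFT magnitude is invariant to that circular shift,
    # giving a rotation-invariant feature matrix.
    spec = np.abs(np.fft.fft2(lp))
    # MDS compresses the feature matrix into a low-dimensional embedding.
    # A fixed random_state keeps SMACOF deterministic so hashes stay comparable.
    mds = MDS(n_components=n_components, random_state=seed)
    return mds.fit_transform(spec).ravel()

def hash_similarity(h1, h2):
    """Correlation coefficient; values close to 1 mean visually similar images."""
    return np.corrcoef(h1, h2)[0, 1]
```

Fixing `random_state` keeps the SMACOF optimization in MDS deterministic, so embeddings of visually similar inputs remain comparable; without it, the correlation-based similarity would fluctuate between runs.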
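The second result can be illustrated with a similarly hedged sketch of the block-wise edge + wavelet statistics hash. A Canny edge detector, a Haar wavelet via PyWavelets, (mean, standard deviation) as the per-subband statistics and the band weights are all stand-ins for choices the abstract does not specify.

```python
import numpy as np
import pywt
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.transform import resize

def edge_wavelet_hash(img, side=256, block=64, levels=3, band_weights=(0.5, 0.3, 0.2)):
    """Hash sketch: weighted wavelet statistics of non-overlapping edge-map blocks."""
    if img.ndim == 3:
        img = rgb2gray(img)
    img = resize(img, (side, side), anti_aliasing=True)    # preprocessing / normalization
    edges = canny(img).astype(float)                       # edge map (Canny as a stand-in detector)
    feats = []
    for r in range(0, side, block):                        # non-overlapping blocks
        for c in range(0, side, block):
            coeffs = pywt.wavedec2(edges[r:r + block, c:c + block], 'haar', level=levels)
            for k, band in enumerate(coeffs):
                # coeffs[0] is the coarse approximation; later entries are (H, V, D) detail bands
                subbands = [band] if k == 0 else list(band)
                w = band_weights[min(k, len(band_weights) - 1)]   # coarser bands weighted more heavily
                for sb in subbands:
                    feats.extend([w * sb.mean(), w * sb.std()])
    return np.asarray(feats)

def hash_distance(h1, h2):
    """Euclidean distance; smaller values mean visually similar images."""
    return np.linalg.norm(h1 - h2)
```

In a reduced-reference quality-assessment setting, the hash of the reference image serves as the side information, and the distance to the distorted image's hash can act as an objective score on which the abstract's nonlinear fitting against DMOS is applied.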

【Degree-granting institution】: Guangxi Normal University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41


Article ID: 1773662


Link: https://www.wllwen.com/kejilunwen/ruanjiangongchenglunwen/1773662.html


