Research on Tensor-Based Compression of Hyperspectral Remote Sensing Images
Published: 2018-04-15 19:29
Topic: hyperspectral image + image compression; Source: Fudan University, Master's thesis, 2014
【Abstract】: Imaging spectroscopy for hyperspectral remote sensing acquires many very narrow yet spectrally contiguous image bands across the ultraviolet, visible, near-infrared, and mid-infrared regions of the electromagnetic spectrum. Compared with traditional remote sensing data, hyperspectral data offer greatly increased spatial and spectral resolution, which broadens their applications in target detection and recognition, land-cover analysis, agriculture, and environmental monitoring. The higher resolution, however, also sharply inflates the data volume, making storage and transmission inconvenient, so effective compression of hyperspectral images is essential. Traditional hyperspectral compression algorithms divide into lossless and lossy methods. Because hyperspectral transmission imposes demanding requirements such as high compression ratios and real-time operation, lossless compression can rarely satisfy them, so finding high-fidelity lossy methods has become an urgent goal for researchers. Lossy compression is dominated by methods based on the discrete wavelet transform (DWT), which delivers high compression ratios and high fidelity but not the best decorrelation. Apart from the DWT, the best two-dimensional decorrelation method is principal component analysis (PCA); ordinary PCA, however, handles only the two-dimensional case and cannot fully exploit the global structure of higher-dimensional data such as hyperspectral images.

This thesis proposes a hyperspectral image compression algorithm based on tensor decomposition and the wavelet packet transform. First, the algorithm exploits the properties of tensor decomposition to extract the information of each mode of the hyperspectral cube, and uses the spectral mode, which also carries spatial information, to decorrelate the spectral dimension. The higher-order principal components retained after spectral decorrelation are then compressed with JPEG2000 using the wavelet packet transform, which is more effective than the classical Mallat wavelet decomposition. To counter the long running time of tensor-based compression, we exploit the characteristics of JPEG2000-based image compression and apply an improved binary search method that simply and effectively reduces the time complexity of the proposed algorithm. Experimental results show that the proposed algorithm compresses far better than classical 3-D wavelet algorithms and, thanks to the tensor decomposition, outperforms 2-D PCA-based hyperspectral compression in both rate-distortion performance and information fidelity.

Because hyperspectral images now serve ever wider applications, we adopt a separate scheme specifically to preserve anomalous-pixel (outlier) information. Before compression, an anomaly detection algorithm removes the anomalous pixels and fills their original positions by neighborhood interpolation; the extracted anomaly vectors are then compressed losslessly, improving the algorithm's preservation of detail. Experiments show that this further improves the compression performance of the tensor-based algorithm. Finally, a non-negativity constraint is introduced into the tensor decomposition: the resulting non-negative tensor decomposition is applied to blockwise compression of the hyperspectral image in the spatial and spectral directions. Experiments show that the non-negativity constraint better matches the semantics of natural images and improves the decomposition, while blockwise compression substantially lowers the runtime complexity of the whole algorithm, making it more meaningful for practical applications.
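The spectral-decorrelation step described above can be sketched in simplified form: a truncated SVD of the mode-3 (spectral) unfolding of the cube yields the spectral factor, as in a Tucker/HOSVD decomposition restricted to one mode. This is a minimal illustration, not the thesis's implementation; the function names and the NumPy-based approach are assumptions.

```python
import numpy as np

def spectral_decorrelate(cube, n_keep):
    """Decorrelate the spectral mode of a hyperspectral cube (H x W x B)
    via a truncated SVD of its mode-3 (spectral) unfolding, keeping the
    n_keep leading spectral components."""
    H, W, B = cube.shape
    # Mode-3 unfolding: columns are pixels, rows are bands (B x HW).
    unfolded = cube.reshape(H * W, B).T
    # Left singular vectors span the spectral subspace.
    U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
    basis = U[:, :n_keep]                     # B x n_keep spectral basis
    components = basis.T @ unfolded           # n_keep x HW component planes
    return basis, components.reshape(n_keep, H, W)

def reconstruct(basis, components):
    """Invert the decorrelation: project the components back onto the bands."""
    n_keep, H, W = components.shape
    approx = basis @ components.reshape(n_keep, H * W)   # B x HW
    return approx.T.reshape(H, W, basis.shape[0])
```

In the full scheme, the component planes (rather than the raw bands) would then be handed to the wavelet-packet/JPEG2000 coder, and the small spectral basis stored as side information.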
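The "improved binary search" for reducing time complexity can be illustrated generically: when the encoded size is monotone in a quality parameter, a binary search finds the best parameter meeting a byte budget in O(log n) encodes instead of a linear scan. The function `encode_size` here is a hypothetical stand-in for an actual JPEG2000 encode; the thesis's specific refinement is not reproduced.

```python
def search_rate_param(encode_size, target_bytes, lo=1, hi=1000):
    """Binary search for the largest quality parameter whose encoded
    size fits a byte budget, assuming encode_size is non-decreasing."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if encode_size(mid) <= target_bytes:
            best = mid          # fits the budget; try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1        # too large; lower the quality
    return best
```

With, say, a 1000-step parameter range, this needs at most about 10 trial encodes per image, which is where the claimed reduction in time complexity comes from.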
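The non-negative decomposition idea can be sketched in its simplest 2-D form, non-negative matrix factorization with Lee–Seung multiplicative updates, as a stand-in for the non-negative tensor factor updates applied per block; the thesis's actual tensor formulation and blocking scheme are not shown, and all names here are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Approximate a non-negative matrix V as W @ H (both non-negative)
    using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep every entry non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because image intensities are non-negative, constraining the factors to be non-negative tends to produce parts-based, physically interpretable components, which is the sense in which the abstract says the constraint "better matches the semantics of natural images".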
【Degree-granting institution】: Fudan University
【Degree level】: Master's
【Year conferred】: 2014
【Classification code】: TP751
Article ID: 1755467
Link: https://www.wllwen.com/guanlilunwen/gongchengguanli/1755467.html