Research on HEVC Inter-Frame Prediction Coding
Published: 2018-10-30 21:07
[Abstract]: With the rapid development of communication, computer, and multimedia technology, multimedia has become deeply integrated into everyday life. Among multimedia types, video carries the most information and is the most widely used, but its data volume is enormous, and uncompressed video is generally impractical to transmit directly over the Internet. Video compression coding is therefore a prerequisite for video storage and transmission, and a core technology of digital TV, video surveillance, online video, and other applications. As video resolutions have kept growing, high-resolution video has placed heavy demands on storage. To address this, the ITU-T Video Coding Experts Group and the MPEG committee jointly developed the latest video coding standard, HEVC (High Efficiency Video Coding), formally published in January 2013 together with the reference codec software HM, which has attracted intense research interest. Inter-frame prediction is one of the key techniques in video coding and also its most time-consuming part, accounting for roughly half of the total encoding computation. Balancing computational cost against prediction accuracy has long been a popular and challenging problem, and researchers have proposed a variety of fast algorithms; however, these fast inter-prediction algorithms are all based on block matching. In recent years, some researchers have begun introducing affine transformation models into inter-frame coding to further improve coding quality. Building on a study of the overall HEVC coding structure, this thesis investigates inter-frame coding in depth and, drawing on earlier exploratory work on affine transformation, further examines the mode partitioning, motion vectors, and computational complexity of affine prediction. Experiments confirm the validity of the proposed observations and the effectiveness of the algorithms.
The main contributions are as follows: (1) An affine motion model based on a coordinate description is proposed, and the affine transformation unit is further subdivided, converting square-block affine transformation into triangle-block affine transformation; experiments show that the proposed algorithm achieves higher matching accuracy than translational block matching. The properties of the motion vectors (MVs) produced by affine transformation are studied, and it is shown both theoretically and experimentally that adjacent MVs are poorly correlated and lack the predictability exploited by Merge-like modes. A fast search method for affine motion estimation is then proposed, which shortens motion-estimation time with only a small loss of matching accuracy. (2) Because the HEVC reference software is highly complex and its inter-prediction structure is large, the proposed algorithms were not embedded in HM; instead, a complete inter-frame prediction encoder was written independently following the HEVC structure, including prediction-unit partitioning, motion estimation, quantization, and entropy coding. B-D curves for different test sequences were plotted from the experimental data, and the simulation results show that, in high-precision coding mode, the proposed algorithm is advantageous for video with complex texture detail and intense motion.
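The triangle-block affine idea in contribution (1) rests on a standard geometric fact: three vertex correspondences (for example, the three control-point motion vectors of a triangle) uniquely determine a 6-parameter affine map. The sketch below illustrates that fact only; it is an assumed reconstruction for exposition, not the thesis's implementation, and all function names are hypothetical.

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 6-parameter affine map
        x' = a*x + b*y + c,   y' = d*x + e*y + f
    that carries the three vertices of src_tri onto dst_tri.
    Each *_tri is a list of three (x, y) pairs."""
    # One linear system per output coordinate; the 3x3 matrix is the
    # same for both, built from the source vertex coordinates.
    A = np.array([[x, y, 1.0] for x, y in src_tri])
    xs = np.array([p[0] for p in dst_tri], dtype=float)
    ys = np.array([p[1] for p in dst_tri], dtype=float)
    a, b, c = np.linalg.solve(A, xs)
    d, e, f = np.linalg.solve(A, ys)
    return (a, b, c, d, e, f)

def warp_point(params, x, y):
    """Apply the affine map to a single pixel coordinate."""
    a, b, c, d, e, f = params
    return (a * x + b * y + c, d * x + e * y + f)

# A pure translation by MV (2, 3): every vertex shifts by the same
# vector, so the solved map degenerates to a = e = 1, b = d = 0.
params = triangle_affine([(0, 0), (8, 0), (0, 8)],
                         [(2, 3), (10, 3), (2, 11)])
```

When the three control-point MVs differ, the same solve yields rotation, scaling, and shear, which is precisely what lets affine prediction outperform translational block matching on non-rigid or zooming content.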
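The translational block matching that the thesis uses as its baseline can be summarised by an exhaustive SAD search. The following is a minimal illustrative sketch, not HM's actual motion estimation (HM uses the TZSearch pattern with rate-distortion costs); the function names are assumptions for this example.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def full_search(cur, ref, bx, by, bs, sr):
    """Exhaustive translational block matching: find the motion vector
    (dx, dy) within +/-sr that minimises SAD for the bs x bs block of
    `cur` whose top-left corner is (bx, by)."""
    block = cur[by:by + bs, bx:bx + bs]
    best_mv, best_cost = (0, 0), None
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            x, y = bx + dx, by + dy
            # Skip candidates that fall outside the reference frame.
            if x < 0 or y < 0 or x + bs > ref.shape[1] or y + bs > ref.shape[0]:
                continue
            cost = sad(block, ref[y:y + bs, x:x + bs])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

# Synthetic check: copy a reference patch into the current frame at an
# offset of (dx, dy) = (2, 1) and confirm the search recovers it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.zeros_like(ref)
cur[4:12, 4:12] = ref[5:13, 6:14]
mv, cost = full_search(cur, ref, 4, 4, 8, 4)
```

A single block here already costs (2*sr+1)^2 SAD evaluations; an affine search multiplies that over several control points, which is why the fast affine search in contribution (1) matters.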
[Degree-granting institution]: Hainan University
[Degree level]: Master's
[Year conferred]: 2017
[CLC number]: TN919.81
Article ID: 2301189
Link: https://www.wllwen.com/kejilunwen/xinxigongchenglunwen/2301189.html