
Research on GPU-Based Parallel Computing for Network Coding


Topic: network coding + multicast networks; Source: master's thesis, Zhejiang Sci-Tech University, 2012


【Abstract】: Network coding is a novel network transmission technique, first proposed in 2000 by Ahlswede et al. of the Chinese University of Hong Kong. Unlike traditional routed multicast, in which intermediate nodes may only forward the information they receive, network coding allows intermediate nodes to encode received information before forwarding it; sink nodes then recover the source's information through a decoding operation. Network coding can achieve the theoretical information-flow capacity of a multicast network as determined by the max-flow min-cut theorem. It improves network throughput and robustness, and has been widely applied to wireless Ad Hoc networks, sensor networks, P2P content distribution, distributed file storage, and network security; its study draws on information theory, computer communication networks, multicast technology, multi-user information theory, and graph theory. Since the theory was introduced, research on network coding and its applications has become one of the hot topics in the network information community.

While network coding makes full use of a network's theoretical multicast rate, it also introduces corresponding overheads: the computation and storage costs of encoding at intermediate nodes, the delay introduced by decoding at sink nodes, and the need for coding nodes to be equipped with network hardware capable of encoding. Reducing delay and overhead as far as possible while still exploiting the network's full transmission capacity is therefore an important problem for the practical application of network coding.

This thesis makes contributions in the following two areas:

1. To address the low encoding and decoding throughput of random linear network coding (RLNC) in large-scale network environments, and exploiting the highly parallel nature of the GPU, this thesis proposes a GPU-accelerated parallel linear network coding algorithm. The implementation builds on CUDA, the general-purpose GPU computing architecture released by NVIDIA in June 2007, recasting the encoding and decoding procedures as CUDA multi-threaded parallel computations executed on the GPU. This raises encoding and decoding throughput and reduces the computational delay of coding operations, which benefits the application of network coding in real-time networks such as streaming-media services (see the encoding-kernel sketch after this abstract).

2. For networks with known topology, this thesis studies the optimization of network coding resource overhead, i.e., finding a coding scheme with the minimum number of coding nodes or coding links, together with the coding optimization model under the algebraic network coding framework. The heuristic genetic algorithm used for this optimization is analyzed: it often takes several hours to produce a result and rarely reaches the optimal solution. The thesis therefore proposes a GPU-accelerated genetic algorithm with additional processing steps, which both reduces the time required per generation and mitigates the algorithm's susceptibility to local optima and slow convergence. Simulation experiments and result analysis validate the approach in terms of effectiveness, convergence behavior, and running time (see the fitness-evaluation sketch following the encoding example below).

Finally, the thesis summarizes the work, points out its limitations, and outlines the content and direction of future research.
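The thesis's implementation is not reproduced on this page. As an illustration of contribution 1, the following is a minimal sketch of how RLNC encoding over GF(2^8) is commonly mapped onto CUDA threads: every byte of every coded packet is an independent linear combination of the source packets, so one thread can compute one output byte. The kernel name, the data layout, and the choice of reduction polynomial 0x11B are illustrative assumptions, not details taken from the thesis.

```cuda
// Hypothetical sketch (not the thesis's code): RLNC encoding over GF(2^8).
// out[i][j] = XOR over k of gf_mul(coeff[i][k], in[k][j]); every output
// byte is independent, so one CUDA thread computes one byte.

// Carry-less "Russian peasant" multiplication in GF(2^8); the reduction
// polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) is an illustrative choice.
__device__ unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1) p ^= a;      // add a when this bit of b is set
        unsigned char hi = a & 0x80;
        a <<= 1;
        if (hi) a ^= 0x1B;      // reduce modulo the field polynomial
        b >>= 1;
    }
    return p;
}

// coeff: g x n random coefficients; in: n source packets of len bytes;
// out: g coded packets of len bytes.
__global__ void rlnc_encode(const unsigned char* coeff,
                            const unsigned char* in,
                            unsigned char* out,
                            int n, int len) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;  // byte offset in packet
    int i = blockIdx.y;                             // which coded packet
    if (j >= len) return;
    unsigned char acc = 0;
    for (int k = 0; k < n; ++k)
        acc ^= gf_mul(coeff[i * n + k], in[k * len + j]);
    out[i * len + j] = acc;
}
```

A launch such as `rlnc_encode<<<dim3((len + 255) / 256, g), 256>>>(d_coeff, d_in, d_out, n, len)` covers all g coded packets (all names here are hypothetical). Decoding at a sink follows the same pattern once the received coefficient matrix has been inverted by Gaussian elimination over GF(2^8), which is where the decoding delay mentioned in the abstract arises.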
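For contribution 2, a standard way to GPU-accelerate the genetic search is to evaluate the entire population in parallel, one thread per chromosome. In the usual encoding for coding-link minimization, each gene flags whether one incoming link at a merging node participates in coding; whether a chromosome still supports the target multicast rate is normally checked with max-flow computations, abstracted here into a precomputed feasibility flag. This is a hedged sketch under those assumptions; the names and the penalty constant are not from the thesis.

```cuda
// Hypothetical sketch (not the thesis's code): parallel GA fitness
// evaluation for minimizing the number of coding links. pop holds P
// chromosomes of L binary genes; feasible[p] records whether chromosome
// p still achieves the target multicast rate (a max-flow check assumed
// to be done elsewhere). One thread scores one chromosome.
__global__ void eval_population(const unsigned char* pop,  // P x L genes
                                const int* feasible,       // P flags
                                int* fitness,
                                int P, int L) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= P) return;
    int cost = 0;
    for (int g = 0; g < L; ++g)
        cost += pop[p * L + g];   // each set gene = one coding link in use
    if (!feasible[p])
        cost += 1 << 20;          // large penalty; the constant is illustrative
    fitness[p] = cost;            // lower is better
}
```

Selection, crossover, and mutation can be mapped one-thread-per-individual in the same way; since each generation is dominated by P independent fitness evaluations, this is where GPU parallelism attacks the per-generation time that the abstract identifies as the bottleneck.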
【Degree-granting institution】: Zhejiang Sci-Tech University
【Degree level】: Master's
【Year conferred】: 2012
【CLC classification】: TN915.01; TP338.6
