
Design and Implementation of a GPU-Based Parallel Algorithm for Network Traffic Feature Extraction

[Abstract]: Network traffic classification plays an important role in enhancing network controllability and strengthening network management. With the continuous emergence of new network applications, ever higher demands are placed on real-time, accurate traffic classification. In recent years researchers have therefore drawn heavily on machine learning to address the classification problem, with good results. However, feature extraction, a key step in machine-learning classification algorithms, has become the main bottleneck preventing these algorithms from being applied to real-time traffic classification: when processing large traffic volumes, its computational complexity is high and it is too time-consuming.

In recent years, the rapid development of GPU hardware architecture has given GPUs floating-point and parallel computing capabilities that far exceed those of CPUs, and GPUs have been widely applied to large-scale parallel processing and scientific computing. In particular, NVIDIA's CUDA programming model provides a rich set of API functions that make it easier to exploit the GPU's parallel processing power.

This thesis first introduces the differences between the GPU and the traditional CPU in architecture and programming model. It then describes the execution flow of the serial feature extraction algorithm and analyzes the parallelizability of each part of that algorithm in turn, based on the size and characteristics of its computational tasks. On this basis, a parallel feature extraction algorithm is designed and implemented in CUDA, and the parallel algorithm is further optimized using CUDA streams and the heterogeneous CPU/GPU execution model. Finally, the proposed parallel algorithm and its optimizations are tested and verified experimentally on a Linux platform. The results show that, when extracting flow features from large volumes of traffic, the optimized parallel algorithm achieves a speedup of more than 2x over the serial algorithm, a significant performance advantage.
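The two techniques the abstract names are (1) mapping per-packet work onto GPU threads and (2) using CUDA streams to overlap host-to-device transfers with kernel execution. The thesis's actual code is not reproduced here; the following is a minimal illustrative sketch of those two ideas, computing simple per-flow features (packet count and total byte length, standing in for the richer flow statistics used by ML classifiers) with one thread per packet and atomic accumulation. All identifiers (PacketRec, flowFeatureKernel, NUM_STREAMS, the synthetic data sizes) are assumptions for illustration, not the thesis's names.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One packet record: the flow it belongs to and its length in bytes.
struct PacketRec {
    int   flow_id;
    float length;
};

// One thread per packet; per-flow statistics are accumulated with atomics.
__global__ void flowFeatureKernel(const PacketRec* pkts, int n,
                                  float* sumLen, int* pktCnt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(&sumLen[pkts[i].flow_id], pkts[i].length);
        atomicAdd(&pktCnt[pkts[i].flow_id], 1);
    }
}

int main()
{
    const int N = 1 << 20;          // packets in this batch
    const int NUM_FLOWS = 4096;     // flows, assumed pre-assigned IDs
    const int NUM_STREAMS = 4;      // streams pipelining copy and compute
    const int CHUNK = N / NUM_STREAMS;

    // Pinned host memory is required for cudaMemcpyAsync to actually
    // overlap with kernel execution; it holds synthetic packet data here.
    PacketRec* h_pkts;
    cudaMallocHost((void**)&h_pkts, N * sizeof(PacketRec));
    for (int i = 0; i < N; ++i)
        h_pkts[i] = { rand() % NUM_FLOWS, float(rand() % 1500) };

    PacketRec* d_pkts; float* d_sum; int* d_cnt;
    cudaMalloc((void**)&d_pkts, N * sizeof(PacketRec));
    cudaMalloc((void**)&d_sum, NUM_FLOWS * sizeof(float));
    cudaMalloc((void**)&d_cnt, NUM_FLOWS * sizeof(int));
    cudaMemset(d_sum, 0, NUM_FLOWS * sizeof(float));
    cudaMemset(d_cnt, 0, NUM_FLOWS * sizeof(int));

    cudaStream_t streams[NUM_STREAMS];
    for (int s = 0; s < NUM_STREAMS; ++s)
        cudaStreamCreate(&streams[s]);

    // Each stream copies its chunk and then runs the kernel on it, so
    // the copy of one chunk overlaps the computation on another.
    for (int s = 0; s < NUM_STREAMS; ++s) {
        int off = s * CHUNK;
        cudaMemcpyAsync(d_pkts + off, h_pkts + off,
                        CHUNK * sizeof(PacketRec),
                        cudaMemcpyHostToDevice, streams[s]);
        flowFeatureKernel<<<(CHUNK + 255) / 256, 256, 0, streams[s]>>>(
            d_pkts + off, CHUNK, d_sum, d_cnt);
    }
    cudaDeviceSynchronize();

    // Fetch one flow's features as a sanity check.
    float sum0; int cnt0;
    cudaMemcpy(&sum0, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(&cnt0, d_cnt, sizeof(int), cudaMemcpyDeviceToHost);
    printf("flow 0: %d packets, %.0f bytes total\n", cnt0, sum0);

    for (int s = 0; s < NUM_STREAMS; ++s)
        cudaStreamDestroy(streams[s]);
    cudaFreeHost(h_pkts);
    cudaFree(d_pkts); cudaFree(d_sum); cudaFree(d_cnt);
    return 0;
}
```

One detail worth noting about the stream-based optimization: with pageable (non-pinned) host memory, cudaMemcpyAsync silently serializes with kernel execution, so allocating the packet buffers with cudaMallocHost is what makes the copy/compute overlap possible in practice.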
[Degree-granting institution]: Inner Mongolia University
[Degree level]: Master's
[Year conferred]: 2014
[CLC classification]: TP181; TP393.06



