
Research on TCP Incast in Data Center Networks

Published: 2018-05-01 22:58

  Topics: cloud computing + TCP; Source: master's thesis, Guangxi Normal University, 2013


【Abstract】: As cloud computing technology matures, data centers have grown along with it. A data center is no longer simply a site where servers are hosted and maintained in one place; it has gradually evolved into a concentration of high-performance computers that combine large-scale computation and storage. Data are stored in large data centers and accessed over the wide area network, and companies such as Google, Yahoo, IBM, Microsoft, and Amazon.com provide data storage, retrieval, and processing services in data centers of this kind. The broad adoption of cloud computing and the growth of the industry have in turn greatly driven the development of data centers.
The unique workloads, speeds, and scale of modern data center networks violate some of the basic assumptions on which TCP was originally designed. As a result, when TCP is used in a high-bandwidth, low-latency data center environment, the "Incast" problem appears. In a data center network, data are distributed across multiple servers to improve performance and reliability. In the many-to-one communication pattern, multiple senders communicate with a single receiver through an Ethernet switch: the receiver requests data from the senders, and the senders respond by transmitting data back to it. If the transmitted packets exceed the switch's buffering capacity, large numbers of packets are dropped and timeouts may be triggered, leaving the transmission link idle during the timeout period. In a high-bandwidth, low-latency data center network, this underutilization of link bandwidth causes a severe drop in throughput. How to solve the problem fundamentally and at low cost is therefore an important topic of practical value. To solve it thoroughly at low cost, this thesis analyzes existing TCP Incast solutions and TCP Incast analytical models, and then investigates TCP Incast throughput collapse and the unfairness among flows sharing a bottleneck link under the link-layer Quantized Congestion Notification (QCN) protocol. The specific research content and results are as follows:
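
The buffer-overflow mechanism just described can be illustrated with a small back-of-envelope calculation. The Python sketch below estimates the goodput of a single synchronized request round and shows the cliff that appears once the senders' combined burst exceeds the switch buffer and a flow must wait out a minimum retransmission timeout. Every numeric parameter is an illustrative assumption, not a value from the thesis.

```python
# Back-of-envelope sketch of one synchronized request round in an Incast
# scenario. All parameter values are illustrative assumptions.

LINK_RATE_BPS = 1e9              # 1 Gbit/s bottleneck link to the receiver
SWITCH_BUFFER_BYTES = 64 * 1024  # shallow shared output buffer (64 KB)
BLOCK_BYTES = 256 * 1024         # data each server returns per request
MSS = 1500                       # packet size in bytes
WINDOW_PKTS = 10                 # assumed per-sender window sent in one burst
RTO_MIN_S = 0.2                  # assumed minimum retransmission timeout (200 ms)

def round_goodput_mbps(num_senders: int) -> float:
    """Rough goodput of a single request round with synchronized responses."""
    total_bytes = num_senders * BLOCK_BYTES
    transfer_s = total_bytes * 8 / LINK_RATE_BPS
    # The switch must absorb the senders' simultaneous bursts; if they exceed
    # the buffer, packets are dropped and at least one flow stalls for an RTO
    # while the bottleneck link sits idle.
    burst_bytes = num_senders * WINDOW_PKTS * MSS
    stalled = burst_bytes > SWITCH_BUFFER_BYTES
    elapsed_s = transfer_s + (RTO_MIN_S if stalled else 0.0)
    return total_bytes * 8 / elapsed_s / 1e6

# In real measurements the collapse persists as senders are added, because
# every request round keeps hitting new timeouts; this sketch only shows the
# cliff at the point where the synchronized burst overflows the buffer.
for n in (2, 4, 8, 16):
    print(f"{n:2d} senders: ~{round_goodput_mbps(n):6.1f} Mbit/s")
```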
(1) The background of TCP Incast, some typical data center applications, and their problems are introduced; the causes of TCP Incast are analyzed and its research status is reviewed in detail. Existing application-layer, transport-layer, and link-layer solutions are examined in depth and compared in terms of overhead and practicality. Application-layer and transport-layer solutions can mitigate TCP Incast to some extent but cannot solve it at its root; link-layer solutions can solve TCP Incast fundamentally but perform poorly in real environments.
(2) Three existing representative TCP Incast models are analyzed in depth: the pipeline-based model, the data-block-based model, and the round-based model. Their similarities and differences are examined, and their relative merits are compared with respect to switch queue management mechanism, TCP version, TCP congestion control phase, implementation complexity, and validation method. Experiments with the three models show that the round-based TCP Incast model not only identifies the root cause of TCP Incast but also accurately reflects the trend of TCP Incast throughput, making it an important guide for studying the TCP Incast problem.
(3) The link-layer end-to-end Quantized Congestion Notification (QCN) algorithm is studied in depth. To address the unfairness among QCN flows in data center networks, an Enhanced Quantized Congestion Notification (EQCN) algorithm is proposed. EQCN modifies both the QCN switch and the sender. The switch samples arriving packets with a certain probability; when a sampled packet arrives, the EQCN switch inspects the current buffer queue length and computes a feedback value that reflects the current degree of congestion. When congestion occurs, the switch generates a congestion notification message carrying this feedback value and sends it to the sources of all flows whose rates exceed the average bandwidth share of the shared link. The sender's rate limiter uses a dynamic threshold: the threshold in the rate-increase phase is proportional to the current sending rate, so flows with higher rates have higher thresholds. Together, the switch-side and sender-side modifications improve the fairness of flows sharing the bottleneck link. Finally, experimental results show that the algorithm significantly improves TCP Incast throughput. (An illustrative sketch of this mechanism is given after item (4) below.)
(4) A simulation platform is built with the NS-2 network simulator to study the unfairness among QCN flows and to evaluate the proposed EQCN algorithm. The experimental results show that unfairness does exist among QCN flows, and that the proposed EQCN algorithm improves fairness among the flows and significantly increases TCP Incast throughput. (One possible way to quantify this fairness is sketched below.)
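
To make the EQCN mechanism in item (3) concrete, the following Python sketch illustrates the two modifications described in the abstract: a switch-side congestion point that samples packets, derives a feedback value from the queue length and its growth, and notifies only the flows above the average share of the shared link, and a sender-side rate limiter whose rate-increase threshold is proportional to the current sending rate. This is only an illustration: the feedback formula, all constants, and the interpretation of "average bandwidth" as capacity divided by the number of active flows are assumptions modelled on the standard QCN formulation, not code or parameters from the thesis.

```python
import random
from dataclasses import dataclass

# Illustrative EQCN-style sketch; constants and formulas are assumptions
# modelled on the standard QCN (802.1Qau) scheme, not the thesis code.

Q_EQ = 50_000        # target ("equilibrium") queue length in bytes, assumed
W = 2.0              # weight on queue growth in the feedback value, assumed
SAMPLE_PROB = 0.01   # per-packet sampling probability at the switch, assumed
GD = 1 / 128         # multiplicative-decrease gain at the sender, assumed

@dataclass
class EqcnSender:
    flow_id: int
    rate_bps: float
    target_rate_bps: float = 0.0

    def increase_threshold(self, k: float = 0.1) -> float:
        # EQCN idea from the abstract: the rate-increase threshold is
        # proportional to the current sending rate, so faster flows get a
        # higher threshold instead of the fixed threshold used by QCN.
        return k * self.rate_bps

    def on_congestion_feedback(self, fb: float) -> None:
        # QCN-style reaction: remember the old rate and cut the current rate
        # in proportion to the feedback value (capped at a 50% reduction).
        self.target_rate_bps = self.rate_bps
        self.rate_bps *= max(1.0 - GD * fb, 0.5)

class EqcnSwitch:
    def __init__(self, link_capacity_bps: float):
        self.link_capacity_bps = link_capacity_bps
        self.qlen = 0        # current output-queue length in bytes
        self.qlen_old = 0    # queue length at the previous sample

    def on_packet(self, senders: list[EqcnSender]) -> list[tuple[int, float]]:
        """Sample arriving packets; on congestion, return (flow_id, fb) notifications."""
        if random.random() >= SAMPLE_PROB:
            return []
        q_off = self.qlen - Q_EQ             # queue excess over the target
        q_delta = self.qlen - self.qlen_old  # queue growth since the last sample
        self.qlen_old = self.qlen
        fb = q_off + W * q_delta             # feedback value: current congestion level
        if fb <= 0:
            return []                        # queue below target and not growing
        # EQCN: notify only the sources whose rate exceeds the average share of
        # the shared bottleneck link (interpreted here as capacity / #flows; a
        # real switch would have to estimate per-flow rates).
        fair_share = self.link_capacity_bps / max(len(senders), 1)
        return [(s.flow_id, fb) for s in senders if s.rate_bps > fair_share]
```

In this reading, a higher threshold makes it harder for an already fast flow to re-enter its rate-increase phase after a congestion event, which is one plausible way the fairness improvement described in the abstract would arise.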
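
Item (4) evaluates fairness among QCN flows, but the abstract does not state which fairness metric is used. Jain's fairness index is one common choice for comparing per-flow throughputs and is shown below purely as an illustration; the example numbers are made up.

```python
def jains_fairness_index(throughputs: list[float]) -> float:
    """Jain's fairness index: 1.0 when all flows get equal throughput,
    approaching 1/n when a single flow dominates."""
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares else 1.0

# Example: a skewed, QCN-like outcome vs. a more even, EQCN-like outcome
# (throughputs in Mbit/s, made up for illustration).
print(jains_fairness_index([900, 50, 30, 20]))     # ~0.31, very unfair
print(jains_fairness_index([260, 250, 245, 245]))  # ~1.00, nearly fair
```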

【Degree-granting institution】: Guangxi Normal University
【Degree level】: Master's
【Year degree awarded】: 2013
【CLC number】: TP308


Article ID: 1831368



Link: https://www.wllwen.com/kejilunwen/jisuanjikexuelunwen/1831368.html


