
The Impact of Mapping Priorities by Message Size on the Performance of the Homa Transport Protocol in DCNs

Published: 2022-02-20 17:13
  This research work proposes a robust, efficient, and reliable modification scheme for the Homa transport protocol. We introduce a novel, straightforward, and efficient algorithm that assigns priorities by message size. First, based on the incoming traffic, we change the cutoff message-size ranges for the unscheduled priority levels. We use an explicit mechanism to determine these cutoff ranges and assign priorities according to the workload's message sizes. Second, we select an optimal ratio for allocating priorities and, accordingly, change the number of priority levels for scheduled and unscheduled packets. The proposed modifications to the Homa transport protocol are straightforward rather than complex, and can be regarded as a good alternative message-size-based priority-assignment algorithm. The core idea of the proposed algorithm is to explicitly allocate priorities based on statistics of previous workload message sizes. Apart from these modifications, our work relies on exactly the same mechanisms as the original Homa work, so it remains compatible with Homa while achieving relatively high performance. In addition, this work provides a thorough investigation of how mapping priorities by message size affects the performance of the Homa transport protocol in DCNs. Simulation results show that the proposed method improves Homa's performance under high network load. At 90% network load, it reduces the transmission latency of short messages to about 14 μs and minimizes the queue length on the receiver's downlink side (between the ToR switch and the receiver), a reduction of 4... compared with the original Homa transport protocol.
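The abstract above describes deriving cutoff message sizes for unscheduled priority levels from statistics of previously observed workload message sizes. The following is a minimal Python sketch of that general idea only; the equal-bytes split heuristic and all function names here are illustrative assumptions, not the thesis's exact algorithm.

```python
# Illustrative sketch (not the thesis's exact algorithm): choose
# message-size cutoffs so each unscheduled priority level carries a
# roughly equal share of the observed bytes, then map sizes to levels.

def unscheduled_cutoffs(message_sizes, num_levels):
    """Compute num_levels - 1 interior size cutoffs so that each
    priority level covers a roughly equal share of total bytes."""
    sizes = sorted(message_sizes)
    target = sum(sizes) / num_levels      # bytes per priority level
    cutoffs, acc = [], 0
    for s in sizes:
        acc += s
        if len(cutoffs) < num_levels - 1 and acc >= target * (len(cutoffs) + 1):
            cutoffs.append(s)             # boundary size for this level
    return cutoffs

def priority_for(size, cutoffs):
    """Map a message size to a priority level (0 = highest priority,
    given to the smallest messages, following the SRPT principle
    that Homa relies on)."""
    for level, cutoff in enumerate(cutoffs):
        if size <= cutoff:
            return level
    return len(cutoffs)                   # largest messages: lowest priority

# Example with a synthetic heavy-tailed workload
workload = [100] * 100 + [500] * 40 + [2000] * 15   # 60,000 bytes total
cuts = unscheduled_cutoffs(workload, 3)             # → [500, 2000]
```

Assigning the highest priority to the smallest messages is the key design choice: it lets short messages bypass queues built up by long ones, which is what drives the short-message latency reduction reported in the abstract.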

Source: Beijing University of Posts and Telecommunications (a Beijing 211 Project university directly under the Ministry of Education)

Pages: 80

Degree: Master's

Table of Contents:
ACKNOWLEDGEMENT
ABSTRACT
摘要
Chapter 1 Introduction
    1.1 Background
    1.2 Data Center Goals
    1.3 Data Center Infrastructure
    1.4 Types of Data Center Networks
        1.4.1 Three-tier
        1.4.2 Fat tree
        1.4.3 DCell
        1.4.4 Others
    1.5 Transport in Data Center Networks
    1.6 Performance in Data Center Networks (DCNs)
    1.7 Challenge and Motivations
Chapter 2 Related Works and Key Technologies
    2.1 Literature Review
        2.1.1 Data Center TCP (DCTCP)
            2.1.1.1 Goal
            2.1.1.2 Algorithm
            2.1.1.3 Benefits
            2.1.1.4 Results
        2.1.2 High-bandwidth Ultra-Low Latency (HULL)
            2.1.2.1 Goal
            2.1.2.2 Challenges and Design
            2.1.2.3 HULL Architecture
            2.1.2.4 Results
        2.1.3 Deadline-Aware Datacenter TCP (D²TCP)
            2.1.3.1 Goal
            2.1.3.2 Design
            2.1.3.3 Benefits
            2.1.3.4 Results
        2.1.4 Novel datacenter transport (NDP)
            2.1.4.1 Goal
            2.1.4.2 Design
            2.1.4.3 Benefits
            2.1.4.4 Results
        2.1.5 Information-Agnostic Flow Scheduling (PIAS)
            2.1.5.1 Goal
            2.1.5.2 Design
            2.1.5.3 Benefits
            2.1.5.4 Results
        2.1.6 Distributed Near-Optimal Datacenter Transport (pHost)
            2.1.6.1 Goal
            2.1.6.2 Design
            2.1.6.3 Benefits
            2.1.6.4 Results
        2.1.7 Minimal Near-Optimal Datacenter Transport (pFabric)
            2.1.7.1 Goal
            2.1.7.2 Design
            2.1.7.3 Benefits
            2.1.7.4 Results
        2.1.8 Homa Transport Protocol
            2.1.8.1 Goal
            2.1.8.2 Design
            2.1.8.3 Benefits
            2.1.8.4 Results
    2.2 Software Programs and Tools Used in the Research Work
        2.2.1 Linux Operating System
        2.2.2 Virtual Machine VMware Workstation
        2.2.3 OMNeT++ 4.6
        2.2.4 Python Programming Language
        2.2.5 OriginPro Software Program
Chapter 3 Homa Transport Protocol
    3.1 Introduction
    3.2 Motivations of Homa
    3.3 Objectives of Homa
    3.4 Key Ideas and Contributions
    3.5 Network Assumptions with Homa
    3.6 Using workloads with Homa
    3.7 The Design Space
        3.7.1 No time to schedule each packet
        3.7.2 Buffering is a necessary evil
        3.7.3 In-network priorities are a must
        3.7.4 Making best use of limited priorities requires receiver control
        3.7.5 Receivers should assign priorities dynamically
        3.7.6 Receivers should be able to overcommit their downlinks in a controlled way
        3.7.7 Senders also require SRPT
        3.7.8 Putting it all together
    3.8 Design of Homa
        3.8.1 RPCs, without connections
        3.8.2 Behavior of sender
        3.8.3 Flow control
        3.8.4 Packet priorities
        3.8.5 Overcommitment
        3.8.6 Incast
        3.8.7 Lost packets
    3.9 Implementation
    3.10 Evaluation
        3.10.1 Implementation Measurements
            3.10.1.1 Homa vs. Infiniband
            3.10.1.2 Homa vs. TCP
            3.10.1.3 Homa vs. other implementations
            3.10.1.4 Incast
        3.10.2 Simulations
            3.10.2.1 Tail latency vs. pFabric, pHost, and PIAS
            3.10.2.2 NDP
            3.10.2.3 Causes of remaining delay
            3.10.2.4 Bandwidth utilization
            3.10.2.5 Lengths of queues
            3.10.2.6 Priority utilization
    3.11 Conclusion
Chapter 4 Proposed Homa Transport Protocol Modifications
    4.1 Introduction
    4.2 Change cutoff message size for unscheduled priority levels
    4.3 Change the number of priority levels for scheduled and unscheduled packets
    4.4 Using workloads with the proposed Homa
    4.5 Design Space
    4.6 Advantages of our work
    4.7 Disadvantages of our work
Chapter 5 Experimental Results
    5.1 Introduction
    5.2 Topology and Configuration
        5.2.1 Simulation global configuration options
        5.2.2 Data center network topology
        5.2.3 Our Homa transport parameters
        5.2.4 Workload types
    5.3 End-to-end message tail latency
        5.3.1 With load factor 10% and 6 unscheduled priority levels
        5.3.2 With load factor 90% and 6 unscheduled priority levels
        5.3.3 With load factor 90% and 5 unscheduled priority levels
    5.4 Average and maximum queue lengths
        5.4.1 With load factor 10% and 6 unscheduled priority levels
        5.4.2 With load factor 90% and 6 unscheduled priority levels
        5.4.3 With load factor 90% and 5 unscheduled priority levels
    5.5 Bandwidth utilization
        5.5.1 With load factor 10% and 6 unscheduled priority levels
        5.5.2 With load factor 90% and 6 unscheduled priority levels
        5.5.3 With load factor 90% and 5 unscheduled priority levels
    5.6 Comparison results
Chapter 6 Conclusion and Future Work
    6.1 Conclusion
    6.2 Future work
References




