
Research on Key Modules of Routers in Content-Centric Networks

Published: 2018-12-27 08:58
[Abstract]: The ever-richer set of Internet applications poses many challenges to the traditional IP network. To better satisfy users' need for efficient, convenient access to content and services in the network, a content-centric future Internet architecture, Named Data Networking (NDN), has emerged. A content-centric network has three essential characteristics: 1) routers can cache large amounts of content to serve subsequent requests; 2) routing and forwarding are performed on content or service names; 3) routing is stateful, i.e., routing-related state is maintained during forwarding. These three functions correspond to three key modules on the data plane: the Content Store (CS), the Forwarding Information Base (FIB), and the Pending Interest Table (PIT). This dissertation studies these three modules; the main contributions are as follows:
1. The Content Store is a basic function of a content router. It differs from conventional packet buffering and therefore requires an entirely different storage architecture. This dissertation introduces the concept of a storage card and builds a distributed storage architecture from such cards to implement the Content Store, then compares the performance of several management policies under this architecture through both modeling and simulation. The results show that a consistent-hashing management policy achieves performance very close to that of the ideal policy on this hardware framework, at negligible extra overhead.
2. The data plane must look up the CS, PIT, and FIB in sequence to forward a packet. The CS and PIT require exact matching while the FIB requires longest-prefix matching; moreover, lookups are performed on names rather than IP addresses, and name lookup is more complex than IP lookup. This makes large-scale, multi-table, high-rate, low-latency forwarding a demanding task. We therefore propose a "unified index" that merges the indexes of the three tables, so that a single lookup yields the result. This avoids the original three lookups, reducing forwarding latency and raising lookup throughput.
3. Given the enormous size of the name forwarding table, this dissertation proposes an optimal compression algorithm named CONSERT. The compressed forwarding table contains the minimum possible number of prefixes while leaving forwarding behavior unchanged; its optimality is proved by induction. Experiments show that CONSERT effectively reduces the number of prefixes in the name forwarding table and thereby the storage overhead.
4. On the data plane, the PIT not only records routing state but also performs request aggregation, merging multiple identical requests into one to reduce the traffic forwarded upstream (from users toward servers). To study two metrics, the PIT request-aggregation ratio and the number of PIT entries, this dissertation takes a new view of the PIT: the PIT is a TTL (Time To Live) based cache. Based on this view, we build an analytical model of the PIT and derive methods to compute both metrics. Model and simulation results agree closely, demonstrating the model's soundness and accuracy.
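The abstract only names the unified-index idea; the thesis's actual data structure is not given here. As a hedged illustration, one plausible reading is a single component-level name trie whose nodes carry CS, PIT, and FIB slots, so that one traversal returns the exact-match CS/PIT hits and the longest-prefix FIB next hop. A minimal Python sketch, with all class and field names hypothetical:

```python
class Node:
    """Trie node holding slots for all three tables at this name prefix."""
    __slots__ = ("children", "cs", "pit", "fib")

    def __init__(self):
        self.children = {}
        self.cs = None   # cached Data (exact match)
        self.pit = None  # pending-interest entry (exact match)
        self.fib = None  # next hop (longest-prefix match)

class UnifiedIndex:
    def __init__(self):
        self.root = Node()

    def _walk(self, name, create=False):
        node = self.root
        for comp in name.strip("/").split("/"):
            nxt = node.children.get(comp)
            if nxt is None:
                if not create:
                    return None
                nxt = node.children[comp] = Node()
            node = nxt
        return node

    def insert_fib(self, prefix, nexthop):
        self._walk(prefix, create=True).fib = nexthop

    def insert_cs(self, name, data):
        self._walk(name, create=True).cs = data

    def insert_pit(self, name, entry):
        self._walk(name, create=True).pit = entry

    def lookup(self, name):
        """Single traversal: (cs_hit, pit_hit, fib_longest_prefix_nexthop)."""
        node, lpm = self.root, self.root.fib
        for comp in name.strip("/").split("/"):
            node = node.children.get(comp)
            if node is None:
                return (None, None, lpm)   # exact match failed; keep LPM
            if node.fib is not None:
                lpm = node.fib             # remember deepest FIB entry
        return (node.cs, node.pit, lpm)

idx = UnifiedIndex()
idx.insert_fib("/com/example", "face1")
idx.insert_fib("/com/example/video", "face2")
idx.insert_cs("/com/example/video/seg1", "DATA")
cs, pit, nexthop = idx.lookup("/com/example/video/seg1")
# cs == "DATA", nexthop == "face2" (longest matching prefix wins)
```

The point of the sketch is only that the three lookups share one traversal; the thesis's real index presumably also addresses hashing, memory layout, and update cost, which are omitted here.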
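The TTL-cache view of the PIT in contribution 4 lends itself to a small simulation. The following Python sketch is an illustration of that viewpoint, not the thesis's analytical model; the Poisson arrival process, entry lifetime, and name population are made-up assumptions. It estimates the two metrics the abstract names: the request-aggregation ratio and the mean number of PIT entries.

```python
import random

def simulate_pit(ttl, rate, names, duration, seed=0):
    """Treat the PIT as a TTL-based cache: an Interest whose name is
    already pending (entry not yet expired) is aggregated; otherwise it
    creates a new entry that lives for `ttl` seconds."""
    rng = random.Random(seed)
    pit = {}                 # name -> expiry time
    t = 0.0
    total = aggregated = 0
    area = 0.0               # approximate time-integral of PIT size
    while t < duration:
        dt = rng.expovariate(rate)      # Poisson Interest arrivals
        t += dt
        for n in [n for n, exp in pit.items() if exp <= t]:
            del pit[n]                  # purge expired entries
        area += len(pit) * dt
        total += 1
        name = rng.choice(names)
        if name in pit:
            aggregated += 1             # request aggregation
        else:
            pit[name] = t + ttl         # new PIT entry
    return aggregated / total, area / duration

agg_ratio, mean_entries = simulate_pit(
    ttl=0.5, rate=100.0,
    names=[f"/n/{i}" for i in range(50)], duration=100.0)
```

Under these toy parameters the simulation yields the two quantities directly; the thesis's contribution is to predict them analytically from the TTL-cache model instead of by simulation.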
[Degree-granting institution]: Tsinghua University
[Degree level]: Doctorate
[Year conferred]: 2015
[CLC classification]: TP393.05


Article ID: 2392818


Link to this article: https://www.wllwen.com/guanlilunwen/ydhl/2392818.html


