
M2ram: A Huge Page Memory Compression System for Cloud Platforms

Published: 2018-05-28 22:51

Topic: Linux + kernel; Source: Master's thesis, Zhejiang University, 2017


【Abstract】: In today's cloud computing environment, dozens or even hundreds of virtual machines are consolidated onto a single physical server to host cloud services and improve the utilization of physical resources. To guarantee the quality of service of these cloud workloads, the physical servers manage physical memory with 2 MB huge pages and run without a swap partition. However, Hugetlbfs, the existing huge page management mechanism in Linux, cannot meet these requirements well, because it supports neither memory compression and reclamation nor flexible expansion of the huge page pool. Moreover, in real deployments a physical server's memory contains a large amount of duplicate data, yet existing memory deduplication techniques (KSM, UKSM) do not support huge pages well, so precious physical memory is left underutilized. To improve both the performance of physical servers in the cloud and the utilization (overcommit ratio) of physical memory, while modifying the system kernel as little as possible, this thesis designs and implements M2ram (2M ram), a huge page memory compression management system built on the huge page allocation framework PHPA (Pristine Huge Page Allocator). The system compresses infrequently used huge pages and keeps them in memory, thereby raising the memory overcommit ratio. To realize the system, this thesis makes the following contributions: (1) It adopts the new huge page management framework PHPA, which is loosely coupled with the Linux kernel and sharply reduces the memory overhead of huge page metadata, further saving usable physical memory. (2) It analyzes the target workload and, based on that analysis, designs M2ram as a complete huge page compression pipeline covering hot/cold page tracking, huge page reclamation, compressed memory management, and page fault handling; multi-stream compression improves M2ram's concurrency on NUMA architectures. (3) It proposes a new compressed data storage management mechanism that stores compressed 2 MB huge page data in 4 KB pages, with a worst case space waste of only 1/512 and no fragmentation. (4) In contrast to Hugetlbfs's serialized page fault handling and complex SwapCache mechanism, it implements parallel huge page fault handling and eliminates the SwapCache, keeping the kernel highly responsive. Simulation experiments and standard benchmarks show that the compression system raises the memory overcommit ratio to more than 2x. The system's high memory overcommit ratio, responsiveness, and robustness give it good prospects for industrial application.
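To make the 4 KB chunk storage idea from contribution (3) concrete, the sketch below compresses a 2 MB buffer in user space and spills the result into 4 KB chunks. It is a minimal illustration only: the helper names (`store_huge_page`, `chunk_store`) and the choice of LZ4 are assumptions, since the thesis's actual mechanism runs inside the Linux kernel and the abstract does not name its compression algorithm.

```c
/*
 * Minimal user-space sketch of storing a compressed 2 MB huge page in
 * 4 KB chunks. Illustrative only; not the thesis's kernel implementation.
 * Build (assuming liblz4 is installed): gcc m2ram_sketch.c -llz4
 */
#include <lz4.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HUGE_PAGE_SIZE  (2 * 1024 * 1024)  /* 2 MB huge page */
#define SMALL_PAGE_SIZE 4096               /* 4 KB base page */

struct chunk_store {
    char **chunks;       /* 4 KB chunks holding the compressed bytes */
    int nr_chunks;       /* number of 4 KB chunks actually used      */
    int compressed_len;  /* exact compressed size in bytes           */
};

/* Compress one 2 MB page and copy the result into 4 KB chunks. */
static int store_huge_page(const char *huge_page, struct chunk_store *cs)
{
    int bound = LZ4_compressBound(HUGE_PAGE_SIZE);
    char *tmp = malloc(bound);
    if (!tmp)
        return -1;

    int clen = LZ4_compress_default(huge_page, tmp, HUGE_PAGE_SIZE, bound);
    if (clen <= 0) {
        free(tmp);
        return -1;
    }

    int nr = (clen + SMALL_PAGE_SIZE - 1) / SMALL_PAGE_SIZE;
    cs->chunks = malloc(nr * sizeof(char *));
    for (int i = 0; i < nr; i++) {
        cs->chunks[i] = malloc(SMALL_PAGE_SIZE);
        int off = i * SMALL_PAGE_SIZE;
        int len = clen - off < SMALL_PAGE_SIZE ? clen - off : SMALL_PAGE_SIZE;
        memcpy(cs->chunks[i], tmp + off, len);
    }
    cs->nr_chunks = nr;
    cs->compressed_len = clen;
    free(tmp);
    return 0;
}

int main(void)
{
    char *page = calloc(1, HUGE_PAGE_SIZE);  /* a highly compressible page */
    struct chunk_store cs = {0};

    if (page && store_huge_page(page, &cs) == 0) {
        /* Waste is only the unused tail of the last 4 KB chunk, so the
         * worst case ratio is 4 KB / 2 MB = 1/512 per stored huge page. */
        int waste = cs.nr_chunks * SMALL_PAGE_SIZE - cs.compressed_len;
        printf("compressed %d bytes into %d x 4 KB chunks, waste %d bytes\n",
               cs.compressed_len, cs.nr_chunks, waste);
    }
    free(page);
    return 0;
}
```

Because a compressed page is scattered over independent 4 KB pages rather than a contiguous region, the only unusable space is the tail of the last chunk (at most 4 KB per 2 MB page, hence the 1/512 bound stated in the abstract), and no external fragmentation can accumulate.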
【Degree-granting institution】: Zhejiang University
【Degree level】: Master's
【Year degree granted】: 2017
【CLC number】: TP393.09; TP311.52


Article ID: 1948493



Link: https://www.wllwen.com/guanlilunwen/ydhl/1948493.html


