
Research on a Mechanism for Avoiding Duplicate Data Storage in Cloud Storage

Posted: 2018-07-05 05:47

Topic: cloud storage + data deduplication; Source: Master's thesis, Yunnan University, 2013


【Abstract】: With its rapid development, cloud storage has become increasingly popular because it offers low-cost, large-capacity storage and resource handling. Yet even as data volumes grow exponentially, studies have found that existing storage systems hold large amounts of redundant data. In cloud storage, reducing the cost of transmitting, storing, and managing massive data while guaranteeing its security and integrity has therefore become an important problem. In particular, improving deduplication efficiency without compromising data confidentiality is a key challenge. This thesis builds a secure deduplication mechanism that maximizes deduplication efficiency while keeping stored data confidential.

To address the bottlenecks above, namely abundant redundant data, poor scalability, low efficiency, and limited network bandwidth, this thesis first reviews the relevant cryptographic theory and the key techniques behind secure deduplication. Building on that analysis, we propose a proxy-based secure deduplication mechanism and extend it with version control.

The main design idea is as follows: deduplication is performed on the client side, where hash values serve as data-block identifiers for duplicate detection and block encryption; the proxy side provides metadata management and key management so that authorized users can access the stored blocks, thereby realizing a proxy-based secure deduplication mechanism. The thesis also introduces version control for secure cloud storage and integrates it into this mechanism. The result ensures that, when duplicate data is removed, deduplication efficiency, and hence storage-space utilization, is maximized without sacrificing data security.
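The client-side scheme described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: block size, the SHA-256 choice, and the `stored` dictionary (standing in for the proxy's metadata index) are all assumptions made for the example.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative fixed block size (4 MiB), an assumption

def split_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split a byte stream into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def dedup_upload(data: bytes, stored: dict) -> list:
    """Client-side dedup: hash each block and upload only unseen blocks.

    `stored` stands in for the proxy's hash -> block metadata index.
    Returns the file 'recipe': the ordered list of block identifiers,
    from which the file can later be reassembled.
    """
    recipe = []
    for block in split_blocks(data):
        block_id = hashlib.sha256(block).hexdigest()  # hash as block identifier
        if block_id not in stored:     # duplicate check happens before transfer,
            stored[block_id] = block   # so only new blocks cross the network
        recipe.append(block_id)
    return recipe
```

Because the duplicate check runs on the client before any upload, a file whose blocks already exist on the server consumes no extra bandwidth or storage, which is exactly the bandwidth-bottleneck relief the abstract describes.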
【Degree-granting institution】: Yunnan University
【Degree level】: Master's
【Year awarded】: 2013
【CLC number】: TP333


Article No.: 2099170


Link: https://www.wllwen.com/kejilunwen/jisuanjikexuelunwen/2099170.html


