Research on Data Mining Algorithms Based on Low-Rank Sparse Subspaces
[Abstract]: High-dimensional data not only has many attributes but also contains substantial redundancy, noise, and outliers, which makes its spatial structure complex and prevents data mining algorithms from exploiting the true association structure of the data to build better models. An important step is to capture the strength of association between samples or between attributes by learning a coefficient matrix; however, this learning process is sensitive to noise and outliers. Sparse learning makes the coefficient matrix sparse: coefficients between related samples or attributes are large, while coefficients between unrelated samples or attributes are very small or even zero. A sparse coefficient matrix therefore reflects the true relationships in the data, allowing a data mining algorithm to remove redundancy, noise, and outliers and thereby achieve good robustness. In addition, high-dimensional data can often be represented by a union of low-dimensional subspaces, so using subspace learning to transform a complex high-dimensional space into simple low-dimensional subspaces helps a data mining algorithm discover the global and local structure hidden in the data and obtain more effective mining results. Because noise and redundancy inflate the rank of the data matrix, data mining algorithms may fail to capture the true low-rank structure of high-dimensional data; imposing a low-rank constraint during the learning of the coefficient matrix explicitly reduces its rank. Most existing models use only global structure information or only local structure information. A few algorithms do build their models from more comprehensive structure information, but they do not combine sparse learning with low-rank constraints and subspace learning to obtain complementary structural information from the data and thus a more effective data mining model.
Secondly, many existing algorithms split the data mining task into several independent steps; even if each step reaches its own optimum within its own optimization process, the final solution is not guaranteed to be globally optimal. To overcome these shortcomings of existing data mining algorithms, this thesis proposes a new multi-output regression algorithm and a new subspace clustering algorithm to mine high-dimensional data more effectively. The main contributions can be summarized as follows: 1) A low-rank feature reduction method for multi-output regression (LFR), based on low-rank constraints and feature selection, is proposed to address the fact that existing multi-output regression algorithms do not fully exploit the intrinsic associations in high-dimensional data: the correlations between attributes, between output variables, and between training samples can all improve the predictive ability of a multi-output regression model. The regression coefficient matrix is represented as the product of two new low-rank matrices, so the low-rank constraint on the coefficient matrix indirectly captures the correlations among the output variables. In addition, combining the l2,1-norm with the loss term performs sample selection, removing the influence of outliers on the learning of the regression model; as a result, the LFR algorithm achieves very good multi-output regression prediction. 2) A subspace clustering algorithm based on low-rank constraints and sparse learning (LSS) is proposed, addressing the facts that existing spectral clustering cannot guarantee that the final solution is optimal and that it does not learn the similarity matrix from the low-dimensional structure of the original data.
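The low-rank-plus-robust-loss idea behind LFR can be sketched as follows. This is a simplified illustration, not the thesis algorithm: the function name, the alternation between a weighted ridge solve and a rank-truncating SVD, and all parameter values are assumptions; the l2,1-norm on the residual rows is handled by iteratively reweighted least squares.

```python
import numpy as np

def lfr_sketch(X, Y, rank=2, lam=1e-2, n_iter=20, eps=1e-8):
    """Illustrative multi-output regression with a low-rank coefficient
    matrix and an l2,1-style robust loss (a sketch, not the LFR model
    verbatim).  X: (n, d) inputs, Y: (n, m) outputs.
    Returns W of shape (d, m) with rank <= `rank`."""
    n, d = X.shape
    w = np.ones(n)  # per-sample weights; IRLS surrogate for the l2,1 loss
    for _ in range(n_iter):
        # weighted ridge: argmin_W sum_i w_i ||x_i W - y_i||^2 + lam ||W||_F^2
        Xw = X * w[:, None]
        W = np.linalg.solve(X.T @ Xw + lam * np.eye(d), Xw.T @ Y)
        # enforce the low-rank constraint by truncated SVD projection
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        W = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # reweight: rows with large residuals (outliers) get small weight
        res = np.linalg.norm(X @ W - Y, axis=1)
        w = 1.0 / (2.0 * res + eps)
    return W
```

Down-weighting whole residual rows is what lets the l2,1-norm suppress outlying training samples, while the SVD truncation plays the role of the low-rank factorization of the coefficient matrix.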
In the fourth chapter, the LSS algorithm is proposed; it combines sparse learning, low-rank constraints, sample self-expression, and subspace learning to achieve better clustering results. Specifically, LSS uses sparse learning to select features of the coefficient matrix, removing redundant features and noise; it learns similarity matrices from both the original data space and its low-dimensional subspace, and jointly refines the two matrices during iterative optimization so that the similarity matrix better reflects the true relationships among the data. In addition, a low-rank constraint is imposed on the Laplacian matrix of the similarity matrix, so that the best similarity matrix and the best clustering result are obtained simultaneously during iterative optimization. In summary, this thesis studies sparse learning, low-rank constraints, and subspace learning, and proposes two new data mining algorithms that remedy the shortcomings of existing multi-output regression and subspace clustering algorithms, adding new ideas and applications to research on data mining algorithms. Experiments on real public datasets show that the two proposed algorithms achieve very good mining results under a variety of evaluation metrics.
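The self-expression-plus-spectral pipeline that LSS builds on can be sketched as follows. This is a minimal illustration under stated assumptions, not the LSS algorithm: a ridge penalty stands in for its sparse and low-rank terms, the similarity matrix is learned only from the original space rather than jointly with a low-dimensional subspace, and all names and parameters are hypothetical.

```python
import numpy as np

def spectral_subspace_clustering(X, n_clusters, lam=1e-2, n_iter=50):
    """Illustrative subspace clustering: ridge self-expression, a
    symmetric similarity matrix, spectral embedding of the normalized
    Laplacian, and plain k-means.  X: (n, d) samples; returns labels."""
    n = X.shape[0]
    # self-expression: each sample as a combination of the others,
    # C = argmin ||X - C X||_F^2 + lam ||C||_F^2 (diagonal zeroed after)
    G = X @ X.T
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)
    # symmetric similarity matrix and normalized graph Laplacian
    Wsim = (np.abs(C) + np.abs(C).T) / 2.0
    deg = Wsim.sum(axis=1) + 1e-12
    Dm = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(n) - Dm @ Wsim @ Dm
    # spectral embedding: eigenvectors of the k smallest eigenvalues
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    # k-means with deterministic farthest-point initialization
    centers = [emb[0]]
    for _ in range(n_clusters - 1):
        d2 = np.min([((emb - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(emb[int(np.argmax(d2))])
    centers = np.array(centers)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((emb[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = emb[labels == k].mean(axis=0)
    return labels
```

Samples drawn from the same low-dimensional subspace reconstruct each other with large coefficients, so the similarity graph is close to block-diagonal and the spectral step recovers the subspaces as clusters; LSS additionally learns this similarity jointly with the clustering instead of fixing it in advance.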
【Degree-granting institution】: Guangxi Normal University (广西师范大学)
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP311.13
Article ID: 2194417
本文链接:https://www.wllwen.com/kejilunwen/ruanjiangongchenglunwen/2194417.html