
Research on Intelligent Modeling Methods with Better Transparency and Interpretability

Published: 2018-05-02 19:00

Topic of this thesis: intelligent modeling + fuzzy systems; Reference: Jiangnan University, 2017 master's thesis


【Abstract】: In complex scenarios such as face recognition and speech recognition, intelligent models represented by neural networks can already reach very high recognition accuracy. In specific fields such as intelligent medical diagnosis, however, there are much higher requirements on the transparency and interpretability of intelligent modeling methods, and a model with good interpretability helps people uncover the underlying regularities of things. Generally speaking, traditional statistical learning methods are simple in themselves and therefore easy to understand and explain, whereas intelligent models behave like black boxes: their transparency is poor, and the reasoning process inside the model is hard to explain. Inference based on fuzzy rules carries stronger semantics, so fuzzy systems do comparatively well in terms of interpretability. Early fuzzy systems had low complexity, needed only a small number of fuzzy rules to form the rule base, and allowed domain experts to take part in formulating the rules, so the fuzzy systems built this way were still fairly transparent. However, as fuzzy systems are increasingly fused with neural networks, the growing complexity of fuzzy rules and system structures leads to a loss of interpretability. To obtain intelligent models with better transparency and interpretability, this thesis carries out the following research:

1) Compare and analyze the interpretability of artificial-intelligence models such as neural networks and fuzzy systems, for example how the number of neurons or the number of fuzzy rules affects interpretability; compare and analyze how different classification strategies affect the interpretability of the resulting classification models, e.g. the respective advantages and disadvantages of the "one-vs-one" and "one-vs-rest" strategies (for K classes, one-vs-one trains K(K-1)/2 pairwise binary classifiers, while one-vs-rest trains only K).

2) Based on the minimax probability decision technique, combine neural networks, fuzzy systems, and the kernel trick to obtain a generalized hidden-mapping minimax probability machine with better interpretability, and point out the physical meaning of the learned α index (a standard formulation is sketched after this abstract). Simple experiments examine how the various intelligent models differ at the interpretability level on classification problems.

3) For the recognition of epileptic EEG signals, and again based on the minimax probability decision technique, connect single-hidden-layer radial basis function (RBF) neural networks with a classification tree, fully considering and exploiting the differing separability between the classes of data, to obtain an RBF minimax probability classification tree with better interpretability, whose reasoning process is clear and easy to understand and explain.

4) Based on the interval type-2 TSK fuzzy system, use subspace clustering and grid partitioning to generate sparse, regular rule centers, build rule antecedents with more concise and clearer semantics, and simplify the rule consequents to zero-order form (a minimal zero-order TSK sketch also follows this abstract), thereby reducing complexity and obtaining an interval type-2 fuzzy-subspace zero-order TSK system with better interpretability. Experiments on a large amount of medical data verify the effectiveness and advantages of the proposed methods.
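For context on item 2: the learned α of a minimax probability machine has a standard physical reading. The sketch below is the classical linear MPM formulation (Lanckriet et al.), given only as an assumed reference point; the thesis's generalized hidden-mapping variant presumably applies the same decision principle after a hidden (kernel, neural-network, or fuzzy) feature mapping, with the role of α unchanged.

\max_{\alpha,\ \mathbf{a}\neq\mathbf{0},\ b}\ \alpha
\quad\text{s.t.}\quad
\inf_{\mathbf{x}\sim(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1)}\Pr\{\mathbf{a}^{\top}\mathbf{x}\ge b\}\ge\alpha,
\qquad
\inf_{\mathbf{x}\sim(\boldsymbol{\mu}_2,\boldsymbol{\Sigma}_2)}\Pr\{\mathbf{a}^{\top}\mathbf{x}\le b\}\ge\alpha .

By the multivariate Chebyshev bound this reduces to
\min_{\mathbf{a}}\ \sqrt{\mathbf{a}^{\top}\boldsymbol{\Sigma}_1\mathbf{a}}+\sqrt{\mathbf{a}^{\top}\boldsymbol{\Sigma}_2\mathbf{a}}
\quad\text{s.t.}\quad \mathbf{a}^{\top}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)=1,
and the optimum satisfies \alpha^{*}=\kappa^{*2}/(1+\kappa^{*2}), where \kappa^{*} is the reciprocal of the optimal objective value. Thus α is a distribution-free worst-case lower bound on the probability that a future sample is classified correctly, which is the natural interpretation of the learned α index.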
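As a companion to item 4, the snippet below is a minimal, hypothetical zero-order TSK sketch in Python (NumPy). It only illustrates why zero-order consequents are easy to read: each rule contributes a constant output, so a prediction is a membership-weighted average of named rules. It is a generic type-1 system with Gaussian antecedents, not the interval type-2 fuzzy-subspace system proposed in the thesis, and the function name and parameters are illustrative.

import numpy as np

def tsk0_predict(X, centers, widths, consequents):
    """Zero-order TSK inference.
    X: (n, d) inputs; centers, widths: (K, d) Gaussian antecedent
    parameters of K rules; consequents: (K,) constant rule outputs."""
    diff = X[:, None, :] - centers[None, :, :]            # (n, K, d)
    mu = np.exp(-0.5 * (diff / widths[None, :, :]) ** 2)  # per-dimension memberships
    firing = mu.prod(axis=2)                              # rule firing strengths, (n, K)
    w = firing / firing.sum(axis=1, keepdims=True)        # normalized rule weights
    return w @ consequents                                # weighted vote of constants

# Toy usage: two rules on a one-dimensional input.
centers = np.array([[0.0], [1.0]])
widths = np.array([[0.3], [0.3]])
consequents = np.array([-1.0, 1.0])
print(tsk0_predict(np.array([[0.1], [0.9]]), centers, widths, consequents))

Because every consequent is a single constant, each rule can be read off directly ("IF x is near 0 THEN output ≈ -1"), which is the kind of transparency the thesis targets with its zero-order form.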
【Degree-granting institution】: Jiangnan University
【Degree level】: Master's
【Year degree awarded】: 2017
【Classification number (CLC)】: R318; TP183



