Research on Music Emotion Analysis Based on Feature Vectors
Published: 2018-10-23 08:46
[Abstract]: With the rapid spread of information technology, multimedia data of all kinds have grown explosively. Music, as an art form, has become an indispensable part of human life and has long been a channel for expressing emotion: people sing for joy and sing for sorrow. Printed scores alone can no longer meet the needs of storing and retrieving music, or of communication among musicians. With the arrival of the information age, computer music has become a new research topic, and enabling computers to do what humans can do has always been a goal of that effort. Computers already play, produce, and store music, and computational analysis of musical emotion is now emerging as well, with the aim of letting a computer "listen" to a piece and automatically recognize the emotion it expresses. This thesis presents an in-depth study of automatic music emotion analysis. The proposed analysis model consists of three parts: a music feature vector model, a music emotion model, and a classification (cognitive) model. The music feature vector model is an eight-dimensional vector of features extracted from the music. After introducing the concept of melodic area, the thesis defines a notion of musical energy and proposes a method that uses this energy to divide a piece into segments; for each segment, digital music feature extraction yields eight features: tempo, melodic direction, dynamics, meter, rhythmic variation, major third, minor third, and timbre. The emotion of each segment is then analyzed using the emotion model and the classification model. The music emotion model describes musical emotion; the thesis reviews several models commonly used by researchers, including the Hevner emotion circle, the Thayer emotion model, and emotion semantic models, and compares their strengths and weaknesses. Combining the Hevner emotion circle with the emotion semantic model yields an emotion vector model built from the eight emotion categories of the Hevner circle, and this model serves as the emotion model in the experiments. The classification model maps the feature model onto the emotion model algorithmically, so classification here is a pattern recognition problem. After briefly surveying several pattern recognition methods and comparing their pros and cons, a BP (back-propagation) neural network is chosen as the cognitive model. To meet the needs of music emotion analysis, the learning process of the BP network is modified so that it better accommodates the subjective nature of emotional judgments. Finally, the three parts are combined into a complete music emotion analysis model, whose functionality and performance are verified experimentally. Compared with results from prior work, the model built with the proposed method analyzes the emotion of digital music well and achieves higher accuracy.
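The abstract does not give the thesis's exact definition of musical energy, so the following is only a minimal sketch under plausible assumptions: energy is computed as framewise short-time energy of the audio signal, and segment boundaries are placed where the energy dips well below its mean (the threshold value is a hypothetical choice for illustration).

```python
import numpy as np

def frame_energy(signal, frame_len=2048, hop=512):
    """Short-time energy per frame (sum of squared samples)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

def split_segments(energy, rel_threshold=0.1):
    """Split the piece at frames whose energy falls below a fraction
    of the mean energy; returns (start, end) frame-index pairs."""
    low = energy < rel_threshold * energy.mean()
    segments, start = [], None
    for i, is_low in enumerate(low):
        if not is_low and start is None:
            start = i                      # segment begins
        elif is_low and start is not None:
            segments.append((start, i))    # segment ends at a low-energy dip
            start = None
    if start is not None:
        segments.append((start, len(low)))
    return segments
```

On a signal with a quiet passage between two loud ones, `split_segments` returns two segments, one on each side of the dip.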
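The eight per-segment features can be assembled into the fixed-order vector the thesis describes. The feature names below come from the abstract; how each one is computed (from MIDI or audio analysis) is not specified here, so the extraction itself is left as caller-supplied values.

```python
import numpy as np

# The eight per-segment features named in the abstract, in a fixed order.
FEATURES = [
    "tempo",             # speed of the segment
    "melodic_direction", # overall rising/falling tendency of the melody
    "dynamics",          # loudness / intensity
    "meter",             # beat structure
    "rhythm_variation",  # variability of note durations
    "major_third",       # prevalence of major-third intervals
    "minor_third",       # prevalence of minor-third intervals
    "timbre",            # spectral character
]

def feature_vector(segment_features: dict) -> np.ndarray:
    """Assemble the 8-D feature vector in the fixed FEATURES order.
    Values are assumed to be pre-normalized by the caller."""
    return np.array([segment_features[name] for name in FEATURES], dtype=float)
```

Each segment of the piece yields one such vector, which is then fed to the classification model.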
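The classification step maps the 8-D feature vector onto the eight Hevner emotion categories. The thesis's specific modification to the BP learning procedure is not described in this abstract, so the sketch below is a standard one-hidden-layer back-propagation network with sigmoid units and squared-error loss; the hidden-layer size and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class BPNet:
    """Minimal back-propagation network: 8 input features ->
    hidden layer -> 8 Hevner emotion-category scores."""

    def __init__(self, n_in=8, n_hidden=12, n_out=8, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sigmoid(x @ self.W1 + self.b1)
        self.y = self._sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, target):
        """One gradient step on a single (feature, emotion) pair."""
        y = self.forward(x)
        # Output- and hidden-layer deltas (squared error, sigmoid units)
        d_out = (y - target) * y * (1 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(x, d_hid)
        self.b1 -= self.lr * d_hid
        return float(np.sum((y - target) ** 2))
```

Training on labeled segments drives the loss down; at prediction time, the output component with the highest score indicates the recognized Hevner emotion category for the segment.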
【Degree-granting institution】: 西安电子科技大学 (Xidian University)
【Degree level】: Master's
【Year conferred】: 2014
【Classification number】: TN912.3
Article ID: 2288731
Article link: https://www.wllwen.com/kejilunwen/wltx/2288731.html