Research on Multi-Dimensional Speech Information Recognition Technology
[Abstract]: With the growing demand for artificial intelligence and the rapid development of machine learning, voice interaction has become the direction of the next generation of smart homes and many other applications, and speech recognition, speaker identification, and speech emotion recognition have attracted more and more attention. At present, research on speech recognition at home and abroad mostly addresses a single dimension of information or content in isolation. In daily life, however, the speech signals people receive are inherently mixed signals that carry three broad kinds of information: the linguistic content of the utterance, information related to the speaker's characteristics (such as gender, age, identity, and emotional state), and background sound. In human conversation, we recognize all of these kinds of information at the same time. Recognizing each kind separately produces ambiguity in semantic understanding, reduces the robustness of speech recognition, and holds back the development of spoken dialogue systems. If a machine could, like a person, simultaneously recognize the speaker's identity, age, gender, and emotional state, and even the background sound, it would greatly improve the efficiency of human-computer dialogue and resolve the bottleneck of single-dimensional recognition systems. This team has therefore proposed a new research topic: the simultaneous recognition of multi-dimensional speech information. The three broad kinds of information above involve nearly ten recognition targets, so recognizing them all at once is very difficult and the scope of the research is very wide. As a pioneering attempt, this thesis first studies multi-dimensional information recognition technology related to the speaker.
The work covers gender-dependent emotion recognition, and gender and identity recognition under emotional speech conditions. Starting from the block diagram of a single-dimensional recognition system, the thesis analyzes what traditional single-dimension speaker-information recognizers have in common and where they differ, and focuses on the two key technologies needed to recognize multi-dimensional speaker information simultaneously: feature extraction and model training. (1) It is found that different speech feature parameters represent different kinds of speech-related information, yet the same feature vectors can be reused across different single-dimensional recognition tasks. The acoustic features in common use fall into three categories: prosodic features, voice-quality features, and spectral features. Because all three relate to the speaker, this thesis uses a fusion of the three as the feature set for multi-dimensional speaker-information recognition; compared with any single category, the fused set carries richer speech information. Two methods are used to obtain the fusion features: low-dimensional features extracted on the Matlab simulation platform, and high-dimensional features extracted with the openSMILE toolkit. (2) Since multi-dimensional information recognition lacks mature references and theoretical groundwork, the thesis first constructs a gender-based multi-dimensional recognition baseline system to serve as the reference model, and then compares it against traditional single-dimensional recognizers.
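The fusion idea above can be illustrated with a minimal sketch. This is not the thesis's Matlab or openSMILE pipeline; it is a numpy-only toy in which one simplified stand-in descriptor per feature family (RMS energy for prosody, zero-crossing rate for voice quality, spectral centroid for the spectrum) is computed per frame and concatenated into a fused feature matrix.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames (one frame per row)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def fused_features(x, sr=16000):
    """Concatenate one stand-in descriptor per feature family, per frame."""
    frames = frame_signal(x)
    rms = np.sqrt((frames ** 2).mean(axis=1))                     # prosodic proxy
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)    # voice-quality proxy
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)  # spectral proxy
    return np.column_stack([rms, zcr, centroid])                  # (n_frames, 3)

# Usage: one second of a noisy 220 Hz sine as a stand-in utterance.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(sr)
feats = fused_features(x, sr)
print(feats.shape)  # fused per-frame feature matrix
```

A real system would replace these toy descriptors with full prosodic, voice-quality, and spectral feature sets (e.g., an openSMILE configuration), but the concatenation step is the same.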
Compared with the traditional single-dimensional systems for emotion, gender, and identity recognition, the multi-dimensional system's average recognition rate is 11.37% higher, which demonstrates the feasibility and effectiveness of the baseline scheme and shows that multi-dimensional recognition can itself improve the recognition rate of each single-dimensional task, which is a new advantage in its own right. (3) Because multi-dimensional speaker-information recognition is essentially a multi-label learning problem, multi-instance multi-label (MIML) learning algorithms are considered, and the multi-instance multi-label support vector machine (MIMLSVM) is applied to this task for the first time. Experiments show that, except for gender recognition, the recognition rates of the improved MIMLSVM system are higher than those of the baseline system. With the high-dimensional features, the improved MIMLSVM system's accuracy is lower than with the low-dimensional features; with the low-dimensional features it is about 1.97% higher than the baseline system. Proper parameter selection and model matching can thus significantly improve the recognition rate of a multi-dimensional system. However, as the number of labels increases, the running time and computational complexity of the system increase accordingly; a certain amount of system complexity is the price paid.
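The multi-label framing can be sketched without the thesis's improved MIMLSVM. The minimal numpy toy below shows only the standard one-vs-rest decomposition that underlies such multi-label learners: one independent binary linear classifier per label (a simple logistic-style learner standing in for each per-label SVM), so one utterance can receive several labels (e.g., a gender bit and an emotion bit) at once.

```python
import numpy as np

class OneVsRestLinear:
    """Multi-label learner: one independent binary linear classifier per
    label, a simplified stand-in for the per-label SVMs inside
    one-vs-rest multi-label decompositions."""

    def __init__(self, n_labels, lr=0.1, epochs=200):
        self.n_labels, self.lr, self.epochs = n_labels, lr, epochs

    def fit(self, X, Y):
        # Y is (n_samples, n_labels) with entries in {0, 1}.
        X = np.hstack([X, np.ones((len(X), 1))])   # append bias column
        self.W = np.zeros((X.shape[1], self.n_labels))
        for _ in range(self.epochs):
            # logistic-regression gradient step, all labels in one pass
            P = 1.0 / (1.0 + np.exp(-X @ self.W))
            self.W -= self.lr * X.T @ (P - Y) / len(X)
        return self

    def predict(self, X):
        X = np.hstack([X, np.ones((len(X), 1))])
        return (X @ self.W > 0).astype(int)        # one bit per label

# Toy demo: label 0 fires when feature 0 > 0, label 1 when feature 1 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = (X > 0).astype(int)
model = OneVsRestLinear(n_labels=2).fit(X, Y)
print((model.predict(X) == Y).mean())  # per-label training accuracy
```

As the abstract notes, each added label adds another classifier to train and evaluate, which is where the extra running time and complexity of multi-label systems comes from.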
【Degree-granting institution】: Nanjing University of Posts and Telecommunications (南京邮电大学)
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TN912.34
Document ID: 2173801
Source link: https://www.wllwen.com/kejilunwen/xinxigongchenglunwen/2173801.html