Research on Mongolian Language Models Based on Recurrent Neural Networks

Published: 2018-01-18 13:36

  Keywords of this article: Research on Mongolian Language Models Based on Recurrent Neural Networks  Source: Inner Mongolia University, 2017 master's thesis  Thesis type: degree thesis


  More related articles: language model, intermediate characters, speech recognition, N-gram, FRNNLM


[Abstract]: Language models are an important component of natural language processing tasks, and the N-gram language model is currently the most widely used statistical language model. In recent years, with the continuing development of deep learning, deep neural network models have gradually been applied to speech recognition and have set off a new wave of research; neural network language models are one of the more important directions within it. Mongolian language models play a crucial role in research on Mongolian information processing technologies such as Mongolian speech recognition, Mongolian information retrieval, and Mongolian machine translation. At present, neural network language models are widely used for English and Chinese, but they have seen relatively little use for Mongolian, and this thesis therefore studies neural network language models for Mongolian. Mongolian is a script of broad international influence; however, Mongolian text corpora contain many words whose presentation forms are identical but whose encodings differ, which greatly complicates the counting and retrieval of Mongolian words. This thesis focuses on solving the counting and retrieval problem for such words in order to improve the performance of Mongolian language models. First, a method is proposed that uses intermediate characters to merge Mongolian letters that share a presentation form but differ in encoding. Next, N-gram language models are built on both Latin characters and intermediate characters, along with a Latin-character-based Faster Recurrent Neural Network Language Model (FRNNLM) and an intermediate-character-based FRNNLM. Then, a method for fusing the N-gram language model with the FRNNLM is implemented, yielding a language model with better performance. Finally, the Mongolian language models are evaluated by perplexity and applied to Mongolian speech recognition for a comparison of word error rate (WER). Experimental results show that the vocabulary of the intermediate-character-based Mongolian corpus is on average 41% smaller than that of the Latin-character-based corpus; the intermediate-character-based language models (3-gram, FRNNLM) reduce perplexity by nearly 40% relative to the corresponding Latin-character-based models, improving the performance of the Mongolian language model. In Mongolian speech recognition, the intermediate-character-based models (3-gram, FRNNLM, 3-gram+FRNNLM) reduce WER by nearly 20% relative to the corresponding Latin-character-based models, and the 3-gram+FRNNLM combinations (both Latin-character-based and intermediate-character-based) reduce WER more markedly than 3-gram or FRNNLM alone, effectively improving the accuracy of Mongolian speech recognition.
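
The intermediate-character idea described in the abstract amounts to a normalization pass over the corpus before words are counted: letters whose Mongolian presentation forms are identical but whose encodings differ are collapsed into a single shared symbol, so that differently encoded spellings of the same visible word fall together. The Python sketch below only illustrates that mechanism; the mapping entries, the Latin transliterations, and the toy words are hypothetical, since the abstract does not give the thesis's actual mapping table.

# Minimal sketch of intermediate-character normalization, under assumed mappings.
# The pairs below are illustrative only; the thesis's real table of letters that
# share a presentation form but differ in encoding is not given in the abstract.
INTERMEDIATE_MAP = {
    "o": "V",  # hypothetical: 'o' and 'u' rendering identically in most positions
    "u": "V",
    "t": "D",  # hypothetical: 't' and 'd' sharing a presentation form
    "d": "D",
}

def to_intermediate(word: str) -> str:
    """Rewrite a Latin-transcribed Mongolian word into intermediate characters."""
    return "".join(INTERMEDIATE_MAP.get(ch, ch) for ch in word)

def vocabulary(corpus_words):
    """Distinct word forms after normalization (what the vocabulary count uses)."""
    return {to_intermediate(w) for w in corpus_words}

if __name__ == "__main__":
    corpus = ["mongol", "mongul", "bod", "bot"]          # toy examples, not real data
    print(len(set(corpus)), "Latin-character word forms")        # 4
    print(len(vocabulary(corpus)), "intermediate-character forms")  # 2

Collapsing variants in this way is what shrinks the vocabulary (the reported 41% average reduction) and merges the probability mass that was previously split across duplicate entries.
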
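The abstract reports fusing the N-gram model with the FRNNLM and evaluating the models by perplexity, but it does not specify the fusion scheme. A common choice is linear interpolation of the two models' per-word probabilities; the sketch below assumes that scheme, with a hypothetical interpolation weight lam, and includes the standard perplexity formula used to score a test set.

import math

def interpolate(p_rnn: float, p_ngram: float, lam: float = 0.5) -> float:
    """Linear interpolation of RNN-LM and N-gram probabilities for one word.

    lam is a tunable weight (hypothetical value here); lam=1 keeps only the
    RNN LM, lam=0 keeps only the N-gram model.
    """
    return lam * p_rnn + (1.0 - lam) * p_ngram

def perplexity(word_probs) -> float:
    """Perplexity of a test set given per-word probabilities P(w_i | history)."""
    log_sum = sum(math.log(p) for p in word_probs)
    return math.exp(-log_sum / len(word_probs))

# Toy usage: probabilities each model assigns to the words of a test sentence.
p_rnn = [0.20, 0.05, 0.30]
p_ngram = [0.10, 0.15, 0.25]
fused = [interpolate(r, n, lam=0.5) for r, n in zip(p_rnn, p_ngram)]
print(perplexity(p_rnn), perplexity(p_ngram), perplexity(fused))

Lower perplexity means the model assigns higher probability to held-out text; the WER comparison in the recognition experiments then measures the end-to-end effect of the same models inside the recognizer.
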
[Degree-granting institution]: Inner Mongolia University
[Degree level]: Master's
[Year degree conferred]: 2017
[Classification number]: TP391.1; TP183






Article link: https://www.wllwen.com/shoufeilunwen/xixikjs/1441211.html

