Kinect Motion-Driven Real-Time Simulation of Subtle 3D Facial Expressions
Published: 2019-05-08 09:01
[Abstract]: Producing compelling dynamic facial expression animation is a challenging problem in computer graphics. In recent years, virtual characters have appeared ever more frequently in computer games, advertising, and film production, making character animation with subtle facial expressions increasingly important. This thesis proposes a new technique for generating real-time animation of subtle 3D facial expressions: a 3D facial mesh model is driven to produce virtual character animation that carries the fine motion features of the user's expression.

First, to capture the user's changing expression states in real time, Microsoft's Kinect 3D depth camera tracks the user's face, and the captured facial motion data are analyzed and decomposed into two components: the rigid motion of the head and the non-rigid motion of the facial expression. Compared with motion capture systems that depend on specialized hardware, the Kinect lowers hardware, setup, and maintenance costs, and adapts well to the complex backgrounds of natural environments.

Second, building on this capture and processing stage, the local detail-preserving property of Laplacian coordinates is exploited: Laplacian deformation maps the captured expression onto a neutral 3D face model, reconstructing its pose so that the virtual character reproduces the user's expression state, including eye opening and closing and mouth movement.
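To make the retargeting step concrete, the following C++/Eigen fragment is a minimal sketch of Laplacian deformation, not the thesis's actual implementation: it assumes uniform (umbrella) Laplacian weights and soft positional constraints at handle vertices (e.g., vertices pinned to Kinect-tracked feature points); the function name, inputs, and constraint weight are illustrative.

```cpp
// Minimal sketch of Laplacian mesh deformation with uniform weights.
// Neighbor lists and handle constraints are assumed inputs.
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <utility>
#include <vector>

// Solves  argmin_X ||L X - delta||^2 + sum_h w^2 ||x_h - target_h||^2,
// i.e., preserve local detail (Laplacian coordinates) while softly
// constraining handle vertices to their captured target positions.
Eigen::MatrixX3d laplacianDeform(
    const Eigen::MatrixX3d& V,                      // n x 3 rest vertices
    const std::vector<std::vector<int>>& nbrs,      // one-ring neighbors
    const std::vector<std::pair<int, Eigen::RowVector3d>>& handles,
    double w = 10.0)                                // constraint weight
{
    const int n = static_cast<int>(V.rows());
    const int m = static_cast<int>(handles.size());

    // Laplacian coordinates of the rest pose: delta_i = v_i - mean(nbrs).
    // Every vertex is assumed to have a non-empty one-ring.
    Eigen::MatrixX3d delta(n, 3);
    std::vector<Eigen::Triplet<double>> trip;
    for (int i = 0; i < n; ++i) {
        trip.emplace_back(i, i, 1.0);
        const double k = 1.0 / static_cast<double>(nbrs[i].size());
        Eigen::RowVector3d mean = Eigen::RowVector3d::Zero();
        for (int j : nbrs[i]) {
            trip.emplace_back(i, j, -k);
            mean += V.row(j);
        }
        delta.row(i) = V.row(i) - mean * k;
    }
    // Soft positional constraints appended as extra weighted rows.
    for (int c = 0; c < m; ++c)
        trip.emplace_back(n + c, handles[c].first, w);

    Eigen::SparseMatrix<double> A(n + m, n);
    A.setFromTriplets(trip.begin(), trip.end());

    Eigen::MatrixX3d b(n + m, 3);
    b.topRows(n) = delta;
    for (int c = 0; c < m; ++c)
        b.row(n + c) = w * handles[c].second;

    // Normal-equations least-squares solve: (A^T A) X = A^T b.
    Eigen::SparseMatrix<double> AtA = A.transpose() * A;
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver(AtA);
    Eigen::MatrixX3d Atb = A.transpose() * b;
    return solver.solve(Atb);
}
```

Solved per frame with the handle targets taken from the tracked feature points, this kind of system deforms the neutral model while the Laplacian term keeps the surrounding surface detail intact.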
Third, to produce real-time animation with subtle expression features such as wrinkles, a skin texture with pores, stubble, and rough surface relief is generated and rendered on the GPU with lighting and normal mapping. Weights for dynamic texture mapping and wrinkle formation are then computed from the action units captured by the Kinect, and a wrinkle function is introduced to simulate the motion of wrinkles in real time, yielding motion-driven facial expression animation.

Finally, a real-time subtle facial expression simulation system is designed and implemented with the OpenGL graphics API and the GLSL shading language. Experiments show that the proposed method produces realistic, motion-driven, real-time animation of subtle facial expressions, suitable for digital entertainment, video conferencing, and similar applications.
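As a rough illustration of the third step, the sketch below maps face-tracking action-unit values to per-region wrinkle weights; the AU names, thresholds, and the smoothstep ramp are illustrative assumptions, not the thesis's actual wrinkle function. On the GPU side, each weight would be uploaded as a uniform and used to blend a neutral and a wrinkled normal map in the fragment shader, e.g., n = normalize(mix(nNeutral, nWrinkle, weight)) in GLSL.

```cpp
// Hypothetical mapping from Kinect action units (AUs) to wrinkle weights.
#include <algorithm>

// Clamped smoothstep, mirroring GLSL's smoothstep() (requires C++17).
static float smoothstepf(float e0, float e1, float x) {
    const float t = std::clamp((x - e0) / (e1 - e0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

struct WrinkleWeights {
    float forehead;    // horizontal forehead wrinkles (brow raise)
    float glabella;    // frown lines between the brows (brow lower)
    float nasolabial;  // smile folds (lip-corner pull)
};

// The AU inputs are assumed to be normalized to [0, 1] and read from the
// face-tracking result each frame. A small activation threshold keeps
// resting-pose tracking noise from producing visible wrinkles.
WrinkleWeights computeWrinkleWeights(float auBrowRaise,
                                     float auBrowLower,
                                     float auLipCornerPull) {
    WrinkleWeights w;
    w.forehead   = smoothstepf(0.15f, 0.80f, auBrowRaise);
    w.glabella   = smoothstepf(0.15f, 0.80f, auBrowLower);
    w.nasolabial = smoothstepf(0.20f, 0.90f, auLipCornerPull);
    return w;
}
```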
[Degree-granting institution]: 燕山大学 (Yanshan University)
[Degree level]: Master's
[Year conferred]: 2013
[Classification number]: TP391.41
[Document ID]: 2471780
[Link]: https://www.wllwen.com/wenyilunwen/guanggaoshejilunwen/2471780.html