
Research on Affective Brain-Computer Interaction

Posted: 2021-06-24 11:18
  Emotions play an important role in everyday human-to-human communication. Besides logical intelligence, emotional intelligence is also considered an essential component of human intelligence. Emotional intelligence refers to a machine's ability to perceive, understand, and regulate human emotions. However, existing human-computer interaction systems still lack emotional intelligence. The goal of affective brain-computer interaction research is to establish an emotional communication channel between humans and machines by building affective computing models. In this thesis, we investigate the theoretical foundations, models, algorithms, implementation techniques, experimental validation, and prototype applications of affective brain-computer interaction. The main work covers the following three aspects. 1) We built multimodal emotion recognition and vigilance estimation systems using EEG, EOG, and eye movement signals together with deep neural networks. Compared with traditional shallow models, deep neural networks effectively improve recognition performance and reveal the critical frequency bands and brain regions for emotion recognition, which in turn yields electrode configurations with fewer electrodes for practical applications. Through repeated experiments across subjects and over time, we revealed stable neural patterns for three emotions (happy, sad, and neutral). We found that happiness elicits stronger beta- and gamma-band EEG responses over temporal regions; the neural patterns of neutral and sad emotions are relatively similar, with neutral emotion showing stronger alpha-band responses over parietal and occipital regions, while sadness shows stronger delta-band responses over parietal and occipital regions together with stronger gamma-band responses over prefrontal regions. 2) We proposed using EEG and eye movement...
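To make the EEG feature pipeline in 1) more concrete, here is a minimal Python sketch of extracting band-wise differential entropy (DE) features, a feature type commonly used with the SEED dataset, under a Gaussian assumption for band-limited signals. The band limits, sampling rate, channel count, and function names are illustrative assumptions, not the thesis's exact implementation.

```python
# Hedged sketch: band-wise differential entropy (DE) features from EEG.
# Assumes band-limited signals are approximately Gaussian, so
# DE = 0.5 * ln(2 * pi * e * variance). Band limits and fs are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {            # conventional EEG bands (Hz); exact limits vary by study
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 14),
    "beta":  (14, 31),
    "gamma": (31, 50),
}

def differential_entropy(x, fs, low, high, order=4):
    """DE of one channel in one band, via band-pass filtering."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(filtered))

def extract_de_features(eeg, fs=200):
    """eeg: (n_channels, n_samples) array -> (n_channels * n_bands,) vector."""
    feats = [differential_entropy(ch, fs, lo, hi)
             for ch in eeg for (lo, hi) in BANDS.values()]
    return np.asarray(feats)

if __name__ == "__main__":
    # Toy usage: random data standing in for a 1-second, 62-channel segment.
    rng = np.random.default_rng(0)
    segment = rng.standard_normal((62, 200))   # 62 channels at fs = 200 Hz
    print(extract_de_features(segment).shape)  # (310,) = 62 channels x 5 bands
```

The resulting per-segment feature vectors would then feed a classifier (a shallow baseline or a deep network, as the abstract contrasts); feature smoothing and dimensionality reduction, as listed in Sections 3.3.2 and 3.3.3, would sit between extraction and classification.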

Source: Shanghai Jiao Tong University (Shanghai; Project 211 and Project 985 university, directly administered by the Ministry of Education)

Pages: 226

Degree: Doctoral (PhD)

Table of Contents:
Abstract (in Chinese)
ABSTRACT
Chapter 1 Introduction
    1.1 Motivation
    1.2 Contributions
    1.3 Thesis Overview
Chapter 2 Research Background
    2.1 Emotion Definition and Emotion Models
        2.1.1 Discrete Model
        2.1.2 Continuous Model
    2.2 Brain Mechanism of Emotion
    2.3 Electroencephalography
    2.4 Emotion Elicitation and Emotion Experiment
    2.5 Emotion Recognition
        2.5.1 EEG-based Emotion Recognition
        2.5.2 Multimodal Emotion Recognition
        2.5.3 Public Emotion EEG Datasets
    2.6 Driving Fatigue and Vigilance Estimation
    2.7 Summary
Chapter 3 Experimental Setups
    3.1 SJTU Emotion EEG Dataset (SEED) for Three Emotions
        3.1.1 Emotion Stimuli
        3.1.2 Subjects
        3.1.3 Experiment Protocol
    3.2 SJTU Emotion EEG Dataset (SEED-IV) for Four Emotions
        3.2.1 Emotion Stimuli
        3.2.2 Subjects
        3.2.3 Experiment Protocol
    3.3 Data Processing for Multimodal Emotion Recognition
        3.3.1 Feature Extraction for EEG
        3.3.2 Feature Smoothing for EEG
        3.3.3 Dimensionality Reduction for EEG
        3.3.4 Feature Extraction for Eye Movements
    3.4 Multimodal Vigilance Estimation Dataset (SEED-VIG)
        3.4.1 Experimental Setup
        3.4.2 Vigilance Annotation
    3.5 Wearable Device for Vigilance Estimation
        3.5.1 Flexible Dry Electrodes
        3.5.2 EOG Acquisition Board
        3.5.3 Laboratory Driving Simulations
        3.5.4 Real-World Driving Experiments
    3.6 Summary
Chapter 4 EEG-based Emotion Recognition
    4.1 EEG-based Emotion Classification Using Deep Neural Networks
        4.1.1 Introduction
        4.1.2 Deep Belief Networks
        4.1.3 Classifier Training
        4.1.4 Classification Performance
        4.1.5 Critical Frequency Bands and Channels
        4.1.6 Electrode Reduction
    4.2 Stable EEG Patterns over Time for Emotion Recognition
        4.2.1 Introduction
        4.2.2 Discriminative Graph Regularized Extreme Learning Machine
        4.2.3 Experiment Results on DEAP Data
        4.2.4 Experiment Results on SEED Data
        4.2.5 Neural Signatures and Stable Patterns
        4.2.6 Stability of the Emotion Recognition Model over Time
    4.3 Summary
Chapter 5 Multimodal Emotion Recognition with EEG and Eye Movements
    5.1 Introduction
    5.2 Multimodal Deep Learning
    5.3 Modality Fusion Methods
    5.4 Experimental Results on SEED for Three Emotions
        5.4.1 Eye Movement-Based Emotion Recognition
        5.4.2 Performance of Modality Fusion
        5.4.3 Analysis of Complementary Characteristics
    5.5 Experimental Results on SEED-IV for Four Emotions
        5.5.1 EEG-Based Emotion Recognition
        5.5.2 Analysis of Modality Fusion and Complementary Characteristics
        5.5.3 Analysis of Stability Across Sessions
    5.6 Summary
Chapter 6 Personalizing Affective Models with Transfer Learning
    6.1 Introduction
    6.2 Transfer Learning
        6.2.1 Transfer Component Analysis
        6.2.2 Kernel Principal Component Analysis
        6.2.3 Transductive Parameter Transfer
    6.3 Experiment Setup
    6.4 Experiment Results
    6.5 Heterogeneous Knowledge Transfer From Eye Tracking To EEG
        6.5.1 Introduction
        6.5.2 Spatiotemporal Scanpath Analysis
        6.5.3 Heterogeneous Transfer Learning
        6.5.4 Evaluation Details
        6.5.5 Experimental Results
    6.6 Summary
Chapter 7 Multimodal Vigilance Estimation: From Simulated To Real Scenarios
    7.1 Introduction
    7.2 Feature Extraction
        7.2.1 Preprocessing for Forehead EOG
        7.2.2 Feature Extraction for Forehead EOG
        7.2.3 Forehead EEG Signal Extraction
        7.2.4 Feature Extraction from EEG
    7.3 Incorporating Temporal Dependency into Vigilance Estimation
    7.4 Evaluation Metrics
    7.5 Experimental Results on SEED-VIG
        7.5.1 Forehead EOG-based Vigilance Estimation
        7.5.2 EEG-based Vigilance Estimation
        7.5.3 Modality Fusion with Temporal Dependency
        7.5.4 Complementary Characteristics
    7.6 Experimental Setups with Wearable Device
    7.7 Experimental Results on Wearable Device
        7.7.1 Laboratory Driving Simulations
        7.7.2 Real-World Driving Experiments
    7.8 Discussion
    7.9 Summary
Chapter 8 Conclusions and Future Work
    8.1 Summary of Contributions
    8.2 Future Work
References
Acknowledgements
Publications
Project Participation
List of Patents
Resume



Article No.: 3246997


Link: https://www.wllwen.com/kejilunwen/zidonghuakongzhilunwen/3246997.html

