
The Influence of Attention on Audiovisual Integration Processing

[Abstract]: Multisensory integration refers to the phenomenon in which information from different sensory modalities (vision, audition, touch, etc.), presented at the same time and location, is effectively combined by the perceiver into a unified, coherent percept. By merging information across modalities, multisensory integration reduces noise in the perceptual system and supports better perception, which behaviorally appears as faster and more accurate responses to simultaneously presented multimodal information. Previous research has termed this processing advantage for bimodal information the redundant signal effect. A review of the literature shows that early studies focused mainly on the properties and mechanisms of multisensory integration itself; since vision and audition are the dominant modalities, the characteristics of audiovisual integration have been a central concern. In recent years, interest has shifted to the relationship between attention and audiovisual integration. Most existing studies, however, have asked only whether attended and unattended conditions differ in their effect on audiovisual integration, neglecting the fact that attention can be flexibly directed not only to locations and objects but also to sensory modalities. The present research therefore examined how attention directed to different sensory modalities affects audiovisual integration, and how the amount of attentional resources and the rhythmic fluctuation of attention modulate this effect. The dissertation comprises three studies with six experiments.

Study 1 used cue stimuli to direct participants' attention to different modalities (attend vision only, attend audition only, or attend both vision and audition) and examined whether attention directed to different modalities affects audiovisual integration differently. Experiment 1 used simple pictures and brief tones; participants made keypress responses to the targets indicated by the cue. Only when attending to both vision and audition (divided attention) did participants respond fastest to audiovisual targets, i.e., a redundant signal effect emerged; no redundant signal effect was found when attending to vision only or to audition only (selective attention). To test whether the redundant signal effect arose from integration of the visual and auditory components of the bimodal targets, the race model analysis was applied to the cumulative distribution functions of the response times. The analysis showed that the redundant signal effect in Experiment 1 did originate from integration of the visual and auditory components of the audiovisual targets; that is, audiovisual integration occurred only under divided attention. Experiment 2 modified the materials of Experiment 1, using spoken Chinese monosyllabic words as auditory stimuli that were either semantically congruent or incongruent with the visual pictures, to examine whether attention directed to different modalities affects audiovisual speech integration differently. Under selective attention, neither congruent nor incongruent audiovisual targets produced a redundant signal effect. Under divided attention, participants responded fastest to semantically congruent audiovisual targets, i.e., a redundant signal effect appeared for congruent targets, whereas incongruent targets showed no processing advantage. Race model analysis showed that the effect originated from integration of the visual and auditory components of the congruent targets. Thus, only under divided attention were semantically congruent audiovisual targets integrated; incongruent targets were not, and under selective attention no integration occurred regardless of semantic congruency.

Study 2 built on Study 1 and examined the effect of attentional load on audiovisual speech integration under divided attention, with Experiment 3 addressing visual load and Experiment 4 auditory load. Experiment 3 found a redundant signal effect only when there was no visual load, and none under visual load; race model analysis showed that the effect came from integration of the visual and auditory components of the semantically congruent targets. Thus, even under divided attention, audiovisual integration is modulated by visual attentional load: integration occurred without visual load but not with it. Experiment 4 found that participants responded fastest to audiovisual targets regardless of whether an auditory load was presented, i.e., redundant signal effects occurred both with and without auditory load, and race model analysis confirmed audiovisual integration in both conditions. Audiovisual integration under divided attention is therefore not modulated by auditory load. Taken together, Study 2 shows that visual and auditory attentional load affect audiovisual speech integration asymmetrically.

Study 3 used rhythmic audiovisual cues and, from the perspective of dynamic attending theory, examined how the rhythmic fluctuation of attention under divided attention affects audiovisual speech integration and its neural mechanisms. Experiment 5 compared three conditions: targets in synchrony with the audiovisual rhythm (on-beat), out of synchrony (off-beat), and no audiovisual rhythm (silence). Only in the on-beat condition did participants respond fastest to audiovisual targets, i.e., a redundant signal effect; the other two conditions showed none. Race model analysis showed that the effect came from integration of the visual and auditory components of the targets. In other words, audiovisual integration occurred only when the bimodal target fell on a peak of the attentional rhythm; in the other conditions, even though the visual and auditory components were presented simultaneously, they were not integrated. Experiment 6 built on Experiment 5 with on-beat and silence conditions and used event-related potentials (ERPs), which offer high temporal resolution, to examine the time course and neural mechanisms by which attentional fluctuation influences audiovisual speech integration. For unimodal auditory targets, N1 amplitude at frontal and central sites was significantly larger on-beat than in silence, as was N1 amplitude at midline and right-hemisphere electrodes; at Pz and P3, P2 amplitude was significantly larger on-beat than in silence. For unimodal visual targets, N1 amplitude over anterior and occipital scalp was significantly larger on-beat than in silence, and P2 amplitude at frontal electrodes was larger on-beat than in silence. An N1 attention effect thus appeared for both visual and auditory targets, with larger amplitudes on-beat than in silence, indicating that participants could direct attention to the targets more effectively in the on-beat condition. Following EEG analysis approaches used in previous audiovisual integration research, ERP amplitudes within the 0-500 ms window were analyzed in 20 ms bins: for each bin, the ERPs evoked by unimodal auditory and visual targets were summed (A+V) and compared with the ERP evoked by audiovisual targets (AV). Only in the on-beat condition did a superadditive effect (AV greater than A+V) appear, over right anterior scalp at 121-140 ms and over anterior central scalp at 141-160 ms. This indicates that, under the present conditions, audiovisual integration arises only when the target falls on an attentional peak, and that the influence of the attentional peak is not sustained but emerges 121-160 ms after target onset.

In summary, attention influences audiovisual integration: integration occurs only when attention is directed to both the visual and auditory modalities, an effect found for both simple stimuli and audiovisual speech. Second, under divided attention, visual and auditory attentional load affect audiovisual speech integration asymmetrically: integration is abolished under visual load but preserved under auditory load. Finally, under divided attention, the rhythmic fluctuation of attention also matters: integration occurs only when the target falls on an attentional peak, and this influence is not sustained but appears 121-160 ms after target onset.
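A note on the race model analysis referred to above: in most redundant-signal studies this analysis tests Miller's (1982) race model inequality on the cumulative response-time distributions; whether the dissertation uses exactly this formulation is an assumption here, but the standard form of the inequality is

    P(RT ≤ t | AV)  ≤  P(RT ≤ t | A) + P(RT ≤ t | V)    for every t.

A violation of this bound (the audiovisual distribution exceeding the sum of the two unimodal distributions at some t) cannot be produced by a race between independent unimodal processes and is therefore taken as evidence of genuine coactivation, i.e., integration of the visual and auditory components.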
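As an illustration of the AV versus A+V comparison described for Experiment 6 (a minimal sketch, not the dissertation's actual analysis code), the 20 ms binning and superadditivity test could be computed in Python roughly as follows, assuming hypothetical arrays erp_av, erp_a and erp_v of shape (subjects, timepoints) holding per-subject ERP averages for the 0-500 ms epoch at an assumed 500 Hz sampling rate:

import numpy as np
from scipy import stats

SRATE = 500                                  # assumed sampling rate in Hz (2 ms per sample)
BIN_MS = 20                                  # bin width named in the abstract
SAMPLES_PER_BIN = BIN_MS * SRATE // 1000     # 10 samples per 20 ms bin

def bin_means(erp):
    """Mean amplitude in consecutive 20 ms bins; erp has shape (subjects, timepoints)."""
    n_sub, n_time = erp.shape
    n_bins = n_time // SAMPLES_PER_BIN
    trimmed = erp[:, :n_bins * SAMPLES_PER_BIN]
    return trimmed.reshape(n_sub, n_bins, SAMPLES_PER_BIN).mean(axis=2)

def superadditive_bins(erp_av, erp_a, erp_v, alpha=0.05):
    """Paired t-test of AV vs. summed A+V in each bin; returns (start_ms, end_ms) of bins
    where AV is reliably larger than A+V. The comparison direction may need to be
    reversed for negative-going components such as N1."""
    av = bin_means(erp_av)
    summed = bin_means(erp_a) + bin_means(erp_v)
    t_vals, p_vals = stats.ttest_rel(av, summed, axis=0)
    return [(i * BIN_MS, (i + 1) * BIN_MS)
            for i in range(av.shape[1])
            if p_vals[i] < alpha and av[:, i].mean() > summed[:, i].mean()]

With a 0-500 ms epoch at 500 Hz this yields 25 bins per electrode. The dissertation's bins appear to start at 1 ms (e.g., 121-140 ms), so the exact bin edges above are an assumption for illustration only.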
[Degree-granting institution]: Tianjin Normal University
[Degree level]: Doctoral
[Year conferred]: 2016
[CLC number]: B842.3
