A Deep Learning-Based Human-Machine Interaction System for UAVs
Published: 2018-11-13 07:59
【Abstract】: Current unmanned aerial vehicle (UAV) control relies mainly on professional equipment operated by trained personnel. To make human-robot interaction (HRI) more convenient, this paper proposes a gesture-based UAV control method built on binocular vision and deep learning. First, a depth map is extracted by binocular vision; the region containing the person is tracked, and a threshold is applied to separate the person from the background, yielding a depth map that contains only the person. Next, the depth-map sequence is processed and superposed, converting the video into a color texture image that encodes both temporal and spatial information. The color texture images are then trained and recognized with the deep-learning framework Caffe, and UAV control commands are generated from the recognition results. The proposed method works both indoors and outdoors with an effective range of 10 m. It simplifies UAV control and is of practical significance for popularizing UAVs and broadening their range of applications.
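The core preprocessing step described in the abstract, thresholding each depth frame to isolate the person and then superposing the sequence into a single color texture image carrying both spatial and temporal information, can be sketched as below. The specific encoding (splitting the sequence into three temporal thirds, one per RGB channel) and the normalization are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def depth_sequence_to_texture(frames, person_thresh):
    """Collapse a sequence of depth frames into one RGB 'texture' image.

    frames: iterable of 2-D depth maps (same shape), nearer = smaller value.
    person_thresh: depths below this are kept as the person; the rest
    (background) is zeroed out, as in the thresholding step of the paper.

    Assumed encoding: the sequence is split into three temporal thirds and
    each third is collapsed (per-pixel max) into one color channel, so the
    resulting image encodes where the person was early, mid, and late in
    the clip.
    """
    frames = np.asarray(frames, dtype=np.float32)
    # Segment the person: keep depths below the threshold, zero the background.
    masked = np.where(frames < person_thresh, frames, 0.0)
    # One channel per temporal third of the sequence.
    thirds = np.array_split(masked, 3, axis=0)
    channels = [chunk.max(axis=0) for chunk in thirds]
    img = np.stack(channels, axis=-1)
    # Normalize to 0..255 for use as an ordinary 8-bit color image.
    peak = img.max()
    if peak > 0:
        img = img / peak * 255.0
    return img.astype(np.uint8)
```

A classifier (the paper uses Caffe) would then be trained on these texture images, one gesture class per UAV command.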
【Affiliations】: School of Electrical and Information Engineering, Tianjin University; Tianjin Key Laboratory of Advanced Electrical Engineering and Energy Technology; School of Electrical Engineering and Automation, Tianjin Polytechnic University
【Funding】: National Natural Science Foundation of China (61571325); Tianjin Science and Technology Support Program key projects (15ZCZDGX00190, 16ZXHLGX00190)
【CLC Number】: TP391.41; V279
Article ID: 2328494
【Similar Literature】
Related journal articles (2):
1. Xiu Jihong; Li Jun; Huang Pu. Design and implementation of a human-machine interaction system for an aerial survey camera [J]. Chinese Journal of Liquid Crystals and Displays, 2011(04).
2. Zhang Liping; Xie Shuanqin. Research and design of human-machine interaction in an aircraft power supply system [J]. Computer Measurement & Control, 2006(03).
Link: https://www.wllwen.com/kejilunwen/ruanjiangongchenglunwen/2328494.html