A Brain-Inspired Simultaneous Localization and Mapping System for 3D Environments

Published: 2021-10-21 23:58
  As robots are deployed across ever wider land, sea, air, and space applications, the autonomy, robustness, and intelligence of robotic systems face enormous new challenges, and achieving true robot autonomy has become a frontier research topic in the field. Three-dimensional simultaneous localization and mapping (SLAM), one of the key technologies for robot autonomy, still faces many challenges in practice. In particular, in unknown and complex 3D environments, onboard resources such as sensors, computation, storage, power, weight, and size are severely constrained, sensor signals are easily disturbed, and external sources such as GPS cannot be relied upon. As a result, existing 3D SLAM techniques suffer from high energy consumption, high computational cost, poor environmental adaptability, and low levels of intelligence, and have become a bottleneck limiting the application of mobile robots. Developing a new class of intelligent 3D SLAM technology with extremely low energy consumption, extremely high efficiency, and extremely strong robustness is therefore a major problem that urgently needs to be solved. In nature, however, humans and animals possess remarkable 3D navigation abilities. For example, using only its eyes, ears, and a tiny brain, and without any high-precision map, a bat can navigate intelligently and effortlessly through complex, dynamic 3D environments, consuming very little energy while achieving very high efficiency and robustness. How, then, does the brain perform intelligent 3D navigation? In recent years, neuroscientists have gradually discovered a "3D map" and a "3D compass" in the brain, composed of 3D place cells, 3D head-direction cells, 3D grid cells, and related spatial cells, progressively revealing the brain's 3D...
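The abstract refers to the brain's "3D map" and "3D compass" built from 3D place cells, 3D head-direction cells, and 3D grid cells. As a minimal illustrative sketch only, and not the dissertation's actual models (those are developed in Chapters 4-6), the Python snippet below idealizes a 3D place cell as a Gaussian tuning curve over 3D position and a head-direction cell as a von Mises-style tuning curve over azimuth; all function names and parameter values are assumptions made for illustration.

```python
import numpy as np

def place_cell_rate(pos, center, sigma=0.3, r_max=20.0):
    """Idealized 3D place cell: firing rate falls off as a Gaussian of the
    distance between the agent's 3D position and the cell's preferred
    location (the place-field centre). Units: metres, Hz (illustrative)."""
    d2 = np.sum((np.asarray(pos) - np.asarray(center)) ** 2)
    return r_max * np.exp(-d2 / (2.0 * sigma ** 2))

def head_direction_rate(azimuth, preferred, kappa=4.0, r_max=30.0):
    """Idealized head-direction cell: von Mises (circular Gaussian) tuning
    around the cell's preferred azimuth, both given in radians."""
    return r_max * np.exp(kappa * (np.cos(azimuth - preferred) - 1.0))

if __name__ == "__main__":
    # Firing rate at the place-field centre vs. half a metre away.
    print(place_cell_rate([1.0, 2.0, 0.5], [1.0, 2.0, 0.5]))  # ~20 Hz
    print(place_cell_rate([1.5, 2.0, 0.5], [1.0, 2.0, 0.5]))  # much lower
    # Head-direction tuning at and away from the preferred direction.
    print(head_direction_rate(np.pi / 2, np.pi / 2))          # ~30 Hz
    print(head_direction_rate(0.0, np.pi / 2))                # lower
```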

【Source】: China University of Geosciences (Hubei Province), a Project 211 university directly administered by the Ministry of Education

【Pages】: 221

【Degree】: Doctoral

【Table of Contents】:
Author's Curriculum Vitae
Abstract (in Chinese)
Abstract
List of Abbreviations
Chapter 1 Introduction
    1.1 Motivation
    1.2 Research Problems
    1.3 Research Contents
    1.4 Contributions
    1.5 Dissertation Organization
Chapter 2 Literature Review
    2.1 Conception of 3D SLAM and Navigation
        2.1.1 SLAM for 3D Environments
        2.1.2 Navigation for 3D Environments
        2.1.3 Navigation Modes
        2.1.4 Classification of 3D SLAM and Navigation Approaches
    2.2 Conventional 3D SLAM
    2.3 Neural Basis of 3D Navigation
        2.3.1 Neural Representation of 3D Space
        2.3.2 Computational Model of 3D Spatial Cells
    2.4 Bio-inspired SLAM
        2.4.1 Behaviour Strategy-inspired SLAM
        2.4.2 Brain-inspired SLAM
    2.5 Summary
Chapter 3 System Overview
    3.1 The Framework and Components
        3.1.1 3D Pose Representation
        3.1.2 3D Path Integration
        3.1.3 3D Spatial Experience Map
        3.1.4 3D Vision Perception
    3.2 The Software Architecture
    3.3 The Working Process
    3.4 Summary
Chapter 4 3D Pose Representation based on 3D Place and Head Direction Cells
    4.1 Requirements of Robot's Pose Representation in 3D Space
    4.2 Biological Properties of 3D Place Cells and 3D Head Direction Cells
    4.3 3D Place Cell Model
        4.3.1 Conceptual Model of 3D Place Cells
        4.3.2 Computational Model of 3D Place Cells
    4.4 3D Head Direction Cell Model
        4.4.1 Multilayered HD Cell Model for 4DoF Pose Representation
        4.4.2 Torus 3D HD Cell Model for 5DoF Pose Representation
        4.4.3 Conjunctive 3D HD Cell Models for 6DoF Pose Representation
    4.5 Experiments
        4.5.1 Experiments of 4DoF Pose Representation
        4.5.2 Experiments of 5DoF Pose Representation
        4.5.3 Experiments of 6DoF Pose Representation
    4.6 Summary
Chapter 5 3D Path Integration based on 3D Grid Cells
    5.1 Requirements of Robot's Path Integration in 3D Space
    5.2 Biological Properties of 3D Grid Cells
    5.3 Conceptual Models of 3D Grid Cells
        5.3.1 Cube 3D GC Model
        5.3.2 Conjunctive Cube 3D GC Model
        5.3.3 Conjunctive Lie Group 3D GC Model
    5.4 Computational Model of 3D Grid Cells
        5.4.1 3D Path Integration
        5.4.2 3D Pose Calibration
    5.5 Experiments
        5.5.1 Experiments of 3D Path Integration
        5.5.2 Experiments of 3D Pose Calibration
    5.6 Summary
Chapter 6 3D Spatial Experience Map
    6.1 Requirements of Robot's 3D Spatial Experience Representation
    6.2 4DoF Pose Experience Map
        6.2.1 Encoding of 4DoF Pose Experiences
        6.2.2 Creation of 4DoF Pose Experiences
        6.2.3 Update of 4DoF Pose Experience Map
    6.3 5DoF Pose Experience Map
        6.3.1 Encoding of 5DoF Pose Experiences
        6.3.2 Creation of 5DoF Pose Experiences
        6.3.3 Update of 5DoF Pose Experience Map
    6.4 6DoF Pose Experience Map
        6.4.1 Encoding of 6DoF Pose Experiences
        6.4.2 Creation of 6DoF Pose Experiences
        6.4.3 Update of 6DoF Pose Experience Map
    6.5 Experiments
        6.5.1 Experiments of 4DoF Pose Experience Mapping
        6.5.2 Experiments of 5DoF Pose Experience Mapping
        6.5.3 Experiments of 6DoF Pose Experience Mapping
    6.6 Summary
Chapter 7 3D Vision Perception
    7.1 Overview of 3D Vision Perception
    7.2 Image Processing
    7.3 3D Visual Odometry for Estimating Self-motion Cues
        7.3.1 4DoF Pose Estimation
        7.3.2 5DoF and 6DoF Pose Estimation
    7.4 Visual Template for Estimating External Cues
        7.4.1 Overview of Local View Processing
        7.4.2 Visual Template Learning and Recall
        7.4.3 Local View Cell Calculation
    7.5 Experiments
        7.5.1 Experiments of Image Processing
        7.5.2 Experiments of Self-motion Cues Estimation
        7.5.3 Experiments of Visual Template Learning
    7.6 Summary
Chapter 8 Performance Evaluation
    8.1 Experimental Setup
        8.1.1 Datasets
        8.1.2 Parameters
    8.2 Evaluation Metrics
        8.2.1 Geometric Accuracy
        8.2.2 Topological Consistency
    8.3 Results
        8.3.1 3D Spatial Experience Mapping
        8.3.2 Snapshots of the 3D Navigational Spatial Cells
        8.3.3 Visual Template Learning and Recall
        8.3.4 3D Visual Odometry
    8.4 Comparison with State-of-the-art 3D SLAM
        8.4.1 Comparison with ORB-SLAM and LDSO
        8.4.2 Performance Test by Integrating Visual Inertial Odometry
    8.5 Summary
Chapter 9 Conclusion
    9.1 Discussion and Summary
    9.2 Future Work
Acknowledgements
Acknowledgements (in Chinese)
References
Appendix
    A. Code and Datasets
    B. Videos of Experiments
    C. Parameters



