Biologically Inspired Robots
Michael Milford, David Prasser, and Gordon Wyeth
"RatSLAM on the Edge: Revealing a Coherent Representation from an Overloaded Rat Brain"
Presented by Folami Alamudun, Graduate Student, Computer Science & Engineering, Texas A&M University

OUTLINE
- Overview
- RatSLAM
- Experience Mapping
- Goal Recall Using Experience Maps
- Experiment Results
- Discussion

OVERVIEW
For a robot to navigate intelligently:
- It must possess a means of acquiring and storing information about past experiences; and
- It must be able to make decisions based on this information.

What is SLAM? Simultaneous Localization and Mapping:
- Determine the state of the world: what does the world look like?
- Determine location in the observed world: where in the world am I?

ratSLAM
Why are we SLAM-ing?
- Maps are used to depict the environment for an overview and to determine location within the perceived environment.
- Localizing and mapping under conditions of error and noise is very complex.
- Simultaneous localization and mapping (SLAM) binds these two processes together, so that each supports the other rather than running in isolation.
- Iterative feedback from one process to the other enhances the results of both.

RatSLAM is inspired by computational models of the hippocampus in rodents:
- The hippocampus is a part of the brain that plays an important role in long-term memory and spatial navigation.
- Neurons in the rat and mouse hippocampus respond as place cells.
- A place cell exhibits a high rate of firing whenever the animal is in the location of the environment corresponding to that cell's "place field".
- Place fields are patterns of neural activity that correspond to locations in space.
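The place-cell idea above can be illustrated as a simple tuning curve: firing rate peaks at the cell's place-field centre and decays with distance. The Gaussian tuning shape and all parameter values here are common modelling assumptions, not taken from the slides or the paper.

```python
import numpy as np

# Toy model of a single place cell: the firing rate is maximal when the
# animal sits at the cell's place-field centre and falls off with distance.
# Gaussian tuning, the field width, and the peak rate are all illustrative.
def place_cell_rate(pos, field_centre, width=0.2, peak_rate=20.0):
    d2 = np.sum((np.asarray(pos) - np.asarray(field_centre)) ** 2)
    return peak_rate * np.exp(-d2 / (2 * width ** 2))
```

At the field centre the cell fires at its peak rate; a metre away (with this narrow field width) it is essentially silent, which is the sharp spatial selectivity RatSLAM borrows from.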
ratSLAM
RatSLAM is an implementation of a hippocampal model for robot control. Its aims are:
- To provide a new and effective method for the mobile robot problem of SLAM; and
- To reproduce a high-level brain function in a robot, in order to increase our understanding of memory and learning in mammals.
It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment.

Architecture for RatSLAM: local view and pose cell arrangement for artificial landmarks.

ratSLAM - Local View
The local view (LV) is a representation processed from camera images:
- It calibrates the robot's state information.
- It is stored and associated with the currently active pose cells.
- If the current visual scene is familiar, it causes activity to be injected into the pose cells associated with the currently active LV cells.
- Earlier representations of this kind did not scale to large environments.

ratSLAM - Pose Cells
The pose cells form a three-dimensional competitive attractor neural network:
- Each axis of the structure corresponds to one of the three state variables of a ground-based robot: x′, y′, and θ′.
- The network combines the characteristics of place cells and head-direction cells.
- An individual cell represents a particular robot location and orientation.

ratSLAM - How It Works
- Wheel encoder information is used to perform path integration by shifting the current pose cell activity.
- Vision information is converted into a local view.
- The local view cell is associated with the currently active pose cells.
- If the scene is familiar, activity is injected into the pose cells associated with the currently active local view cells.
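The path-integration and injection steps above can be sketched with a small NumPy model: a 3-D activity volume that shifts with odometry (wrapping at the edges, as a pose cell matrix does) and receives injected energy from familiar local views. The class, its sizes, and the single-packet initialization are illustrative, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of a 3-D pose cell matrix over (x', y', theta').
# The 40 x 20 x 36 default matches the matrix size quoted in the slides;
# everything else (method names, energy values) is a hypothetical sketch.
class PoseCells:
    def __init__(self, nx=40, ny=20, nth=36):
        self.P = np.zeros((nx, ny, nth))
        self.P[nx // 2, ny // 2, 0] = 1.0  # start with one activity packet

    def path_integrate(self, dx, dy, dth):
        # Shift the whole activity volume by the odometric update; the
        # matrix wraps around in every dimension, which is what makes
        # collisions unavoidable in large environments.
        self.P = np.roll(self.P, shift=(dx, dy, dth), axis=(0, 1, 2))

    def inject(self, x, y, th, energy=0.1):
        # A familiar local view injects activity into its linked cells.
        self.P[x, y, th] += energy

    def dominant_packet(self):
        # The most active cell approximates the current pose estimate.
        return np.unravel_index(np.argmax(self.P), self.P.shape)
```

A strong enough injection can pull the dominant packet away from the path-integrated estimate, which is exactly how vision-driven loop closure relocalizes the robot.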
ratSLAM - Initial Experiments
- The first test environment was a two-by-two-metre arena.
- Floor plan and robot trajectory for the initial goal navigation experiments: the numerical labels indicate the two goal locations.
- The temporal map cells after recall of the first goal: darker areas correspond to lower cell activity levels, and hence to locations close in time to the goal.
- The path the robot followed to reach the first goal: each grid square represents 4 × 4 pose cells in the (x′, y′) plane.

ratSLAM - Hashing Collisions in the Pose Cells
- Vision information starts to cause more frequent loop closures, leading to discontinuities in the pose cell matrix.
- Multiple representations arise for the same physical areas in the environment.
- Clusters of pose cells become associated with more than one pose.
- Hashing collisions within the pose cells are unavoidable when the environment is larger than the pose cell matrix can represent uniquely.

ratSLAM - Large Indoor Environment
- Floor plan of the large indoor environment: the robot's path is shown by the thick line.
- Dominant packet path for a 40 × 20 × 36 pose cell matrix, projected onto the (x′, y′) plane; each grid square represents 4 × 4 pose cells.
- Temporal map for the large indoor environment.

EXPERIENCE MAPPING
The experience mapping algorithm creates and maintains a collection of experiences and inter-experience links. This produces a spatially continuous map, free of collisions, from the messy representations found in the pose cells.
It does this by combining information from the pose cells with the local view cells and the robot's current behavior.

EXPERIENCE MAPPING - Co-ordinate Space
An experience is associated with certain pose and local view cells, but exists within the experience map's own (x, y, θ) co-ordinate space.

EXPERIENCE MAPPING - The Algorithm
- The algorithm uses output from the pose cells and local view cells to create an experience map: a graph-like map containing nodes (experiences) and links between experiences.
- Each node represents a snapshot of the activity within the pose cells and local view cells.
- New experience nodes are created as needed: when the existing experiences are insufficient to describe the activity state of the pose and local view cells, a new experience is created.
- In effect, an experience is the robot's final representation of a distinct place in the environment, along with information about what that place looks like and other behavioral and temporal information.

EXPERIENCE MAPPING - Experience Generation
- Each experience has its own state (x′, y′, θ′, V), where x′, y′, and θ′ are the three state variables and V describes the visual scene associated with the experience.
- Output from the pose cells and local view cells is used to create a map made up of robot experiences.
- Inter-experience links store temporal, behavioral, and odometric information about the robot's movement between experiences, producing a spatially continuous world representation.

EXPERIENCE MAPPING - Zone of Influence
An experience's activity depends on how close the activity peaks in the pose cells and local view cells are to the cells associated with that experience:
- x′_PC, y′_PC, and θ′_PC are the coordinates of the dominant activity packet;
- x′_i, y′_i, and θ′_i are the coordinates of the associated experience i;
- r_a is the zone constant for the (x′, y′) plane, and θ_a is the zone constant for the θ′ dimension.

EXPERIENCE MAPPING - Visual Scene Energy
V is the current visual scene.
V_i is the visual scene associated with experience i, and E^V is the visual scene energy component.

EXPERIENCE MAPPING - Total Energy Level
The total energy level of experience i is:
E_i = E^V × (E^{x′y′} + E^{θ′})

EXPERIENCE MAPPING - Learning New Experiences
- As the robot moves around a novel environment, it must generate experiences to form a representation of the world.
- Learning of new experiences is triggered not only by exploring new areas of an environment, but also by visual changes in areas the robot has already explored.

GOAL RECOLLECTION
Experience transitions:
- Transitions represent the physical movement of the robot in the world as it moves from one experience to another.
- Each transition is described by the variables θ_ij, φ_ij, and d_ij.
- dp_ij is a vector describing the position and orientation of experience j relative to experience i.
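The zone-of-influence and total-energy description above can be sketched in a few lines. The slides define the variables (dominant packet coordinates, experience coordinates, zone constants r_a and θ_a) and the combination rule E_i = E^V × (E^{x′y′} + E^{θ′}), but not the falloff equations, so the linear falloffs, the zone constants, and the new-experience threshold below are all assumptions.

```python
import numpy as np

# Hypothetical zone energies: linear falloff with distance from the
# dominant activity packet, clipped at zero outside the zone.
def zone_energies(packet, exp_pose, r_a=5.0, theta_a=3.0):
    xp, yp, thp = packet
    xi, yi, thi = exp_pose
    e_xy = max(0.0, 1.0 - np.hypot(xp - xi, yp - yi) / r_a)
    e_th = max(0.0, 1.0 - abs(thp - thi) / theta_a)
    return e_xy, e_th

def total_energy(e_v, e_xy, e_th):
    # E_i = E^V x (E^{x'y'} + E^{theta'}): a zero visual match zeroes E_i.
    return e_v * (e_xy + e_th)

def select_or_create(experiences, packet, e_v, threshold=1.2):
    # Reactivate the best-matching experience, or create a new one when
    # no existing experience describes the current activity state well.
    best_i, best_e = None, 0.0
    for i, pose in enumerate(experiences):
        e = total_energy(e_v, *zone_energies(packet, pose))
        if e > best_e:
            best_i, best_e = i, e
    if best_e < threshold:
        experiences.append(packet)
        return len(experiences) - 1
    return best_i
```

The multiplicative E^V term is why visual change alone can trigger new experiences in already-explored areas: even with a perfect pose match, a poor visual match drives the total energy below the creation threshold.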
GOAL RECOLLECTION - Map Correction
Discrepancies between a transition's odometric information and the linked experiences' co-ordinates are minimized through a process of map correction.

EXPERIMENTAL RESULTS
[Figures]

DISCUSSION
- Experience maps are localized: Cartesian properties are not guaranteed beyond the local area. For instance, straight corridors may appear slightly curved in the experience map.
- The robot has no information linking the two directions of travel apart from odometric information.
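The map correction process described under Goal Recollection can be sketched as an iterative relaxation over the experience graph: each linked pair of experiences is nudged until their relative position agrees with the stored odometric transition d_ij. The update rule, the correction rate, and the 2-D simplification are illustrative; the paper's actual correction equations are not reproduced in the slides.

```python
import numpy as np

# Illustrative experience map correction (2-D positions only).
# positions: {experience_id: np.array([x, y])}
# links: [(i, j, d_ij)] where d_ij is the odometric offset of j from i.
def correct_map(positions, links, alpha=0.5, iterations=20):
    for _ in range(iterations):
        for i, j, d_ij in links:
            # Discrepancy between the map layout and the odometry.
            err = (positions[i] + d_ij) - positions[j]
            # Split the correction between the two linked experiences.
            positions[i] -= alpha * 0.5 * err
            positions[j] += alpha * 0.5 * err
    return positions
```

Repeated small corrections of this kind are what pull loop-closing links into agreement while only locally preserving Cartesian structure, which is consistent with the discussion point that long corridors may end up slightly curved.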