Building spatial maps from visual and self-motion inputs

Lead Research Organisation: Queen Mary University of London
Department Name: School of Biological & Behavioural Sciences

Abstract

Context of research
Our physical environment contains many different cues that are perceived by our sensory systems, and as we move through the environment we observe a corresponding change in those cues. In mammals, the hippocampus and its adjacent areas in the medial temporal lobe have long been implicated in spatial navigation and learning. Several types of spatial neurons have been discovered in these areas, including place cells and grid cells, whose activity represents an animal's current location. Despite this discovery of an internal representation (or "map") of space, it remains unclear how the brain combines environmental sensory cues (e.g. visual landmarks) with self-motion information (e.g. locomotor or optic flow cues) to form these maps. Hence, the focus of this project is to disentangle the effects of visual and self-motion cues on place cells and grid cells during spatial mapping.
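To make concrete what it means for a neuron's activity to "represent" location: spatially tuned cells are conventionally characterised by an occupancy-normalised firing rate map, i.e. spikes per spatial bin divided by time spent in that bin. The Python sketch below is a minimal illustration of that standard computation; the function and variable names are illustrative, and this is not the project's actual analysis code.

    import numpy as np

    def rate_map(spike_xy, path_xy, dt, arena_size=1.0, n_bins=20):
        # spike_xy: (n_spikes, 2) animal position at each spike time
        # path_xy:  (n_samples, 2) tracked positions, sampled every dt seconds
        edges = np.linspace(0.0, arena_size, n_bins + 1)
        # Seconds spent in each spatial bin
        occupancy = np.histogram2d(path_xy[:, 0], path_xy[:, 1],
                                   bins=[edges, edges])[0] * dt
        # Spikes emitted in each spatial bin
        spikes = np.histogram2d(spike_xy[:, 0], spike_xy[:, 1],
                                bins=[edges, edges])[0]
        # Firing rate in Hz; unvisited bins become NaN rather than zero
        with np.errstate(invalid="ignore", divide="ignore"):
            return spikes / occupancy

In such a map, a place cell shows a single peak (its "place field"), whereas a grid cell shows a hexagonal lattice of peaks tiling the environment.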
Historically, this question was challenging to study in adult animals for three key reasons. First, separating the effects of visual and self-motion inputs is difficult to achieve in the real world. Second, place cell and grid cell networks are interconnected, making it difficult to study each network independently. Third, spatial representations appear almost instantaneously when an animal enters a new environment, suggesting that animals learn from previous experience, possibly developing a generalized code that enables them to quickly construct new spatial representations on demand.
I have recently developed a two-dimensional virtual reality (2D VR) system that provides mice with an immersive experience of navigating a virtual world. This pioneering development places me in a unique position to address a question that was previously intractable: the new 2D VR allows independent manipulation of visual and self-motion cues in 2D space. My preliminary data show that spatial representations in a virtual world are similar to those in the real world, but form at a much slower pace. Thus, for the first time, we have a prolonged window during which to study the formation of spatial representations, and in particular the effects of visual and self-motion cues on the formation process.
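As an illustration of what "independent manipulation" of the two cue types can mean in practice, one common VR approach is a visual gain manipulation: the virtual scene is advanced by the animal's physical displacement scaled by an experimenter-set factor, so that self-motion and visual cues can be put in controlled conflict. The sketch below is a hypothetical, simplified control step under that assumption, not the actual software of the system described here.

    import numpy as np

    def update_vr_position(vr_xy, ball_dxy, visual_gain=1.0):
        # gain = 1 couples vision and self-motion; gain != 1 decouples them
        return vr_xy + visual_gain * np.asarray(ball_dxy)

    pos = np.zeros(2)
    pos = update_vr_position(pos, ball_dxy=[0.05, 0.0], visual_gain=0.5)
    # The mouse ran 5 cm, but the visual world moved only 2.5 cm:
    # the two cue types now disagree by a factor of two.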
Aim and objectives
Taking advantage of the new 2D VR system, the project will study the distinct roles of place cells and grid cells in building spatial representations. The aim is to understand how place and grid cells interact and combine visual and self-motion cues to represent space. I will first establish the timeline over which spatial representations form in 2D virtual space in adult mice. Next, I will differentiate the contributions of visual and self-motion information to the formation of spatial representations. Finally, I will test how varying these cues affects established spatial maps.
Potential applications and benefits
The project tackles one of the key challenges outlined in the BBSRC's vision, "Understanding the rules of life", and is closely aligned with the BBSRC's strategic priority "Systems approaches to the biosciences".
The project offers a new angle for understanding the interaction between spatial cells and their functions in spatial learning, providing a foundation for applications in artificial intelligence, robotic navigation and ageing research. First, the findings will allow computational neuroscientists to build increasingly accurate models of long-term memory, contributing to the development of artificial intelligence. Second, the work offers insight into how robots might integrate multisensory inputs and navigate complex terrain. Third, the findings will help us understand the neural basis of memory processing in normal ageing, as well as in neurodegenerative diseases such as dementia.

Technical Summary

Our brain forms an internal representation (or "map") of space. Several types of spatial cells have been discovered in mammals, including place cells in the hippocampus and grid cells in the medial entorhinal cortex. The formation of spatial representations requires input from environmental sensory cues (such as visual landmarks) and self-motion cues (such as locomotor or optic flow cues). However, it remains unclear how spatial cells combine visual and self-motion cues to form these maps.
This question has been challenging to study, but my recently developed two-dimensional virtual reality (2D VR) system offers two unique advantages for addressing it. First, it allows independent manipulation of visual and self-motion cues in 2D space. Second, my pilot data show that grid cells take much longer to form spatial patterns than place cells. Thus, for the first time, we have a prolonged window during which to study the formation of spatial representations, and especially the effects of visual and self-motion cues on that formation.
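Tracking when grid patterns emerge requires a quantitative criterion. A standard choice is a "gridness" score based on rotational symmetry of the spatial autocorrelogram: hexagonal rate maps correlate with themselves under 60 and 120 degree rotations but not under 30, 90 or 150 degrees. The sketch below is deliberately simplified (published analyses typically restrict the correlation to an annulus around the central peak, which this version omits) and is illustrative rather than the project's actual pipeline.

    import numpy as np
    from scipy.ndimage import rotate
    from scipy.signal import correlate2d

    def grid_score(rate_map):
        rm = np.nan_to_num(rate_map - np.nanmean(rate_map))
        ac = correlate2d(rm, rm, mode="full")   # spatial autocorrelogram

        def rot_corr(angle):
            # Pearson correlation of the autocorrelogram with a rotated copy
            r = rotate(ac, angle, reshape=False)
            return np.corrcoef(ac.ravel(), r.ravel())[0, 1]

        # Hexagonal symmetry: high correlation at 60/120, low at 30/90/150
        return (min(rot_corr(a) for a in (60, 120))
                - max(rot_corr(a) for a in (30, 90, 150)))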
The project aims to understand how place and grid cells interact and combine visual and self-motion cues to represent space. I will take advantage of the 2D VR system, combined with in vivo electrophysiological recordings using tetrodes and/or Neuropixels probes. I will first investigate the de novo formation of spatial representations as adult mice first experience a 2D virtual space. I will then investigate the separate effects of visual and self-motion inputs on spatial representations at different points during their formation. In parallel, I will examine ensemble reactivations of place cells during slow-wave sleep throughout the formation period. Finally, I will investigate how visual and self-motion cues affect established spatial maps. The findings will demonstrate how place and grid cells play distinct roles in building spatial maps, using different cues dependent on experience.
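For the ensemble reactivation analysis mentioned above, one widely used family of methods compares the co-firing structure of cell pairs during exploration with that during subsequent slow-wave sleep (explained-variance style analyses). Below is a minimal sketch of that idea, assuming spike trains already binned into count matrices (cells x time bins); it is illustrative only, not the project's actual pipeline.

    import numpy as np

    def reactivation_strength(run_counts, sleep_counts):
        # Pairwise co-firing (Pearson correlations) in each behavioural state
        r_run = np.corrcoef(run_counts)
        r_sleep = np.corrcoef(sleep_counts)
        # Compare the unique cell pairs: if cells that fired together during
        # exploration also fire together in slow-wave sleep, the awake map
        # is being "reactivated"
        iu = np.triu_indices(run_counts.shape[0], k=1)
        return np.corrcoef(r_run[iu], r_sleep[iu])[0, 1]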
