Non-canonical binocular pathways in human vision

Lead Research Organisation: University of York
Department Name: Psychology

Abstract

The human brain combines information from many sources, including across our two forward-facing eyes with overlapping visual fields. This produces a single image of the external world and provides us with stereo (3D) vision. But the brain also combines information across the eyes for several other purposes. One example is the response of our pupils to changes in light levels: the brain must use light levels from both eyes to decide how much to dilate or constrict the pupils. This happens in a completely different network of brain regions from those involved in perception. Separately, studies in animals have found a direct pathway to motion-sensitive regions of the brain that may govern automatic eye movements to fast-moving objects. Although we know about the anatomy of these other binocular pathways, we understand much less about precisely what they are doing. This project aims to understand the computations involved, and to compare them to those in the perceptual pathway, which are better understood and for which mathematical models already exist.

To achieve this, we will perform experiments in human volunteers. Our first study will focus on the pathway that governs pupil diameter. Using an eye-tracker, we can measure how the pupils constrict and dilate in response to flickering lights. These responses can be compared to the responses of the perceptual parts of the brain, which we will measure at the same time using EEG (a technique that records electrical brain activity at the scalp). Our visual system involves multiple 'colour channels' that are fed by cells in the eye responding to different wavelengths of light. Some of these appear to be more important than others for determining the pupil response, so we will compare the characteristics of binocular combination across the different channels. We will also examine whether the channels interact across the eyes: for example, if we show light of different wavelengths to the left and right eyes, how does activity in the different colour channels combine?
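Both the pupil trace and the EEG signal can be summarised by their response at the flicker frequency. The sketch below is a rough illustration of that kind of frequency-domain analysis, not the project's actual pipeline; the sampling rates, flicker frequency and simulated recordings are invented for the example.

# Minimal sketch: amplitude and phase at a known flicker frequency,
# from a simulated pupil trace and EEG channel (all values illustrative).
import numpy as np

def response_at_frequency(signal, sample_rate, flicker_hz):
    """Return amplitude and phase of the signal component at flicker_hz."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                # remove the DC component
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    idx = np.argmin(np.abs(freqs - flicker_hz))    # nearest frequency bin
    amplitude = 2.0 * np.abs(spectrum[idx]) / signal.size
    phase = np.angle(spectrum[idx])
    return amplitude, phase

# Simulated 10-second recordings entrained to a 2 Hz luminance flicker:
# pupil diameter (mm) sampled at 60 Hz, EEG (microvolts) at 1000 Hz.
duration, flicker_hz = 10.0, 2.0
t_pupil = np.arange(0, duration, 1 / 60)
t_eeg = np.arange(0, duration, 1 / 1000)
pupil = 4.0 + 0.2 * np.sin(2 * np.pi * flicker_hz * t_pupil) + 0.05 * np.random.randn(t_pupil.size)
eeg = 1.5 * np.sin(2 * np.pi * flicker_hz * t_eeg) + np.random.randn(t_eeg.size)

print(response_at_frequency(pupil, 60, flicker_hz))   # amplitude ~0.2
print(response_at_frequency(eeg, 1000, flicker_hz))   # amplitude ~1.5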

Next, we will investigate the operation of the pathway that governs automatic eye movements to moving objects. The anatomical pathways involved may be directly activated by specific wavelengths of light (blue light, which activates the S-cone pathway). By measuring eye movements in response to fast-moving stimuli presented to one or both eyes, and targeted towards a particular colour pathway, we can directly assess the contribution of the different channels to this calculation. We will also measure brain activity, again using EEG, to help us understand the timecourse of the neural operations that govern binocular eye movement planning. In addition, we will repeat some of the above experiments in patients with amblyopia, a disorder of binocular vision in which one eye contributes much less to vision than the other. Although we know a great deal about the consequences of amblyopia for perception, it may be that other binocular pathways remain unaffected. Understanding this will aid the development of future treatments for the condition.
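A key dependent measure for these eye-movement experiments is the latency of the reflexive response. The sketch below shows one illustrative way of estimating onset latency from an eye-tracker trace, using a simple velocity-threshold rule on simulated data; the sampling rate, threshold and synthetic trace are assumptions for the example, not the project's specified method.

# Minimal sketch: onset latency of a reflexive eye movement from a
# position trace, via a smoothed velocity threshold (illustrative only).
import numpy as np

def onset_latency(position_deg, sample_rate, stim_onset_s,
                  velocity_threshold=10.0, smooth_ms=20):
    """Time (s) after stimulus onset when smoothed eye velocity first
    exceeds velocity_threshold (deg/s); None if it never does."""
    velocity = np.gradient(position_deg) * sample_rate            # deg/s
    win = max(1, int(smooth_ms * sample_rate / 1000))
    velocity = np.convolve(velocity, np.ones(win) / win, mode="same")
    t = np.arange(position_deg.size) / sample_rate
    idx = np.flatnonzero((t >= stim_onset_s) & (np.abs(velocity) > velocity_threshold))
    return None if idx.size == 0 else t[idx[0]] - stim_onset_s

# Synthetic trace (1000 Hz): fixation, then tracking at 20 deg/s starting
# ~150 ms after a stimulus onset at 0.5 s, plus a little tracker noise.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
position = np.where(t > 0.65, 20.0 * (t - 0.65), 0.0)
position += 0.01 * np.random.randn(t.size)

print(onset_latency(position, fs, stim_onset_s=0.5))   # ~0.15 s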

A final study will use state-of-the-art brain scanners (MRI and MEG) to measure the response of the brain to images shown to either one or both eyes, using a 3D projector. In the perceptual regions of the brain, the increased response when both eyes are open is balanced by a process of suppression between the eyes. This means that the brain activity is about the same whether one or both eyes see the stimulus, consistent with our everyday observation that the world does not change in appearance when two eyes are open compared to one. But we suspect that in the other binocular pathways there might be a much bigger increase when both eyes are stimulated. We will therefore look for brain regions that give a larger response to binocular stimulation than to monocular stimulation, for a range of different stimuli that are designed to target specific pathways (e.g. the motion pathway).
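The comparison described here reduces to a simple summary statistic for each brain region: the ratio of the binocular response to the mean monocular response. The sketch below computes it for hypothetical region-of-interest data (the region names and values are invented): ratios near 1 are expected where interocular suppression balances summation, and ratios approaching 2 would indicate the larger binocular responses we are looking for.

# Minimal sketch with made-up numbers: binocular/monocular response
# ratio per region of interest.
import numpy as np

roi_responses = {
    #  region         left eye  right eye  both eyes  (arbitrary units)
    "V1":             (1.00,     0.98,      1.05),
    "motion area":    (0.80,     0.82,      1.55),
    "pupil pathway":  (0.50,     0.52,      0.98),
}

for region, (left, right, binocular) in roi_responses.items():
    ratio = binocular / np.mean([left, right])
    print(f"{region:>14}: binocular/monocular ratio = {ratio:.2f}")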

Technical Summary

Binocular combination occurs in primary visual cortex (V1), and results in a cyclopean percept of the external world. But the brain also combines information across the eyes in other anatomically distinct pathways that bypass the canonical V1 route, and which govern other functions including pupil diameter and responses to rapid motion. Although the anatomical connections are known, the functional properties of these non-canonical binocular pathways have not been well established. This project will use an interdisciplinary, multimodal approach to investigate the computational algorithms involved in non-canonical binocular signal combination. We will use fMRI and MEG to compare monocular and binocular responses across the whole brain. In V1, gain control processes balance excitation and inhibition, and binocular and monocular responses are approximately equal. But in the non-canonical pathways we anticipate much greater binocular responses, in some areas more than twice the monocular response. Next, we will use a combination of pupillometry and steady-state EEG to measure the functional properties of the subcortical pathway that governs pupil diameter, and compare this to cortical responses. By targeting specific photoreceptor classes (e.g. melanopsin, S-cones) we can investigate binocular interactions within and between pathways. We will also explore the contributions of different chromatic pathways to reflexive eye movements, because a koniocellular pathway projects directly to area MT and is not constrained by the combination rules of V1. Finally, we will test individuals with amblyopia on a subset of experimental paradigms to determine whether binocular combination remains intact in non-canonical binocular pathways; this may be key to developing effective treatments for this condition. All experiments will be informed by contemporary computational models of binocular vision, which provide a single point of comparison across distinct anatomical pathways.
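Models of the kind referred to here typically take the form of interocular contrast gain control, e.g. the two-stage model of Meese, Georgeson & Baker (2006, Journal of Vision). The sketch below shows that general architecture with illustrative rather than fitted parameter values; it is a minimal sketch, not the project's model implementation.

# Minimal sketch of a two-stage binocular contrast gain control model
# (parameter values are illustrative, not fitted).
import numpy as np

def two_stage_response(c_left, c_right, m=1.3, p=8.0, q=6.5, s=1.0, z=0.08):
    """Model response to contrasts (in %) shown to the left and right eyes."""
    # Stage 1: monocular responses, each suppressed by the contrast in both eyes.
    stage1_left = c_left ** m / (s + c_left + c_right)
    stage1_right = c_right ** m / (s + c_left + c_right)
    # Stage 2: binocular summation followed by a second stage of gain control.
    summed = stage1_left + stage1_right
    return summed ** p / (z + summed ** q)

# Interocular suppression keeps the binocular response close to the monocular
# one, mirroring the near-equal V1 responses described above.
print(two_stage_response(32.0, 0.0))    # one eye stimulated
print(two_stage_response(32.0, 32.0))   # both eyes stimulated: nearly equal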
 
Title PySilSub: a toolbox for silent substitution 
Description A normal human retina contains several classes of photosensitive cell: rods for low-light vision, three types of cones for daylight vision, and the intrinsically photosensitive retinal ganglion cells (ipRGCs), which express melanopsin and control non-image-forming functions (e.g., pupil size, circadian rhythms). The spectral sensitivities of the photoreceptors overlap significantly, meaning that most lights will stimulate all photoreceptors, but to varying degrees. The method of silent substitution (Estévez & Spekreijse, 1982, Vision Research, 22[6], 681-691) provides a principled basis for stimulating individual photoreceptor classes selectively, which is useful in research and clinical settings. The main hardware requirement for silent substitution is a spectrally calibrated light stimulation system with at least as many primaries as there are photoreceptors under consideration. Device settings that will produce lights to selectively stimulate the photoreceptor(s) of interest can be found using a variety of analytic and algorithmic approaches. Here we present PySilSub, a novel Python package for silent substitution featuring object-oriented support for individual colorimetric observer models and multi-primary stimulation devices, and for solving silent substitution problems with linear algebra and constrained numerical optimisation. The software is registered with the Python Package Index (pip install pysilsub) and includes example data sets from various multi-primary systems. We hope that PySilSub will further encourage the application of silent substitution in research and clinical settings. 
Type Of Art Film/Video/Animation 
Year Produced 2022 
URL https://figshare.com/articles/poster/PySilsub_a_toolbox_for_silent_substitution/21711830/6
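To make the linear-algebra approach mentioned in the description concrete, the sketch below solves a toy silent substitution problem with NumPy. This is not the PySilSub API, and the 5x5 photoreceptor-by-primary excitation matrix is invented purely for illustration.

# Minimal sketch of silent substitution as a linear system: find primary
# modulations that change melanopsin excitation while silencing cones and rods.
import numpy as np

# A[i, j] = change in excitation of photoreceptor i per unit change of primary j
# (rows: S, M, L cones, rods, melanopsin; columns: five primaries; values invented).
A = np.array([
    [0.9, 0.3, 0.1, 0.05, 0.02],   # S cones
    [0.2, 0.8, 0.6, 0.30, 0.10],   # M cones
    [0.1, 0.5, 0.9, 0.60, 0.20],   # L cones
    [0.3, 0.7, 0.5, 0.40, 0.15],   # rods
    [0.4, 0.6, 0.3, 0.20, 0.10],   # melanopsin
])

# Target: modulate melanopsin by 0.2 while holding the other classes constant.
target = np.array([0.0, 0.0, 0.0, 0.0, 0.2])

# With as many primaries as photoreceptors this is a square linear system.
delta_primaries = np.linalg.solve(A, target)

print("primary modulations:", np.round(delta_primaries, 3))
print("achieved excitations:", np.round(A @ delta_primaries, 3))  # ~[0, 0, 0, 0, 0.2]

In practice the excitation matrix comes from calibrated device spectra and observer-specific spectral sensitivities, and constrained optimisation is used when the solution must also respect the device's output limits.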
 