Representing and responding in the visual world: a new model of contextual cuing.

Lead Research Organisation: University College London
Department Name: Experimental Psychology

Abstract

One of the most fundamental psychological functions humans possess is the ability to recognise a familiar scene and perform an action relevant to it, or to some internal goal. For example, we may want to search for our keys within a room in our house. We know that the actions we perform in completing this simple task will vary depending on whether we are searching for the keys in the kitchen, the bathroom or the lounge. That is, the context in which we are situated is crucial to the order in which we search locations in the room. We rely on our memory for a specific scene to narrow down the options and make our search as efficient as possible.

Of course, this form of learning is not restricted to locating household objects. The same cognitive processes are likely to play an important role whenever an organism acts within a complex environment, from driving a car in crowded traffic, to walking to the local shop, to an animal hunting its prey in a forest. In every case there is a need to process the different features, or cues, within the environment, and then to generate predictions about where certain elements will be located, using memory of previous encounters with similar scenes.

We currently have a very limited understanding of this fundamental aspect of human behaviour. In experimental tasks developed over the last 10 years, researchers have started to examine how we learn this type of information. In a laboratory task designed to invoke this behaviour, participants view a computer screen which on each trial displays a context of distractor objects (e.g., differently coloured "L" shapes) and a unique target object (e.g., a "T" shape). Participants have the goal of locating the target and responding to a particular feature of the shape, such as its orientation, and their reaction times are measured. Crucially, some distractor configurations are repeated throughout the task, and participants are reliably faster to locate the target in these repeated configurations than in completely novel arrangements. This decrease in the time taken to detect targets indicates that participants store the repeating patterns of context in long-term memory.

The current research aims to test a newly proposed computational model of this type of learning. The model learns about repeating scenes by creating memories for how specific objects within the scene are arranged with respect to the target object. Taking the analogy of a familiar kitchen, the model predicts that we learn only about the spatial location of a target object relative to the other objects in the room (e.g., the toaster is opposite the fridge and to the right of the cooker). This information is enough to explain why the toaster is located more quickly on successive searches, as each object in the kitchen provides some information about the toaster's location. However, it seems natural to suppose that we will also learn about other aspects of the kitchen that are not relevant to our search. That is, we will engage in "incidental learning" (unintentional or automatic learning) about the general layout of the room, the objects present and their positions relative to one another (e.g., the cooker is opposite the fridge, the sink is under the window). Recent evidence from our laboratory suggests that these relationships, which are irrelevant to the search task, are indeed also learned.
The current project takes these important findings as a starting point and will provide a thorough examination of scene-learning processes, allied to the development of a new computational model. In a complementary strand of research we will monitor and study eye movements during scene learning. These data will determine whether attention plays a key role in controlling learning in this behaviour, and the findings will in turn inform the development of our new model of scene learning.
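To make the associative account described above concrete, the following is a minimal, purely illustrative Python sketch, not the project's actual model: the grid size, the delta-rule update and the linear mapping from cue strength to simulated reaction time are all assumptions made for the example. It simulates a contextual cuing experiment in which a few repeated configurations are interleaved with novel ones, and it reproduces the basic effect of faster simulated search in repeated displays.

```python
import random

GRID_CELLS = 64  # hypothetical 8 x 8 grid of possible screen locations

def make_config(n_distractors=11):
    """Sample one target location and n distractor locations, no overlaps."""
    cells = random.sample(range(GRID_CELLS), n_distractors + 1)
    return cells[0], tuple(cells[1:])  # (target location, distractor locations)

class ContextualCuingModel:
    """Toy associative account: each distractor location acquires strength
    toward the target location via a delta-rule (error-driven) update."""

    def __init__(self, learning_rate=0.2):
        self.weights = {}  # (distractor_loc, target_loc) -> strength
        self.learning_rate = learning_rate

    def cue_strength(self, target, distractors):
        return sum(self.weights.get((d, target), 0.0) for d in distractors)

    def trial(self, target, distractors):
        strength = self.cue_strength(target, distractors)
        # Simulated reaction time: baseline search time reduced by the
        # contextual guidance the learned associations provide.
        rt = 1000 - 400 * min(1.0, strength)
        # Distractor cues share the prediction error for the target location.
        error = 1.0 - strength
        for d in distractors:
            key = (d, target)
            self.weights[key] = (self.weights.get(key, 0.0)
                                 + self.learning_rate * error / len(distractors))
        return rt

random.seed(1)
model = ContextualCuingModel()
repeated = [make_config() for _ in range(4)]  # shown again on every block
for block in range(1, 21):
    rep_rts = [model.trial(t, ds) for t, ds in repeated]
    nov_rts = [model.trial(*make_config()) for _ in range(4)]
    if block % 5 == 0:
        print(f"block {block:2d}: repeated {sum(rep_rts) / 4:5.0f} ms, "
              f"novel {sum(nov_rts) / 4:5.0f} ms")
```

Because the distractor cues share the prediction error, the sketch embodies the core associative claim: each object in a repeated scene carries partial information about the target's location, so simulated reaction times for repeated configurations fall across blocks while novel configurations stay near baseline.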

Planned Impact

Visual context learning is a fundamental cognitive process which plays an important role in our ability to comprehend and act within the familiar scenes we experience. However, key aspects of the psychological processes responsible for this learning are poorly understood. The proposed project aims to bridge this knowledge gap with the development of a new formal model of scene learning. The model will provide a testable account of a wide range of findings in the contextual cuing literature, as well as recent data from our own laboratory, and we will explore its predictions in a series of novel behavioural experiments.

The primary beneficiaries of this research will be academics in a range of fields within the cognitive sciences. The research ideas stem from the vast literature on associative learning theory, so the data will be of interest to researchers with broad interests in animal and human learning and in computational models of learning. The tasks we use are taken from the implicit learning literature, and the data will therefore be important for researchers interested in the explicit/implicit learning distinction, which is of central importance within cognitive psychology. The research extends the application of associative learning theory to theories of visual context (or scene) learning, broadening the intended audience to the fields of visual perception and attention. Finally, specific attentional components will be tested using eye-tracking tools. The relative novelty of these techniques within cognitive psychology means that the methodological and data-analysis aspects of our research will be of great interest to researchers across a wide range of fields.

We will maximise the impact of our research within these academic fields through publications in journals with broad readership (e.g., Journal of Experimental Psychology: General; Cognitive Science). Similarly, presentations will be given at conferences attended by researchers from a wide range of fields (the Meeting of the Cognitive Science Society, the Annual Meeting of the Psychonomic Society, and meetings of the Experimental Psychology Society).

We also believe the research has wider societal benefits. For example, a greater understanding, and formal theories, of visual context learning would facilitate the development of artificial systems capable of navigation and scene recognition. Applications of these technologies could include robotics for military use and scene recognition in surveillance systems.

Publications

Beesley T (2016). Configural learning in contextual cuing of visual search. Journal of Experimental Psychology: Human Perception and Performance.

Beesley T (2018). Overt attention in contextual cuing of visual search is driven by the attentional set, but not by the predictiveness of distractors. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Beesley T (2015). Pre-exposure of repeated search configurations facilitates subsequent contextual cuing of visual search. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Vadillo MA (2016). Underpowered samples, false negatives, and unconscious learning. Psychonomic Bulletin & Review.

 
Description The present project developed and tested a new model of visual search, aimed at explaining how we learn to find objects in familiar environments. The predictions of the model have been confirmed in several experiments and through computational modelling. These predictions have also been explored using eye-tracking measures, which allow us to study how people deploy their attentional resources during visual search in familiar contexts.

The conclusions of our studies have important implications for any attempt to design artificial-intelligence algorithms that mimic humans' ability to deploy visual attention efficiently. They also provide a unique insight into the mechanisms of human learning and memory.

This research project has also consolidated a fruitful collaboration between David Shanks (PI of the project), Miguel A. Vadillo (now working at King's College London) and Tom Beesley (University of New South Wales) that remains fully active despite the end of the project. Thanks to the present project we have also contributed to the development of better methods for the use of eye-tracking technologies in behavioural research.
Exploitation Route Our results will be of interest not only to experimental psychologists working in the area of human learning and memory, but also to researchers working on visual cognition in artificial intelligence.

Furthermore, as a result of our research we have developed a new algorithm for the correction of eye-tracking data that is now available to any researcher working with these methods. Further details about these methods and the potential applications of our research can be found at the authors' websites:

David Shanks (https://sites.google.com/site/davidshanksucl/), Miguel A. Vadillo (http://mvadillo.com), and Tom Beesley (http://tombeesley.wordpress.com/).
Sectors Digital/Communication/Information Technologies (including Software), Education, Electronics

URL http://discovery.ucl.ac.uk/1428694/
 
Description Our research findings have so far yielded two publications in leading journals. One of these (in JEP:LMC) reports the results of our tests of theories of attention deployment in contextual cuing and is principally of interest to other researchers in the field. The second publication (in BRM) is likely to have considerably wider impact: it describes a new algorithm we have developed for the correction of eye-tracking data, now available to any researcher working with these important and fast-developing methods. The project has also consolidated a fruitful collaboration between David Shanks (PI of the project), Miguel A. Vadillo (now working at King's College London) and Tom Beesley (University of New South Wales) that remains fully active despite the end of the project. A further grant application is under development.
First Year Of Impact 2015
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal, Economic