Does sleep flush out the unwanted leftovers of recent cognitive activities?

Lead Research Organisation: University of Exeter
Department Name: Psychology

Abstract

This project examines whether sleep helps flush out unwanted leftovers of recent perceptual and cognitive activities in addition to consolidating new learning.

Traditional views of learning assume that new memories are shaky for a while, but soon consolidate, becoming resistant to amnestic agents and interference from new learning (McGaugh, 2000; Wixted, 2004). Recent theorizing (Stickgold & Walker, 2013), however, insists that memory consolidation is not the uniform and indiscriminate process just described, but is instead highly selective and adaptive, tailored to the goal-directed needs of the organism.

Sleep, with its associated neurophysiological states, is the prime vehicle for a "memory triage" process: only memories that are emotionally salient, or worth remembering for future use, get assimilated into the brain's landscape of ever-evolving knowledge. Here we ask what happens to unwanted memories, which also start off in this no man's land of in-between memory (i.e., between working memory and long-term memory; henceforth, IBM). The goal of this project is to determine whether, in addition to stabilizing and assimilating useful memories, sleep also cleans the slate of unwanted memories to reset the system, so that we can start afresh the next day.

Evidence for offline, sleep-dependent memory consolidation is substantial, showing a variety of transformative effects, including increased resilience to amnestic agents, enhanced accessibility, spontaneous recovery, extraction of underlying structures, and integration with existing knowledge. In contrast, the existence of memory clean-up has not yet been substantiated. We suggest this is largely because researchers have assumed that forgetting could be induced simply by instructing learners to forget newly learnt information.

Our premise is that unwanted leftovers mostly comprise the lingering activation of long-term memories recently evoked by perceptual, expressive, or imagery-based experience. This activation accumulates as a record of each memory's involvement over the course of a day. As such, it could insidiously influence how we behave during the next few hours, and perhaps for as long as we remain awake. These persisting activation by-products are precisely what an active forgetting mechanism would get rid of.

To test this hypothesis, we explore the evolution of memory traces left by language exposure and/or practice and use. Language provides an ideal testbed, as the same linguistic event can provoke both lingering effects that one would not necessarily want to keep and sleep-dependent offline consolidation of unitized new information. We already have linguistic performance measures that index both kinds of effects, and the consolidation of new lexical knowledge through sleep is well documented at this stage.

Work Package 1 examines the fate of memories for novel word forms and their potential for either consolidation or clean-up. Work Package 2 provides multiple tests of whether sleep does clean the slate by removing lingering traces. Our pilot data (Fig. TA6) suggest that it does. Finally, Work Package 3 focuses on at-risk populations, namely older adults and sleep apnoea patients, to evaluate the impact of poor sleep on both memory consolidation and clean-up simultaneously. If older adults show impaired clean-up on top of consolidation problems, reduced clean-up could be one of the factors behind the gradual cognitive decline that characterizes normal aging.

Our experiments focus on language memories to explore whether sleep resets what we have called IBM. Positive findings will provide a proof of concept, with a strong potential for applications in domains as varied as education, work patterns, sports science, aging, extended military missions, and neurocognitive rehabilitation.

Planned Impact

As this project revolves around the idea that sleep helps flush out unwanted leftovers of recent cognitive activities, it has major implications for understanding the role of sleep in learning and memory. Although we focus on language exposure and practice to explore active forgetting, positive findings will provide a proof of concept, with strong potential applications in domains as varied as education, work patterns, sports science, extended military missions, and neurocognitive rehabilitation.

Work Package 3 investigates two populations at risk of weak clean-up of IBM because their sleep is reduced, disturbed, or both: sleep apnoea patients and healthy older adults. As we suggest in the Case for Support, one source of general cognitive decline in aging may be a reduced capacity to actively remove cognitive leftovers of daily activities due to chronic sleep disturbance. Both populations have already shown poor memory consolidation compared to normal or younger controls in some domains of knowledge. Our central claim is that this could be only one edge of a double-edged sword: sleep may be the key to stabilizing and transforming relevant/salient memories and assimilating them into existing structures, as well as to resetting our cognitive apparatus to start afresh the next day. Demonstrating this would be the core contribution of this project.

A search of the Web of Science's Core Collection returns only 17 hits for papers on sleep AND (unlearning OR "active forgetting"), compared to 1,618 papers on sleep AND "memory consolidation". This demonstrates that very little is known about our topic, and that our project is thus far more than incremental. Moreover, 6 of these 17 papers came out in the last three years, with the other 11 published between 1993 and 2007. Clearly, as more is understood about how sleep helps to assimilate some memories, there is new interest in the possibility that it could also help us to forget other information. In fact, on 2 February 2017, the New York Times ran a story on just this topic; this is timely research.

A direct practical implication of our project illustrates its potential impact. Experiment 8 tests whether sleep removes lingering semantic interference in speech production (i.e., naming objects in a given semantic field impairs naming of semantically related objects for at least 12 hours of wakefulness). Although in healthy subjects this effect expresses itself as a mere 30-50 ms increase in speech latencies, in aphasic patients it can leave the patient unable to access the target name, stuck with the label of a previously named object. If, as suggested by our pilot data, this interference indeed persists and sleep is one way to remove it, then speech remediation protocols should either avoid practice of semantically related items on the same day, or include an early afternoon nap to provide a clean break between the items practiced in the morning and those practiced later in the day.

A similar logic underlies the potential impact of our research for extended military missions. Decision making is one of the first higher cognitive functions to degrade because of sleep deprivation (Killgore et al., 2010). This failure is especially likely if sleep deprivation occurs on top of a chronic lack of sleep, as is the case in most extended sorties. A failure of cognitive control is exactly what one would expect if memory clean-up has not occurred for a period of time. Thus, the optimal micro-nap time needed to allow clean-up should be explored. The efficacy of drugs and dietary supplements against sleep deprivation in soldiers should be benchmarked against measures of lingering interference and its natural antidote: sleep-induced clean-up.

These are just two examples of societal implications of our project. If, as our pilot data suggest, sleep actively removes unwanted perceptual and cognitive leftovers, there will be a very wide range of such impacts.

Publications

 
Description Exploring the structure of the reading system via word learning 
Organisation Ohio State University
Country United States 
Sector Academic/University 
PI Contribution I am a full contributor to this project resumed earlier this academic year, in which we (Blair Armstrong and Dennis Miller, from Toronto Psychology Dept, Mark Pitt, from Ohio State Psychology Dept, and myself) study how learning to read aloud proceeds, by means of neural network simulations and experiments on human participants. I contribute intellectually to developing research protocols, analysing results and writing up research findings. In addition, students and research interns inmy lab are running the experiments on human participants under my supervision. After a first strong publication in Journal of Experimental Psychology: General (5-yr Impact Factor: 5.3) in 2017 (Armstrong, B. C., Dumay, N., Kim, W., & Pitt, M. A. (2017). Generalization from newly learned words reveals structural properties of the human reading system. Journal of Experimental Psychology: General, 146(2), 227-249. https://doi.org/10.1037/xge0000257) before this project went dormant for 2 years, we will be presenting our new simulation results at the Annual Conference of the Cognitive Science Society, publishing a 6-p peer reviewed article in their proceedings. This paper is currently under review (see the abstract below). The submitted data will be included in a larger paper to combine human and computer data. Abstract: How do neural network models of quasiregular domains, such as spelling-sound correspondences in English, learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work proposed that a graded ``warping'' mechanism allows for the implicit representation of how a new word's pronunciation should generalize when it is first learned. 
We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), were comprised of a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). We also ``diluted'' these pronunciations, such that we either presented one or multiple made-up words that shared the same rhyme, increasing context variability. We observed that dilution promoted generalization of novel pronunciations. These results point to the importance of context variability in modulating warping in quasiregular domains.
Collaborator Contribution Postdoctoral research assistant Dennis Miller is the computer simulation wizard in this project and works under the supervision of Blair Armstrong in Toronto. Blair Armstrong is also in charge of organizing several studies on human participants to be carried out at the University of Toronto. Mark Pitt contributes intellectually to the project and, like all of us, is involved at the write-up stage.
Impact - Miller, I. D., Dumay, N., Pitt, M. A., Lam, B., & Armstrong, B. C. (Under revision). Context variability promotes generalization in reading aloud: Insight from a neural network simulation. To appear in Proceedings of the Annual Conference of the Cognitive Science Society.
Start Year 2019
 
Description Exploring the structure of the reading system via word learning 
Organisation University of Toronto
Country Canada 
Sector Academic/University 
PI Contribution I am a full contributor to this project, resumed earlier this academic year, in which we (Blair Armstrong and Dennis Miller, from the University of Toronto Psychology Department, Mark Pitt, from the Ohio State Psychology Department, and myself) study how learning to read aloud proceeds, by means of neural network simulations and experiments on human participants. I contribute intellectually to developing research protocols, analysing results, and writing up research findings. In addition, students and research interns in my lab run the experiments on human participants under my supervision. After a first strong publication in 2017 in the Journal of Experimental Psychology: General (5-year Impact Factor: 5.3), namely Armstrong, B. C., Dumay, N., Kim, W., & Pitt, M. A. (2017). Generalization from newly learned words reveals structural properties of the human reading system. Journal of Experimental Psychology: General, 146(2), 227-249. https://doi.org/10.1037/xge0000257, the project went dormant for two years. We will now be presenting our new simulation results at the Annual Conference of the Cognitive Science Society and publishing a 6-page peer-reviewed article in its proceedings. This paper is currently under review (see the abstract below). The submitted data will be included in a larger paper combining human and computer data. Abstract: How do neural network models of quasiregular domains, such as spelling-sound correspondences in English, learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work proposed that a graded "warping" mechanism allows for the implicit representation of how a new word's pronunciation should generalize when it is first learned.
We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), consisted of a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). We also "diluted" these pronunciations, presenting either one or multiple made-up words sharing the same rhyme, thereby increasing context variability. We observed that dilution promoted generalization of novel pronunciations. These results point to the importance of context variability in modulating warping in quasiregular domains.
Collaborator Contribution Postdoctoral research assistant Dennis Miller is the computer simulation wizard in this project and works under the supervision of Blair Armstrong in Toronto. Blair Armstrong is also in charge of organizing several studies on human participants to be carried out at the University of Toronto. Mark Pitt contributes intellectually to the project and, like all of us, is involved at the write-up stage.
Impact - Miller, I. D., Dumay, N., Pitt, M. A., Lam, B., & Armstrong, B. C. (Under revision). Context variability promotes generalization in reading aloud: Insight from a neural network simulation. To appear in Proceedings of the Annual Conference of the Cognitive Science Society.
Start Year 2019
 
Description Individual differences in plasticity in speech perception 
Organisation Korea Aerospace University
Country Korea, Republic of 
Sector Academic/University 
PI Contribution I am the main investigator of this project, in which we (Donghyun Kim from the University of Exeter, Meghan Clayards from McGill University, and Eun Jong Kong from Korea Aerospace University) study how listeners flexibly adapt to unfamiliar speech patterns such as foreign accents. In this project, I have been in charge of conceptualizing research goals, designing experiments, data collection, formal analysis, writing the original draft, and revisions. The paper has been revised and resubmitted to a journal and is now under review. Abstract: The present study examines whether listeners flexibly adapt to unfamiliar speech patterns such as those encountered in foreign-accented English vowels, where the relative informativeness of primary (spectral quality) and secondary (duration) cues tends to be reversed (e.g., spectrally similar but exaggerated duration differences between bet and bat). The study further tests whether listeners' adaptive strategies are related to individual differences in phoneme categorization gradiency and cognitive abilities. Native English listeners (N = 36) listened to a continuum of vowels from /ɛ/ to /æ/ (as in head and had) varying in spectral and duration values to complete a perceptual adaptation task and a visual analog scaling (VAS) task. Participants also completed cognitive tasks examining executive function capacities. Results showed that listeners mostly used spectral quality to signal vowel category at baseline, but flexibly adapted by up-weighting reliance on duration when spectral quality was no longer diagnostic. In the VAS task, some listeners made more categorical responses whereas others made more gradient responses in vowel categorization, but these differences were not linked to their adaptive patterns. Results of the cognitive tasks revealed that individual differences in inhibitory control correlated, to some degree, with the amount of adaptation.
Together, these findings suggest that listeners flexibly adapt to unfamiliar speech categories using distributional information in the input, and that individual differences in cognitive abilities may influence their adaptability.
Collaborator Contribution Meghan Clayards contributes to the development of methodology, discussions of results, and revisions to successive versions of the manuscript. Eun Jong Kong also contributes to discussions and revisions.
Impact Kim, D., Clayards, M., & Kong, E. J. (revised and resubmitted). Individual differences in perceptual adaptation to unfamiliar phonetic categories. Journal of Phonetics.
Start Year 2018
 
Description Individual differences in plasticity in speech perception 
Organisation McGill University
Country Canada 
Sector Academic/University 
PI Contribution I am the main investigator of this project, in which we (Donghyun Kim from the University of Exeter, Meghan Clayards from McGill University, and Eun Jong Kong from Korea Aerospace University) study how listeners flexibly adapt to unfamiliar speech patterns such as foreign accents. In this project, I have been in charge of conceptualizing research goals, designing experiments, data collection, formal analysis, writing the original draft, and revisions. The paper has been revised and resubmitted to a journal and is now under review. Abstract: The present study examines whether listeners flexibly adapt to unfamiliar speech patterns such as those encountered in foreign-accented English vowels, where the relative informativeness of primary (spectral quality) and secondary (duration) cues tends to be reversed (e.g., spectrally similar but exaggerated duration differences between bet and bat). The study further tests whether listeners' adaptive strategies are related to individual differences in phoneme categorization gradiency and cognitive abilities. Native English listeners (N = 36) listened to a continuum of vowels from /ɛ/ to /æ/ (as in head and had) varying in spectral and duration values to complete a perceptual adaptation task and a visual analog scaling (VAS) task. Participants also completed cognitive tasks examining executive function capacities. Results showed that listeners mostly used spectral quality to signal vowel category at baseline, but flexibly adapted by up-weighting reliance on duration when spectral quality was no longer diagnostic. In the VAS task, some listeners made more categorical responses whereas others made more gradient responses in vowel categorization, but these differences were not linked to their adaptive patterns. Results of the cognitive tasks revealed that individual differences in inhibitory control correlated, to some degree, with the amount of adaptation.
Together, these findings suggest that listeners flexibly adapt to unfamiliar speech categories using distributional information in the input, and that individual differences in cognitive abilities may influence their adaptability.
Collaborator Contribution Meghan Clayards contributes to the development of methodology, discussions of results, and revisions to successive versions of the manuscript. Eun Jong Kong also contributes to discussions and revisions.
Impact Kim, D., Clayards, M., & Kong, E. J. (revised and resubmitted). Individual differences in perceptual adaptation to unfamiliar phonetic categories. Journal of Phonetics.
Start Year 2018