Does sleep flush out the unwanted leftovers of recent cognitive activities?

Lead Research Organisation: UNIVERSITY OF EXETER
Department Name: Psychology

Abstract

This project examines whether sleep helps flush out unwanted leftovers of recent perceptual and cognitive activities in addition to consolidating new learning.

Traditional views of learning assume that new memories are shaky for a while, but soon consolidate, becoming resistant to amnestic agents and interference from new learning (McGaugh, 2000; Wixted, 2004). Recent theorizing (Stickgold & Walker, 2013), however, insists that memory consolidation is not the uniform and indiscriminate process just described, but is instead highly selective and adaptive, tailored to the goal-directed needs of the organism.

Sleep, with its associated neurophysiological states, is the prime vehicle for a "memory triage" process: only memories that are emotionally salient, or worth remembering for future use, get assimilated into the brain's landscape of ever-evolving knowledge. Here we ask what happens to unwanted memories, which also start off in this no man's land of in-between memory (i.e., between working memory and long-term memory; henceforth, IBM). The goal of this project is to determine whether, in addition to stabilizing and assimilating useful memories, sleep also cleans the slate of unwanted memories to reset the system, in order to start afresh the next day.

Evidence for offline, sleep-dependent memory consolidation is substantial, showing a variety of transformative results, including increased resilience to amnestic agents, enhanced accessibility, spontaneous recovery, extraction of underlying structures, and integration with existing knowledge. In contrast, the existence of memory clean-up has not yet been substantiated. We suggest this is largely because researchers thought they could just force learners to forget newly learnt information.

Our premise is that unwanted leftovers mostly comprise the lingering activation of long-term memories recently evoked by perceptual, expressive, or imagery-based experience. This activation accumulates over the course of a day, keeping a record of each memory's recent involvement. As such, it could influence, rather insidiously, how we behave during the next few hours, and perhaps for as long as we remain awake. These persisting activation byproducts are precisely what an active forgetting mechanism would get rid of.

To test this hypothesis, we explore the evolution of memory traces left by language exposure and/or practice and use. Language provides an ideal testbed, as the same linguistic event can provoke both lingering effects that one would not necessarily want to keep and sleep-dependent offline consolidation of unitized new information. We already have linguistic performance measures that index both kinds of effects, and the consolidation of new lexical knowledge through sleep is well documented at this stage.

Work Package 1 examines the fate of memories for novel word forms and their potential for either consolidation or clean-up. Work Package 2 provides multiple tests of whether sleep does clean the slate by removing lingering traces. Our pilot data (Fig. TA6) suggest that it does. Finally, Work Package 3 focuses on at-risk populations, namely older adults and sleep apnoea patients, to evaluate the impact of poor sleep on both memory consolidation and clean-up simultaneously. If older adults show impaired clean-up on top of consolidation problems, reduced clean-up could be one of the factors behind the gradual cognitive decline that characterizes normal aging.

Our experiments focus on language memories to explore whether sleep resets what we have called IBM. Positive findings will provide a proof of concept, with a strong potential for applications in domains as varied as education, work patterns, sports science, aging, extended military missions, and neurocognitive rehabilitation.

Planned Impact

As this project revolves around the idea that sleep helps flush out unwanted leftovers of recent cognitive activities, it has major implications for understanding the role of sleep in learning and memory. Although we focus on language exposure and practice to explore active forgetting, positive findings will provide a proof of concept, with strong potential applications in domains as varied as education, work patterns, sports science, extended military missions and neurocognitive rehabilitation.

Work Package 3 investigates two populations at risk of weak clean-up of IBM because their sleep is reduced, disturbed, or both: sleep apnoea patients and healthy older adults. As we suggest in the Case for Support, one source of general cognitive decline in aging may be a reduced capacity to actively remove cognitive leftovers of daily activities due to chronic sleep disturbance. Both populations have already shown poor memory consolidation compared to normal or younger controls in some domains of knowledge. Our central claim is that this could be only one edge of a double-edged sword: sleep may be the key to stabilizing and transforming relevant/salient memories and assimilating them into existing structures, as well as resetting our cognitive apparatus to start afresh the next day. Demonstrating this would be the core contribution of this project.

A search of the Web of Science's Core Collection returns only 17 hits for papers on sleep AND (unlearning OR "active forgetting"), compared to 1,618 papers on sleep AND "memory consolidation". This demonstrates how little is known about our topic, and that our project is thus far beyond incremental. Moreover, 6 of these 17 papers came out in the last three years, with the other 11 published between 1993 and 2007. Clearly, as more is understood about how sleep helps to assimilate some memories, there is new interest in the possibility that it could also help us to forget other information. In fact, on February 2, 2017, the New York Times ran a story on just this topic; this is timely research.

A direct practical implication of our project illustrates its potential impact. Experiment 8 tests whether sleep removes lingering semantic interference in speech production (i.e., naming objects in a given semantic field impairs naming semantically related objects for at least 12 hrs of wake). Although in healthy subjects this effect expresses itself as a mere 30-50 ms increase in speech latencies, in aphasic patients it can leave the patient unable to access the target name, stuck with the label of a previously named object. If, as suggested by our pilot data, this interference indeed persists and sleep is one way to remove it, then speech remediation protocols should either avoid practice of semantically related items on the same day, or include an early afternoon nap to provide a clean break between the items practiced in the morning and those practiced later in the day.

A similar logic underlies the potential impact of our research for extended military missions. Decision making is one of the first higher cognitive functions to degrade under sleep deprivation (Killgore et al., 2010). This failure is especially likely if the sleep deprivation occurs on top of a chronic lack of sleep, as is the case in most extended sorties. A failure of cognitive control is exactly what one would expect if memory clean-up has not occurred for a period of time. Thus, the optimal micro-nap time needed to allow clean-up should be explored. The efficacy of drugs and dietary supplements against sleep deprivation in soldiers should be benchmarked against measures of lingering interference and its natural antidote: sleep-induced clean-up.

These are just two examples of societal implications of our project. If, as our pilot data suggest, sleep actively removes unwanted perceptual and cognitive leftovers, there will be a very wide range of such impacts.

Publications

Baese-Berk MM (2022) Just give it time: Differential effects of disruption and delay on perceptual learning. in Attention, Perception & Psychophysics

Charoy J (2020) The effect of orthography on the recognition of pronunciation variants. in Journal of Experimental Psychology: Learning, Memory, and Cognition

Miller I.D. (2020) Context variability promotes generalization in reading aloud: Insight from a neural network simulation. in Proceedings of the 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020

Samuel AG (2021) Auditory selective adaptation moment by moment, at multiple timescales. in Journal of Experimental Psychology: Human Perception and Performance

 
Description Although this award ended on the 31st of July, we are still in the process of analyzing the many datasets that were collected during the time of the award.

I will here highlight some of our key findings so far:

Outcome 1: Sleep enhances perceptual details of newly learnt words.
How we perceive objects, whatever the sensory modality involved, depends on the context in which they present themselves to us. Anthropomorphist painters, like Giuseppe Arcimboldo (1527-1593) and Athanasius Kircher (1602-1680), and collage artists linked to the Dada and surrealist movements, starting with the likes of Hannah Höch (1889-1978) and Man Ray (1890-1976), have long taken advantage of this fact to trick our eyes and have made the point exquisitely.

In the language domain, the "word superiority" effect (Cattell, 1886; Reicher, 1969; Wheeler, 1970) provides another powerful illustration of this Gestalt phenomenon. In skilled readers, it has been repeatedly shown that, under noisy/brief presentation conditions, letters are more readily identified in familiar words than in unfamiliar yet plausible pseudowords (e.g., 'JENK'), let alone random strings (e.g., 'JSNK'). (e.g., Baron & Thurston, 1973; Grainger et al., 2003; Hayman & Jacoby, 1989; Kezilas et al., 2016; McClelland, 1976; McClelland & Johnston, 1977; Paap et al., 1982, 2000; Ripamonti et al., 2018; Rumelhart & McClelland, 1982; Starrfelt et al., 2013).

Two in-lab studies (circa 600 hrs of human data collection; total N = 144) funded by this award examined whether, and if so how, sleeping after learning a new word contributes to this perceptual effect. Knowing the answer to this question is important because a good proportion of poor readers have difficulties forming stable word representations in memory, which can affect both their reading and spelling.

Experiment 1 relied on a Reicher-Wheeler situation to look at whether overnight sleep strengthens the competitive influence of a new orthographic neighbour (e.g., 'alarchy' for 'anarchy'). We showed that letter identification is better for trained pseudowords, but correspondingly worse for their untrained base words, whether sleep is immediate or delayed by 12 hr compared to no-sleep conditions. To neutralize this sleep-enhanced attraction toward newly learnt strings, Experiment 2 taught participants pairs of lookalikes (e.g., 'alarchy'/'afarchy') and then pitted them against each other. We showed that sleep genuinely boosts the ability of the newly learnt strings to support letter identification: at test, 12-hr-old items show a larger word superiority effect compared to 0-hr-old items, but only after a retention interval including sleep. At retest, 24 hr later, items slept on for the first time show increased facilitation compared to pre-sleep levels, whereas items slept on twice show a cumulative effect compared to what was achieved after just one night. In sum, sleep contributes to the word superiority effect in two ways: (a) it makes newly learnt letter strings more likely to attract identification choices, to the detriment of long-consolidated words, possibly because new declarative memories often show increased accessibility by the next day; and (b) it boosts perceptual learning of orthographic information, with the result that, by the next day, lexical distinctions are easier.
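
For readers unfamiliar with the paradigm, the sketch below illustrates the forced-choice logic of a Reicher-Wheeler trial: the flashed string is probed at the single position where the base word and its lookalike differ, and both response alternatives complete an equally plausible string, so lexical guessing alone cannot solve the trial. This is a minimal Python illustration; the item pair, helper names, and trial structure shown here are ours for exposition, not the actual experiment scripts.

from dataclasses import dataclass
import random

@dataclass
class ReicherWheelerTrial:
    # One two-alternative forced-choice trial: the target string is flashed
    # briefly and masked; the participant then reports which of two letters
    # appeared at the probed position.
    target: str           # string actually flashed, e.g. 'anarchy'
    position: int         # 0-based index of the probed letter
    alternatives: tuple   # the two letters offered at that position

    def correct_answer(self) -> str:
        return self.target[self.position]

def make_trial(base: str, lookalike: str) -> ReicherWheelerTrial:
    # Build a trial from a base word and a trained one-letter lookalike.
    diff = [i for i, (a, b) in enumerate(zip(base, lookalike)) if a != b]
    assert len(diff) == 1, "items must differ at exactly one position"
    pos = diff[0]
    target = random.choice([base, lookalike])   # flash one of the two strings
    return ReicherWheelerTrial(target, pos, (base[pos], lookalike[pos]))

# Hypothetical item pair: an existing base word and its trained neighbour.
trial = make_trial("anarchy", "alarchy")
print(trial.target, trial.position, trial.alternatives, trial.correct_answer())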

These findings are reported in a research article currently under review at Journal of Experimental Psychology: General (Impact Factor: 4.9): Dumay, N., & Nash, A. (submitted). Sleep enhances the perceptual representation of new written words: Evidence for overnight cumulative learning in the word superiority effect.

They were also the object of several conference talks:
- Dumay, N. (2022). Sleep Supports the Word-Superiority Effect by Enhancing Perceptual Details of Newly Learnt Letter Strings. Abstracts of the 63rd Annual Meeting of the Psychonomic Society (p. 113), Boston MA.

- Dumay, N., Bollaert, M., John, J., Lea, I., Pouwels, I., & Wrightson, O. Is James Cattell just asleep? Sleep boosts the word-superiority effect by enhancing perceptual details of newly learnt letter strings. Abstracts of the 22nd Conference of the European Society for Cognitive Psychology (p. 42), Lille, France.

- Dumay, N., & Nash, A. (2019). Sleep makes perceptual memories more accessible: Evidence from Reicher-Wheeler. Abstracts of the 60th Annual Meeting of the Psychonomic Society (p. 65), Montreal, Canada. [session chair]

Outcome 2: Perceptual boundaries between speech sounds continuously fluctuate depending on recent listening experience and slowly return to their average position over time; only hearing the same voice again can undo these changes immediately.

Over the course of a lifetime, adults develop perceptual categories for the vowels and consonants in their native language, based on the distribution of those sounds in their environment. However, in any given listening situation, the short-term distribution of sounds can cause changes in this long-term categorization. For example, if the same sound (the "adaptor") is heard many times in a short period of time, listeners adapt and become less prone to hearing that sound. Although hundreds of speech selective adaptation experiments have been published, there is almost no information about how long this adaptation lasts. Using stimuli chosen to produce very large initial adaptation, we test adaptation effects with essentially no delay, and with delays of 25 min, 90 min, and 5.5 hr; these tests probe the duration of adaptation both in the (single) ear to which the adaptor was presented, and in the opposite ear. Reliable adaptation remains 5.5 hr after exposure in the same-ear condition, whereas it is undetectable at 90 min in the opposite ear. Surprisingly, the amount of residual adaptation is largely unaffected by whether the listener is exposed to speech between adaptation and test, except if this speech is actually produced by the same speaker. This result shows that short-term fluctuations in the boundaries between speech sound categories are speaker-dependent. Whether sleep clears our ears faster than time does will be reported in the final report.
These findings were published in a journal article (Samuel & Dumay, 2021, Journal of Experimental Psychology: Human Perception and Performance) and presented as a talk at the following international conference:
- Samuel, A.G., Yi, Z., & Dumay, N. (2021). Selective adaptation and lexically driven recalibration: Two phonetic boundary adjustment processes with very different recovery times. Abstracts of the 62nd Annual Meeting of the Psychonomic Society, a virtual conference (p. 58).

Outcome 3: Sleep dictates the fate of sublexical plasticity triggered by speech exposure. When speech input is consistent with a given lexical or sublexical representation, that representation's activation increases, with recognition dependent on reaching some absolute or relative activation level. After a representation has been activated during perception, it remains more sensitive to similar subsequent input for some period of time.
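
A toy numerical sketch can make this idea concrete. The decay rate, threshold, and time units below are arbitrary assumptions chosen purely for illustration; this is not the model tested in these experiments, only the logic that residual activation shortens the path back to recognition, with a benefit that shrinks as the delay grows.

import math

THRESHOLD = 1.0     # arbitrary activation level counted as "recognised"
DECAY_RATE = 0.2    # arbitrary exponential decay per unit of wake time

def residual_activation(initial: float, delay: float) -> float:
    # Activation left over after a delay spent awake (toy exponential decay).
    return initial * math.exp(-DECAY_RATE * delay)

def input_needed(residual: float, gain_per_unit_input: float = 1.0) -> float:
    # Amount of consistent input needed to push the representation back
    # up to the recognition threshold, given its residual activation.
    return max(0.0, THRESHOLD - residual) / gain_per_unit_input

# A representation driven to threshold by an initial exposure retains some
# residual activation later on, so similar input reaches threshold faster
# than it would from a cold start.
for delay in (0.25, 2.0, 12.0):
    r = residual_activation(THRESHOLD, delay)
    print(f"delay={delay:>5}: residual={r:.2f}, input needed={input_needed(r):.2f}")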

A series of four experiments (total N = 341) investigated the time course of this increased sensitivity, with sleep as a potential moderating factor. The experiments show that the sensitivity of sublexical representations remains elevated for at least 15-20 minutes, but less than 12 hours, if the listener is awake during this time; if instead sleep follows the activation of sublexical representations, the increase in sensitivity continues for at least 12 hours. Elevated sensitivity of sublexical representations facilitates processing of other pseudowords that contain these sounds. In contrast, active lexical representations impair processing of pseudowords that differ by only one segment. Such impairment was observed after delays of 12 hours, with or without sleep, showing that the increased sensitivity of lexical representations is longer-lasting. Our findings demonstrate that sublexical and lexical representations, once engaged during speech perception, become more sensitive to subsequent similar input, with the durability of this sensitivity differing as a function of lexical status and of wake/sleep after the initial perceptual exposure.

They are reported in a manuscript submitted to Cognition (Impact Factor: 3.6): Samuel, A.G., & Dumay, N. (submitted). Does sleep dictate the fate of sublexical and lexical plasticity triggered by speech exposure?
They were also the object of two conference talks:
- Samuel, A.G., & Dumay, N. (2023). How Active Are Sublexical and Lexical Representations 12 Hours After They Have Been Used to Understand Speech? Abstracts of the 23rd Conference of the European Society for Cognitive Psychology (p. 116), Porto, Portugal.

- Samuel, A.G., & Dumay, N. (2022). How Active Are Sublexical and Lexical Representations 12 Hours After They Have Been Used to Understand Speech? Abstracts of the 63rd Annual Meeting of the Psychonomic Society (p. 100), Boston MA.
Exploitation Route Outcome 1: The fact that sleep plays an instrumental role in turning new objects into holistic representations able to support identification of their parts has implications for reading acquisition and language learning. By extension, this sleep-associated unitization or binding process also has implications for any perceptual learning domain in which objects (e.g., bodies, faces, artwork, melodies, etc.) have to be distinguished. In that respect, our demonstration that sleep tightens the connections between mental objects and their parts may be a first.

Outcome 2: The fact that speech exposure leads to temporary fluctuations in perceptual category boundaries, which last for a few hours and which only the same voice can undo, has implications for language learning protocols.

Outcome 3: The fact that sleep allows speech units to remain sensitized after they have been used to perceive language demonstrates the importance of sleeping in order to promote procedural learning. This has obvious ramifications for learning and generalization of implicit knowledge, whether in perception or motor learning.
Sectors Education

Healthcare

 
Description Exploring the structure of the reading system via word learning 
Organisation Ohio State University
Country United States 
Sector Academic/University 
PI Contribution I am a full contributor to this project, resumed earlier this academic year, in which we (Blair Armstrong and Dennis Miller, from the Toronto Psychology Dept, Mark Pitt, from the Ohio State Psychology Dept, and myself) study how learning to read aloud proceeds, by means of neural network simulations and experiments on human participants. I contribute intellectually to developing research protocols, analysing results and writing up research findings. In addition, students and research interns in my lab are running the experiments on human participants under my supervision. After a first strong publication in Journal of Experimental Psychology: General (5-yr Impact Factor: 5.3) in 2017 (Armstrong, B. C., Dumay, N., Kim, W., & Pitt, M. A. (2017). Generalization from newly learned words reveals structural properties of the human reading system. Journal of Experimental Psychology: General, 146(2), 227-249. https://doi.org/10.1037/xge0000257), published before the project went dormant for 2 years, we will be presenting our new simulation results at the Annual Conference of the Cognitive Science Society, publishing a 6-page peer-reviewed article in its proceedings. This paper is currently under review (see the abstract below). The submitted data will be included in a larger paper combining human and computer data.

Abstract: How do neural network models of quasiregular domains, such as spelling-sound correspondences in English, learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work proposed that a graded "warping" mechanism allows for the implicit representation of how a new word's pronunciation should generalize when it is first learned. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), comprised a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). We also "diluted" these pronunciations, such that we presented either one or multiple made-up words that shared the same rhyme, increasing context variability. We observed that dilution promoted generalization of novel pronunciations. These results point to the importance of context variability in modulating warping in quasiregular domains.
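
As an illustration of the "dilution" (context variability) manipulation described in the abstract above, the toy Python sketch below shows how a novel rime pronunciation can be assigned either to a single made-up word or to several made-up words sharing the same rhyme. The items, onsets, and pronunciation code are invented for exposition; this is not the training corpus or network used in the actual simulations.

def make_items(onsets, rime_spelling, rime_pronunciation):
    # Pair each onset with the shared rime to create spelling-to-sound
    # training items (spelling, taught pronunciation code).
    return [(onset + rime_spelling, onset + rime_pronunciation)
            for onset in onsets]

# Undiluted: a novel pronunciation code for the rime '-ead' is taught
# through a single made-up word (low context variability).
undiluted = make_items(["sn"], "ead", "EED")

# Diluted: the same novel pronunciation is shared by several made-up words
# that rhyme with one another (higher context variability).
diluted = make_items(["sn", "fl", "dr", "pl"], "ead", "EED")

print(undiluted)   # [('snead', 'snEED')]
print(diluted)     # [('snead', 'snEED'), ('flead', 'flEED'), ...]
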
Collaborator Contribution Postdoctoral research assistant Dennis Miller is the computer simulation wizard in this project and works under the supervision of Blair Armstrong in Toronto. The latter is also in charge of organizing several studies on human participants to be carried out at the University of Toronto. Mark Pitt contributes intellectually to the project, and like all of us is involved at the write-up stage.
Impact - Miller, I. D., Dumay, N., Pitt, M.A., Lam, B., & Armstrong, B.C. (Under revision). Context variability promotes generalization in reading aloud: Insight from a neural network simulation. To appear in Proceedings of the Annual Conference of the Cognitive Science Society.
Start Year 2019
 
Description Exploring the structure of the reading system via word learning 
Organisation University of Toronto
Country Canada 
Sector Academic/University 
PI Contribution I am a full contributor to this project, resumed earlier this academic year, in which we (Blair Armstrong and Dennis Miller, from the Toronto Psychology Dept, Mark Pitt, from the Ohio State Psychology Dept, and myself) study how learning to read aloud proceeds, by means of neural network simulations and experiments on human participants. I contribute intellectually to developing research protocols, analysing results and writing up research findings. In addition, students and research interns in my lab are running the experiments on human participants under my supervision. After a first strong publication in Journal of Experimental Psychology: General (5-yr Impact Factor: 5.3) in 2017 (Armstrong, B. C., Dumay, N., Kim, W., & Pitt, M. A. (2017). Generalization from newly learned words reveals structural properties of the human reading system. Journal of Experimental Psychology: General, 146(2), 227-249. https://doi.org/10.1037/xge0000257), published before the project went dormant for 2 years, we will be presenting our new simulation results at the Annual Conference of the Cognitive Science Society, publishing a 6-page peer-reviewed article in its proceedings. This paper is currently under review (see the abstract below). The submitted data will be included in a larger paper combining human and computer data.

Abstract: How do neural network models of quasiregular domains, such as spelling-sound correspondences in English, learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work proposed that a graded "warping" mechanism allows for the implicit representation of how a new word's pronunciation should generalize when it is first learned. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), comprised a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). We also "diluted" these pronunciations, such that we presented either one or multiple made-up words that shared the same rhyme, increasing context variability. We observed that dilution promoted generalization of novel pronunciations. These results point to the importance of context variability in modulating warping in quasiregular domains.
Collaborator Contribution Postdoctoral research assistant Dennis Miller is the computer simulation wizard in this project and works under the supervision of Blair Armstrong in Toronto. The latter is also in charge of organizing several studies on human participants to be carried out at the University of Toronto. Mark Pitt contributes intellectually to the project, and like all of us is involved at the write-up stage.
Impact - Miller, I. D., Dumay, N., Pitt, M.A., Lam, B., & Armstrong, B.C. (Under revision). Context variability promotes generalization in reading aloud: Insight from a neural network simulation. To appear in Proceedings of the Annual Conference of the Cognitive Science Society.
Start Year 2019
 
Description Individual differences in plasticity in speech perception 
Organisation Korea Aerospace University
Country Korea, Republic of 
Sector Academic/University 
PI Contribution I am the main investigator of this project in which we (Donghyun Kim from the University of Exeter, Meghan Clayards from McGill University, and Eun Jong Kong from Korea Aerospace University) study how listeners flexibly adapt to unfamiliar speech patterns such as foreign accents. In this project, I have been in charge of conceptualizing research goals, designing experiments, data collection, formal analysis, writing the original draft, and revisions. This paper has been revised and resubmitted to a journal and is now under review.

Abstract: The present study examines whether listeners flexibly adapt to unfamiliar speech patterns such as those encountered in foreign-accented English vowels, where the relative informativeness of primary (spectral quality) and secondary (duration) cues tends to be reversed (e.g., spectrally similar but exaggerated duration differences between bet and bat). This study further tests whether listeners' adaptive strategies are related to individual differences in phoneme categorization gradiency and cognitive abilities. Native English listeners (N=36) listened to a continuum of vowels from /ɛ/ to /æ/ (as in head and had) varying in spectral and duration values to complete a perceptual adaptation task and a visual analog scaling (VAS) task. Participants also completed cognitive tasks examining executive function capacities. Results showed that listeners mostly used spectral quality to signal vowel category at baseline, but flexibly adapted by up-weighting reliance on duration when spectral quality was no longer diagnostic. In the VAS task, some listeners made more categorical responses while others made more gradient responses in vowel categorization, but these differences were not linked to their adaptive patterns. Results of cognitive tasks revealed that individual differences in inhibitory control correlated, to some degree, with the amount of adaptation. Together, these findings suggest that listeners flexibly adapt to unfamiliar speech categories using distributional information in the input, and that individual differences in cognitive abilities may influence their adaptability.
Collaborator Contribution Meghan Clayards contributes to development of methodology, discussions of results, and revisions to different versions of the manuscripts. Eun Jong Kong also contributes to discussions and revisions to different versions of the manuscripts.
Impact Kim, D., Clayards, M., & Kong, E. J. (revised and resubmitted). Individual differences in perceptual adaptation to unfamiliar phonetic categories. Journal of Phonetics.
Start Year 2018
 
Description Individual differences in plasticity in speech perception 
Organisation McGill University
Country Canada 
Sector Academic/University 
PI Contribution I am the main investigator of this project in which we (Donghyun Kim from the University of Exeter, Meghan Clayards from McGill University, and Eun Jong Kong from Korea Aerospace University) study how listeners flexibly adapt to unfamiliar speech patterns such as foreign accents. In this project, I have been in charge of conceptualizing research goals, designing experiments, data collection, formal analysis, writing the original draft, and revisions. This paper has been revised and resubmitted to a journal and is now under review.

Abstract: The present study examines whether listeners flexibly adapt to unfamiliar speech patterns such as those encountered in foreign-accented English vowels, where the relative informativeness of primary (spectral quality) and secondary (duration) cues tends to be reversed (e.g., spectrally similar but exaggerated duration differences between bet and bat). This study further tests whether listeners' adaptive strategies are related to individual differences in phoneme categorization gradiency and cognitive abilities. Native English listeners (N=36) listened to a continuum of vowels from /ɛ/ to /æ/ (as in head and had) varying in spectral and duration values to complete a perceptual adaptation task and a visual analog scaling (VAS) task. Participants also completed cognitive tasks examining executive function capacities. Results showed that listeners mostly used spectral quality to signal vowel category at baseline, but flexibly adapted by up-weighting reliance on duration when spectral quality was no longer diagnostic. In the VAS task, some listeners made more categorical responses while others made more gradient responses in vowel categorization, but these differences were not linked to their adaptive patterns. Results of cognitive tasks revealed that individual differences in inhibitory control correlated, to some degree, with the amount of adaptation. Together, these findings suggest that listeners flexibly adapt to unfamiliar speech categories using distributional information in the input, and that individual differences in cognitive abilities may influence their adaptability.
Collaborator Contribution Meghan Clayards contributes to development of methodology, discussions of results, and revisions to different versions of the manuscripts. Eun Jong Kong also contributes to discussions and revisions to different versions of the manuscripts.
Impact Kim, D., Clayards, M., & Kong, E. J. (revised and resubmitted). Individual differences in perceptual adaptation to unfamiliar phonetic categories. Journal of Phonetics.
Start Year 2018
 
Description Starting-up Sleep Polysomnography and Cognition Research at Exeter University 
Organisation University of Exeter
Department School of Psychology
Country United Kingdom 
Sector Academic/University 
PI Contribution Over the last two months, PI Nicolas Dumay has been collaborating with newly appointed colleague Dr Lawrence Wong to seek funding to acquire and set up a one-bed sleep research lab in the Exeter Psychology Department. The PI helped frame the application to a Strategic Development Fund (£25k) internal to the College under which our department sits, and coordinated with the HoD to find space for the lab. We recently received the funding and have just secured the space on campus and purchased the equipment.
Collaborator Contribution Partner/colleague Dr Lawrence Wong has dealt with the technical details and will be in charge of heading the lab once built.
Impact NA
Start Year 2022
 
Description Organizing a research symposium at the Biennial Conference of the European Society for Cognitive Psychology 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact This symposium proposal, the abstract of which is pasted below, has been accepted as part of the program of the next Conference of the European Society for Cognitive Psychology (Porto, September 2023). It includes 5 speakers (3 males/2 females) from the US, Spain, the UK, and Canada. The typical audience at the conference (circa 2,500 individuals) includes researchers and academics, as well as postgraduate students. As several sessions are run in parallel, I expect an audience ranging between 100 and 500 people. The symposium has dedicated time for interactions between presenters and the audience.

The research that Prof Samuel will be presenting and mine are both directly funded by this award.

SLEEP AND THE CONSOLIDATION AND UPDATING OF LINGUISTIC KNOWLEDGE
Organizer: Nicolas Dumay, University of Exeter, UK

The notion that sleep and memory consolidation play a key role in language learning and processing has been around for at least two decades. This symposium aims to provide an overview of what we know and do not know, identify current directions in the field, and generate new ideas and ways to solve points of contention. D. Titone opens the ball by evaluating memory models in light of the literature on word acquisition in the native and non-native language. She also shows how prior knowledge and word properties together determine post-sleep memory. A.G. Samuel looks at the persistence of activation in lexical and sublexical representations, and whether sleep has an impact on these long-lasting by-products of perception. N. Dumay examines the influence of sleep on subphonemic mismatch effects in the visual-world paradigm and explores the idea that these index both sublexical plasticity and lexical learning. A. Takashima and C. Ekerdt look at the brain structures underpinning systems-consolidation of spoken words, from a developmental perspective. Finally, G. Gaskell reports on semantic priming and sentence memory experiments and argues that sleep plays a role also in supporting the maintenance and updating of linguistic knowledge.

Keywords: language plasticity, word learning, sleep, memory consolidation, bilingualism
Year(s) Of Engagement Activity 2023
URL https://escop2023.org/program/symposia