Spending time doing nothing helps artificial neural networks learn faster



Humans require 7 to 13 hours of sleep every night, depending on their age. During this time, hormone levels shift, the body relaxes, and heart rate, respiration, and metabolism vary. On the surface, not much seems to happen in the brain.

Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at the University of California San Diego School of Medicine, said that the brain is in fact quite busy when we sleep, replaying what we have learned during the day. Sleep helps reorganize memories and store them in their most efficient form.

In earlier published work, Bazhenov and colleagues described how sleep builds rational memory, the capacity to remember arbitrary or indirect associations between objects, people, or events, and protects against forgetting old memories.

Artificial neural networks draw on the architecture of the human brain to improve a wide range of technologies and systems, from fundamental research and medicine to finance and social media. In some areas, such as computing speed, they have surpassed human performance, but they fall short in one crucial respect: when artificial neural networks learn sequentially, new information overwrites existing knowledge, a phenomenon known as catastrophic forgetting.

By contrast, according to Bazhenov, the human brain learns continuously and integrates new information into existing knowledge, and it typically learns best when new instruction is interleaved with periods of sleep that consolidate memory.

Senior author Bazhenov and colleagues describe how biologically inspired models may help reduce the risk of catastrophic forgetting in artificial neural networks, increasing their usefulness across a spectrum of research interests. Their article appeared in PLOS Computational Biology on November 18, 2022.

The researchers employed spiking neural networks, which mimic natural neural systems: instead of information being communicated continuously, it is transmitted as discrete events (spikes) at particular points in time.
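As a toy illustration, the leaky integrate-and-fire neuron is one of the simplest spiking-neuron models (a minimal sketch, not necessarily the specific model used in the paper): it accumulates input while its membrane potential leaks away, and when the potential crosses a threshold it emits a discrete spike and resets.

```python
def simulate_lif(currents, threshold=1.0, decay=0.9):
    """Simulate a single leaky integrate-and-fire neuron.

    currents: sequence of input currents, one per time step.
    Returns a list of 0/1 spike indicators, one per step.
    """
    v = 0.0                        # membrane potential
    spikes = []
    for i in currents:
        v = decay * v + i          # leaky integration of the input
        if v >= threshold:         # potential crosses the firing threshold
            spikes.append(1)       # emit a discrete spike event
            v = 0.0                # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive produces periodic spiking; silence stays silent.
print(simulate_lif([0.6] * 6))    # → [0, 1, 0, 1, 0, 1]
print(simulate_lif([0.0] * 5))    # → [0, 0, 0, 0, 0]
```

The 0/1 spike train, rather than a continuous activation value, is what makes such networks "spiking" and closer to biological neurons.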

They found that when the spiking networks were trained on a new task with occasional offline periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, the study authors said, the networks could replay old memories while "sleeping," without explicitly requiring the old training data.
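The benefit of interleaving can be sketched with a deliberately tiny, hypothetical example (not the paper's method): a single parameter trained by gradient steps toward task-specific targets. Trained sequentially, the second task erases the first; interleaving updates for both tasks settles on a compromise. The paper's key point is that sleep-like replay achieves a similar interleaving effect without storing the old training data.

```python
def train(w, targets, lr=0.5, steps=50):
    """Repeated gradient steps on squared error (t - w)^2 / 2 per target."""
    for _ in range(steps):
        for t in targets:
            w += lr * (t - w)       # pull w toward each target in turn
    return w

w = train(0.0, [1.0])               # learn task A (target 1.0); w ≈ 1.0
w_seq = train(w, [-1.0])            # then task B alone: A is erased, w → -1.0
w_mix = train(w, [-1.0, 1.0])       # task B interleaved with replay of A
print(round(w_seq, 2), round(w_mix, 2))   # → -1.0 0.33
```

Sequential training converges to task B's target and forgets A entirely, while interleaving converges to a fixed point (here 1/3) that retains information about both tasks.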

In the human brain, memories are represented by patterns of synaptic weight, the strength or amplitude of the connection between two neurons.

According to Bazhenov, as we learn new information, neurons fire in a specific order, and this increases the synapses between them. During sleep, the spiking patterns we learned while awake are repeated spontaneously, a process called reactivation or replay.

Synaptic plasticity, the capacity of synapses to be altered or molded, remains in place during sleep, and it can further strengthen the synaptic weight patterns that represent a memory, helping to prevent forgetting or to enable the transfer of knowledge from old tasks to new ones.
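A minimal sketch of Hebbian strengthening during replay, assuming a toy weight matrix and a stored binary spike pattern (the function name and parameters are illustrative, not from the paper): each replay step increases the weights between co-active neurons, reinforcing the pattern that encodes the memory.

```python
import numpy as np

def hebbian_replay(w, pattern, lr=0.1, steps=20):
    """Strengthen the synapses of a replayed spike pattern.

    w: square synaptic weight matrix; pattern: 0/1 vector marking the
    neurons that fire together during replay.
    """
    for _ in range(steps):
        pre = pattern[:, None]      # presynaptic activity (column vector)
        post = pattern[None, :]     # postsynaptic activity (row vector)
        w += lr * pre * post        # Hebb's rule: fire together, wire together
    return w

w = np.zeros((4, 4))
pattern = np.array([1.0, 1.0, 0.0, 0.0])   # neurons 0 and 1 replay together
w = hebbian_replay(w, pattern)
# Weights among the replayed pair grow; all other weights stay at zero.
# (A fuller model would also exclude self-connections on the diagonal.)
```

In this picture, "sleep" gives the network extra plasticity steps that reinforce old weight patterns, which is how replay can protect a memory from being overwritten.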

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it prevented catastrophic forgetting.

This implies that these networks could keep learning over time, much like humans or animals. A better understanding of how the brain processes information during sleep could also help augment memory in people: improving sleep patterns may lead to better memory.

In other studies, Bazhenov's team employs computational tools to develop optimal strategies for applying stimulation during sleep, such as audio tones that reinforce sleep rhythms and promote learning. This may be especially important when memory is not functioning at its best, for example when it declines with age or in medical conditions such as Alzheimer's disease.


Golden R, Delanois JE, Sanda P, Bazhenov M (2022) Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. PLoS Comput Biol 18(11): e1010628. https://doi.org/10.1371/journal.pcbi.1010628

