Off-line periods during training mitigate ‘catastrophic forgetting’ in computing systems – ScienceDaily


Depending on age, a person needs 7 to 13 hours of sleep every 24 hours. During this time, a lot happens: heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so the brain.

“The brain is very busy when we sleep, replaying what we learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and sleep researcher at the University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previously published work, Bazhenov and colleagues reported how sleep builds relational memory, the ability to recall arbitrary or indirect associations between objects, people, or events, and protects against forgetting old memories.

Artificial neural networks take advantage of the structure of the human brain to improve many technologies and systems, from basic science and medicine to finance and social media. In some ways, they achieve superhuman performance, such as computational speed, but they fail in one key aspect: when artificial neural networks learn sequentially, new information replaces previous information, a phenomenon called catastrophic forgetting.
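Catastrophic forgetting can be reproduced in a few lines of code. The sketch below is purely illustrative (a toy logistic-regression learner, not the authors' spiking model): the model masters task A, then is trained only on task B, and because the new gradients overwrite the shared parameters, task-A performance collapses to chance.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, x, y, epochs=500, lr=0.1):
    # Plain full-batch gradient descent for 1-D logistic regression.
    for _ in range(epochs):
        err = sigmoid(w * x + b) - y
        w -= lr * np.mean(err * x)
        b -= lr * np.mean(err)
    return w, b

def accuracy(w, b, x, y):
    return np.mean(((w * x + b) > 0) == y)

# Task A: classes lie on either side of x = 0.
xa = np.concatenate([np.linspace(-1.5, -0.5, 50), np.linspace(0.5, 1.5, 50)])
ya = np.concatenate([np.zeros(50), np.ones(50)])
# Task B: classes lie on either side of x = 4, so the boundary must move.
xb = np.concatenate([np.linspace(2.5, 3.5, 50), np.linspace(4.5, 5.5, 50)])
yb = np.concatenate([np.zeros(50), np.ones(50)])

w, b = train(0.0, 0.0, xa, ya)
acc_a_before = accuracy(w, b, xa, ya)   # ~1.0: task A learned

w, b = train(w, b, xb, yb)              # sequential training, no replay
acc_b_after = accuracy(w, b, xb, yb)    # ~1.0: task B learned...
acc_a_after = accuracy(w, b, xa, ya)    # ...but task A falls to ~0.5

print(acc_a_before, acc_b_after, acc_a_after)
```

Because the single decision boundary shifts from 0 to 4 to fit task B, every old task-A example ends up on one side of it, and nothing in the training signal penalizes that loss.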

“In contrast, the human brain is constantly learning and integrating new data into existing knowledge, and it usually learns best when new training overlaps with periods of sleep to strengthen memory,” Bazhenov said.

Writing in the November 18, 2022 issue of PLOS Computational Biology, lead author Bazhenov and colleagues discuss how biological models can help mitigate the threat of catastrophic forgetting in artificial neural networks, increasing their usefulness across a range of research interests.

The scientists used spiking neural networks that artificially mimic natural neural systems: instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain points in time.

They found that when the spiking networks were trained on a new task, but with occasional offline periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, the study authors said, “sleep” for the networks allowed them to replay old memories without explicitly using old training data.
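The paper's offline phases rely on spontaneous spiking and local plasticity; as a much simpler, non-spiking analogy (an illustrative sketch, not the authors' method), interleaving replayed old-task patterns into new training preserves the old task in a toy learner, while purely sequential training with weight decay lets the old trace fade:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_task(center, n=100, noise=0.3):
    # Two Gaussian blobs: -center (label 0) and +center (label 1).
    c = np.array(center)
    X = np.vstack([rng.normal(-c, noise, (n, 2)), rng.normal(c, noise, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, epochs=500, lr=0.1, decay=0.1):
    # Logistic regression with L2 decay on w (not b); the decay makes
    # weights that the current task does not use fade away, a crude
    # stand-in for new learning overwriting old synaptic traces.
    for _ in range(epochs):
        err = sigmoid(X @ w + b) - y
        w -= lr * (X.T @ err / len(y) + decay * w)
        b -= lr * np.mean(err)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

task_a = make_task([2.0, 0.0])   # discriminate along the x-axis
task_b = make_task([0.0, 2.0])   # discriminate along the y-axis

# Sequential training: task B erodes the weight that encoded task A.
w, b = train(np.zeros(2), 0.0, *task_a)
w, b = train(w, b, *task_b)
acc_a_seq = accuracy(w, b, *task_a)       # near chance

# "Replay" variant: task-B training interleaved with replayed
# task-A patterns in every update.
w, b = train(np.zeros(2), 0.0, *task_a)
Xr = np.vstack([task_a[0], task_b[0]])
yr = np.concatenate([task_a[1], task_b[1]])
w, b = train(w, b, Xr, yr)
acc_a_replay = accuracy(w, b, *task_a)    # stays high
acc_b_replay = accuracy(w, b, *task_b)    # new task also learned

print(acc_a_seq, acc_a_replay, acc_b_replay)
```

Here the replayed patterns happen to be the stored task-A data; the point of the study is that a sleep-like phase can achieve a similar effect from the network's own spontaneous activity, with no stored data at all.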

Memories are represented in the human brain by patterns of synaptic weights, the strength or amplitude of the connections between neurons.

Bazhenov said: “When we learn new information, neurons fire in a specific order and this strengthens the synapses between them. During sleep, the spiking patterns learned during the waking state are repeated spontaneously. This is called reactivation or replay.”

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance the synaptic weight patterns that represent a memory, helping to prevent forgetting or to enable the transfer of knowledge from old tasks to new tasks.”
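The reactivation-and-reinforcement idea described in the two quotes above has a classical counterpart in attractor networks. The sketch below is a Hopfield-style toy (not the authors' spiking model): a pattern is stored in Hebbian weights, noisy spontaneous activity settles back into that stored pattern, and a further Hebbian update during this "replay" strengthens the same weights.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20
pattern = rng.choice([-1.0, 1.0], size=n)     # a stored "memory"
W = np.outer(pattern, pattern) - np.eye(n)    # Hebbian weights, no self-connections

# "Sleep": spontaneous activity is a corrupted version of the memory...
cue = pattern.copy()
flip = rng.choice(n, size=3, replace=False)
cue[flip] *= -1                               # 3 of 20 units start out wrong

# ...which the recurrent dynamics pull back to the stored attractor
# (reactivation); a Hebbian update then reinforces those same weights.
state = np.sign(W @ cue)
W += 0.1 * (np.outer(state, state) - np.eye(n))   # plasticity during "sleep"

print(np.array_equal(state, pattern))  # True: the old memory is reactivated
```

The key property is that no stored copy of the original training data is consulted: the weight structure itself regenerates the old activity pattern, which is what lets replay protect the memory.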

When Bazhenov and his colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.

“This means that these networks can learn continuously, like humans or animals,” Bazhenov said. “Understanding how the human brain processes information during sleep may also help to augment memory in humans. Boosting sleep rhythms can lead to better memory.”

“In other projects, we use computational models to develop optimal strategies for applying stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is not optimal, such as when memory declines in aging or in some conditions such as Alzheimer’s disease.”

Co-authors are: Ryan Golden and Jean Erik Delanois, both at UC San Diego; and Pavel Sanda, Institute of Computer Science of the Czech Academy of Sciences.


