Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N² pairwise interactions in an RNN with N neurons to embed L manifolds of dimension D ≪ N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ε|^{-D}, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
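The capacity-resolution trade-off stated above can be illustrated numerically. The sketch below assumes an unspecified order-one prefactor `c` (the abstract only gives the scaling law, not the constant) and simply evaluates L/N ∼ |log ε|^{-D} for a few values of the resolution ε and manifold dimension D:

```python
import math

def capacity(eps, D, c=1.0):
    """Scaling of the maximal ratio L/N with the positional error eps
    and manifold dimension D, as stated in the abstract.
    The prefactor c is a hypothetical placeholder, not given in the text."""
    return c * abs(math.log(eps)) ** (-D)

# Demanding higher spatial resolution (smaller eps) lowers the capacity,
# and the drop is steeper for higher-dimensional manifolds (larger D):
for D in (1, 2, 3):
    print(f"D={D}: eps=1e-2 -> {capacity(1e-2, D):.4f}, "
          f"eps=1e-4 -> {capacity(1e-4, D):.4f}")
```

Since |log ε| > 1 for any ε well below 1, the capacity decreases monotonically both as ε shrinks and as D grows, which is the trade-off the abstract describes.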