Topology and dynamics of attractor neural networks: The role of loopiness

  • Pan Zhang
  • Yong Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We derive an exact representation of the topological effect on the dynamics of sequence-processing neural networks within signal-to-noise analysis. A new network structure parameter, the loopiness coefficient, is introduced to quantitatively study the effect of loops on network dynamics: a large loopiness coefficient means a high probability of finding loops in the network. We develop recursive equations for the overlap parameters of neural networks in terms of their loopiness. We find that a large loopiness increases the correlation among the network states at different times and eventually reduces the performance of the neural networks. The theory is applied to several network topologies, including fully connected, densely connected random, densely connected regular, and densely connected small-world networks, and encouraging results are obtained.
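The abstract does not give the exact definition of the loopiness coefficient, but the idea it describes, measuring how likely short loops are in a network, can be illustrated with a simple proxy: the fraction of connected triples that close into a length-3 loop (the graph's transitivity). The sketch below, a hypothetical illustration rather than the paper's actual formula, compares this proxy for a densely connected regular ring lattice and a random graph of the same average degree; the names `ring_lattice`, `random_graph`, and `triangle_loopiness` are our own.

```python
import random

def random_graph(n, p, seed=0):
    """Undirected Erdos-Renyi graph on n nodes as an adjacency matrix."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A

def ring_lattice(n, k):
    """Regular ring lattice: each node linked to its k nearest
    neighbours on each side (degree 2k)."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            A[i][j] = A[j][i] = 1
    return A

def triangle_loopiness(A):
    """Illustrative loopiness proxy: the fraction of connected triples
    (i--j, i--k) that close into a length-3 loop (j--k present).
    This is the standard graph transitivity, NOT necessarily the
    loopiness coefficient defined in the paper."""
    n = len(A)
    closed = 0
    triples = 0
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                triples += 1
                if A[nbrs[a]][nbrs[b]]:
                    closed += 1
    return closed / triples if triples else 0.0
```

Under this proxy a regular lattice is far "loopier" than a random graph of equal average degree (for a ring lattice with k = 2 the value is exactly 3(k-1)/(2(2k-1)) = 0.5, versus roughly p for the random graph), which matches the intuition in the abstract that topology alone, at fixed connectivity, changes the loop statistics driving the dynamics.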

Original language: English
Pages (from-to): 4411-4416
Number of pages: 6
Journal: Physica A: Statistical Mechanics and its Applications
Volume: 387
Issue number: 16-17
DOIs
State: Published - 1 Jul 2008
Externally published: Yes

Keywords

  • Complex networks
  • Loopiness
  • Neural networks
  • Probability theory

