The capacity of the dense associative memory networks

Han Bao, Richong Zhang*, Yongyi Mao
*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper revisits dense associative memory (DAM) networks and rigorously studies their capacity. We present a capacity theorem for DAM networks under a given attraction radius, i.e., a noise level of the probes relative to the stored messages, and prove that a probe converges to the targeted message after a single update step. Under this convergence, the capacity of DAM networks lies between a lower bound and an upper bound. For the noiseless case, where the attraction radius is 0, previous literature provides an approximate result, but without a rigorous proof. In addition, we consider a more general notion of capacity that allows the retrieval of messages from noisy probes (a nonzero attraction radius). We demonstrate that convergence after a single update step also holds when the probe is a version of a message corrupted by Gaussian noise. We further provide simulation experiments to validate the theorems herein.
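The one-step retrieval the abstract refers to can be illustrated with a small simulation. Below is a minimal sketch of a Krotov-Hopfield-style DAM update with interaction function F(x) = x^n; the network size, number of messages, interaction order, and corruption level are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of neurons (assumed for illustration)
M = 40           # number of stored messages
n = 3            # interaction order of the DAM energy, F(x) = x^n
flip_frac = 0.1  # fraction of corrupted bits in the probe (attraction radius)

# Store M random binary (+/-1) messages.
xi = rng.choice([-1.0, 1.0], size=(M, N))

def one_step_update(sigma, xi, n):
    """One synchronous DAM update: each bit is set to whichever sign
    gives the larger summed interaction F(x) = x^n over all messages."""
    new = np.empty_like(sigma)
    for i in range(len(sigma)):
        # Overlap of each stored message with the probe, excluding bit i.
        partial = xi @ sigma - xi[:, i] * sigma[i]
        e_plus = np.sum((xi[:, i] + partial) ** n)
        e_minus = np.sum((-xi[:, i] + partial) ** n)
        new[i] = 1.0 if e_plus >= e_minus else -1.0
    return new

# Corrupt one message within the attraction radius, then retrieve it.
target = xi[0]
probe = target.copy()
flip = rng.choice(N, size=int(flip_frac * N), replace=False)
probe[flip] *= -1

recovered = one_step_update(probe, xi, n)
print(np.array_equal(recovered, target))  # True when one-step retrieval succeeds
```

At this load (M well below the polynomial capacity scaling of a DAM with n = 3), the signal term from the target message dominates the crosstalk from the other stored messages, so a single synchronous update restores all flipped bits.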

Original language: English
Pages (from-to): 198-208
Number of pages: 11
Journal: Neurocomputing
Volume: 469
DOIs
State: Published - 16 Jan 2022

Keywords

  • Capacity
  • DAM networks
  • Hopfield network
  • Noise recovery
