TY - GEN
T1 - Synaptic Weight Optimization for Oscillatory Neural Networks
T2 - 2024 IEEE International Conference on Agents, ICA 2024
AU - Liao, Shuhao
AU - Liu, Xuehong
AU - Wu, Wenjun
AU - Shi, Rongye
AU - Zhang, Junyu
AU - Wang, Haopeng
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The Oscillatory Neural Network (ONN) presents itself as a promising architecture model for pattern recognition (PR), based on which advanced neuromorphic computing and integrated circuit designs are implemented. The core of the ONN's PR capability lies in the synaptic weight design, i.e., how the neurons are connected to each other. Conventional design methods, like the Hebbian rule, are able to store only a limited number of patterns. In this paper, we propose a strategy to leverage Multi-Agent Reinforcement Learning (MARL) for acquiring the optimal synaptic weights that can efficiently store more patterns into the ONN system as stable equilibria. To obtain the synaptic weights in a more efficient manner and further increase the number of patterns to be stored, we additionally propose a method to leverage Curriculum Learning (CL) to optimize the learning process of the policy. Experimental results demonstrate that the proposed MARL-based method outperforms baseline methods in terms of storing more patterns as stable equilibria in ONN.
AB - The Oscillatory Neural Network (ONN) presents itself as a promising architecture model for pattern recognition (PR), based on which advanced neuromorphic computing and integrated circuit designs are implemented. The core of the ONN's PR capability lies in the synaptic weight design, i.e., how the neurons are connected to each other. Conventional design methods, like the Hebbian rule, are able to store only a limited number of patterns. In this paper, we propose a strategy to leverage Multi-Agent Reinforcement Learning (MARL) for acquiring the optimal synaptic weights that can efficiently store more patterns into the ONN system as stable equilibria. To obtain the synaptic weights in a more efficient manner and further increase the number of patterns to be stored, we additionally propose a method to leverage Curriculum Learning (CL) to optimize the learning process of the policy. Experimental results demonstrate that the proposed MARL-based method outperforms baseline methods in terms of storing more patterns as stable equilibria in ONN.
KW - Multi-Agent Reinforcement Learning
KW - Oscillatory Neural Network
KW - Synaptic weights
UR - https://www.scopus.com/pages/publications/85215583652
U2 - 10.1109/ICA63002.2024.00040
DO - 10.1109/ICA63002.2024.00040
M3 - Conference contribution
AN - SCOPUS:85215583652
T3 - Proceedings - 2024 IEEE International Conference on Agents, ICA 2024
SP - 152
EP - 158
BT - Proceedings - 2024 IEEE International Conference on Agents, ICA 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 December 2024 through 6 December 2024
ER -