TY - JOUR
T1 - On the convergence of distributed projected gradient play with heterogeneous learning rates in monotone games
AU - Tan, Shaolin
AU - Tao, Ye
AU - Ran, Maopeng
AU - Liu, Hao
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/12
Y1 - 2023/12
N2 - In this paper, we consider distributed game-theoretic learning problems in which a number of players seek the Nash equilibrium through only local information sharing during a repeated game process. In particular, we are interested in scenarios where each player uses uncoordinated (heterogeneous) rather than identical learning rates for local action updating. It is found that both the maximum and the heterogeneity of the players' learning rates play a role in determining the convergence of the distributed projected gradient play. To this end, we establish explicit conditions on the learning rates, based on the contraction mapping theorem, that guarantee geometric convergence of both the consensus-based and the augmented-game-based distributed projected gradient play. Furthermore, to relax these conditions, several variants of the distributed projected gradient play are proposed by adopting different strategies of information sharing in networks. A numerical example is provided to support the theoretical development.
AB - In this paper, we consider distributed game-theoretic learning problems in which a number of players seek the Nash equilibrium through only local information sharing during a repeated game process. In particular, we are interested in scenarios where each player uses uncoordinated (heterogeneous) rather than identical learning rates for local action updating. It is found that both the maximum and the heterogeneity of the players' learning rates play a role in determining the convergence of the distributed projected gradient play. To this end, we establish explicit conditions on the learning rates, based on the contraction mapping theorem, that guarantee geometric convergence of both the consensus-based and the augmented-game-based distributed projected gradient play. Furthermore, to relax these conditions, several variants of the distributed projected gradient play are proposed by adopting different strategies of information sharing in networks. A numerical example is provided to support the theoretical development.
KW - Distributed Nash equilibrium seeking
KW - Game-theoretic learning
KW - Monotone games
KW - Projected gradient play
UR - https://www.scopus.com/pages/publications/85174332034
U2 - 10.1016/j.sysconle.2023.105654
DO - 10.1016/j.sysconle.2023.105654
M3 - Article
AN - SCOPUS:85174332034
SN - 0167-6911
VL - 182
JO - Systems & Control Letters
JF - Systems & Control Letters
M1 - 105654
ER -