TY - GEN
T1 - Learning Precoding Policy with Inductive Biases
T2 - 2023 IEEE Global Communications Conference, GLOBECOM 2023
AU - Zhao, Baichuan
AU - Ma, Yang
AU - Wu, Jiajun
AU - Yang, Chenyang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep learning has been introduced to optimize wireless policies such as precoding for enabling real-time implementation. Yet prevalent studies assume that training and test samples are drawn from the same distribution, which is not true in dynamic wireless environments. As a result, a well-trained deep neural network (DNN) may require retraining to adapt to new environments, incurring the overhead of data collection. The required training samples for adaptation can be reduced by introducing inductive biases into DNNs, which can be learned automatically by meta-learning or embedded in DNNs by designing graph neural networks (GNNs). Almost all previous works on meta-learning overlooked the prior-known permutation equivariance (PE) properties, which widely exist in wireless policies and can be harnessed to reduce the hypothesis space of a DNN. In this paper, we strive to answer the following question: which way of introducing inductive biases is more effective in reducing samples for retraining, GNNs or meta-learning? We take the sum-rate maximization precoding problem as an example to answer the question. Simulation results show that the GNNs are more efficient than meta-learning, and meta-learning for precoding cannot adapt to new scenarios where the number of users differs from the training scenario.
AB - Deep learning has been introduced to optimize wireless policies such as precoding for enabling real-time implementation. Yet prevalent studies assume that training and test samples are drawn from the same distribution, which is not true in dynamic wireless environments. As a result, a well-trained deep neural network (DNN) may require retraining to adapt to new environments, incurring the overhead of data collection. The required training samples for adaptation can be reduced by introducing inductive biases into DNNs, which can be learned automatically by meta-learning or embedded in DNNs by designing graph neural networks (GNNs). Almost all previous works on meta-learning overlooked the prior-known permutation equivariance (PE) properties, which widely exist in wireless policies and can be harnessed to reduce the hypothesis space of a DNN. In this paper, we strive to answer the following question: which way of introducing inductive biases is more effective in reducing samples for retraining, GNNs or meta-learning? We take the sum-rate maximization precoding problem as an example to answer the question. Simulation results show that the GNNs are more efficient than meta-learning, and meta-learning for precoding cannot adapt to new scenarios where the number of users differs from the training scenario.
KW - Meta-learning
KW - graph neural network
KW - inductive bias
KW - permutation equivariance
KW - precoding
UR - https://www.scopus.com/pages/publications/85187374948
U2 - 10.1109/GLOBECOM54140.2023.10436861
DO - 10.1109/GLOBECOM54140.2023.10436861
M3 - Conference contribution
AN - SCOPUS:85187374948
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 4835
EP - 4840
BT - GLOBECOM 2023 - 2023 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 4 December 2023 through 8 December 2023
ER -