TY - GEN
T1 - Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision
AU - Zhang, Zeyang
AU - Wang, Xin
AU - Zhang, Ziwei
AU - Shen, Guangyao
AU - Shen, Shiqi
AU - Zhu, Wenwu
N1 - Publisher Copyright:
© 2023 Neural information processing systems foundation. All rights reserved.
PY - 2023
Y1 - 2023
N2 - The existing graph neural architecture search (GNAS) methods heavily rely on supervised labels during the search process, failing to handle ubiquitous scenarios where supervision is not available. In this paper, we study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The key problem is to discover the latent graph factors that drive the formation of graph data as well as the underlying relations between the factors and the optimal neural architectures. Handling this problem is challenging given that the latent graph factors together with architectures are highly entangled due to the nature of the graph and the complexity of the neural architecture search process. To address the challenge, we propose a novel Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which is able to discover the optimal architectures capturing various latent graph factors in a self-supervised fashion based on unlabeled graph data. Specifically, we first design a disentangled graph super-network capable of incorporating multiple architectures with factor-wise disentanglement, which are optimized simultaneously. Then, we estimate the performance of architectures under different factors by our proposed self-supervised training with joint architecture-graph disentanglement. Finally, we propose a contrastive search with architecture augmentations to discover architectures with factor-specific expertise. Extensive experiments on 11 real-world datasets demonstrate that the proposed DSGAS model is able to achieve state-of-the-art performance against several baseline methods in an unsupervised manner.
AB - The existing graph neural architecture search (GNAS) methods heavily rely on supervised labels during the search process, failing to handle ubiquitous scenarios where supervision is not available. In this paper, we study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The key problem is to discover the latent graph factors that drive the formation of graph data as well as the underlying relations between the factors and the optimal neural architectures. Handling this problem is challenging given that the latent graph factors together with architectures are highly entangled due to the nature of the graph and the complexity of the neural architecture search process. To address the challenge, we propose a novel Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which is able to discover the optimal architectures capturing various latent graph factors in a self-supervised fashion based on unlabeled graph data. Specifically, we first design a disentangled graph super-network capable of incorporating multiple architectures with factor-wise disentanglement, which are optimized simultaneously. Then, we estimate the performance of architectures under different factors by our proposed self-supervised training with joint architecture-graph disentanglement. Finally, we propose a contrastive search with architecture augmentations to discover architectures with factor-specific expertise. Extensive experiments on 11 real-world datasets demonstrate that the proposed DSGAS model is able to achieve state-of-the-art performance against several baseline methods in an unsupervised manner.
UR - https://www.scopus.com/pages/publications/85189620930
M3 - Conference contribution
AN - SCOPUS:85189620930
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 36 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
A2 - Oh, A.
A2 - Naumann, T.
A2 - Globerson, A.
A2 - Saenko, K.
A2 - Hardt, M.
A2 - Levine, S.
PB - Neural Information Processing Systems Foundation
T2 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Y2 - 10 December 2023 through 16 December 2023
ER -