Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision

  • Zeyang Zhang
  • Xin Wang*
  • Ziwei Zhang
  • Guangyao Shen
  • Shiqi Shen
  • Wenwu Zhu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Existing graph neural architecture search (GNAS) methods rely heavily on supervised labels during the search process, failing to handle ubiquitous scenarios where supervision is not available. In this paper, we study the problem of unsupervised graph neural architecture search, which remains unexplored in the literature. The key problem is to discover the latent graph factors that drive the formation of graph data, as well as the underlying relations between the factors and the optimal neural architectures. Handling this problem is challenging given that the latent graph factors and the architectures are highly entangled, due to the nature of graphs and the complexity of the neural architecture search process. To address the challenge, we propose a novel Disentangled Self-supervised Graph Neural Architecture Search (DSGAS) model, which is able to discover the optimal architectures capturing various latent graph factors in a self-supervised fashion based on unlabeled graph data. Specifically, we first design a disentangled graph super-network capable of incorporating multiple architectures with factor-wise disentanglement, which are optimized simultaneously. Then, we estimate the performance of architectures under different factors via our proposed self-supervised training with joint architecture-graph disentanglement. Finally, we propose a contrastive search with architecture augmentations to discover architectures with factor-specific expertise. Extensive experiments on 11 real-world datasets demonstrate that the proposed DSGAS model achieves state-of-the-art performance against several baseline methods in an unsupervised manner.
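The abstract's first component, a super-network holding one architecture mixture per latent graph factor, can be illustrated with a toy sketch. This is not the paper's implementation: it is a minimal, plain-Python illustration in the DARTS style, with simple vector transforms standing in for GNN candidate operations, and all class and function names here are hypothetical.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "candidate operations": in a real GNAS search space these would be
# GNN layer types (e.g. GCN, GAT, GIN aggregations), not scalar transforms.
CANDIDATE_OPS = {
    "identity": lambda h: list(h),
    "scale":    lambda h: [2.0 * x for x in h],
    "negate":   lambda h: [-x for x in h],
}

class DisentangledSuperNet:
    """Factor-wise super-network: one softmax mixture over candidate
    operations per latent factor, optimized simultaneously."""

    def __init__(self, num_factors, ops=CANDIDATE_OPS):
        self.op_names = list(ops.keys())
        self.ops = list(ops.values())
        # One architecture-parameter vector (alpha) per latent factor.
        self.alpha = [
            [random.gauss(0.0, 0.1) for _ in self.ops]
            for _ in range(num_factors)
        ]

    def forward_factor(self, h, k):
        """Mix all candidate ops for factor k, weighted by softmax(alpha_k)."""
        weights = softmax(self.alpha[k])
        outputs = [op(h) for op in self.ops]
        return [
            sum(w * out[i] for w, out in zip(weights, outputs))
            for i in range(len(h))
        ]

    def forward(self, h):
        """Concatenate the factor-wise representations."""
        rep = []
        for k in range(len(self.alpha)):
            rep.extend(self.forward_factor(h, k))
        return rep

    def derive(self):
        """Read out the discovered architecture: argmax op per factor."""
        return [
            self.op_names[max(range(len(a)), key=a.__getitem__)]
            for a in self.alpha
        ]
```

In the actual DSGAS model, the per-factor architecture parameters would be trained with self-supervised objectives on unlabeled graphs rather than supervised labels; this sketch only shows the structural idea of holding and deriving one architecture per factor.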

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 36 - 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Editors: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine
Publisher: Neural information processing systems foundation
ISBN (Electronic): 9781713899921
State: Published - 2023
Externally published: Yes
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 36
ISSN (Print): 1049-5258

Conference

Conference: 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Country/Territory: United States
City: New Orleans
Period: 10/12/23 - 16/12/23
