Abstract
Generating photo-realistic remote sensing images conditioned on semantic masks has many practical applications, such as image editing, detecting deep fake geography, and data augmentation. Although previous methods achieved high-quality synthesis results for natural images like faces and everyday objects, they still underperform in remote sensing scenarios in terms of both visual fidelity and diversity. The high data imbalance and high semantic similarity of remote-sensing object categories make the semantic synthesis of remote sensing images more challenging than that of natural images. To tackle these challenges, we propose a novel method named conducted semantic embedding GAN (CSEBGAN) for semantic-controllable remote sensing image synthesis. The proposed method decouples different semantic classes into independent semantic embeddings, which exploits the regularities between classes to improve visual fidelity and naturally supports semantic-level editing. We further introduce a novel tripartite cooperation adversarial training scheme that involves a conductor network to provide fine-grained semantic feedback for the generator. We also show that the proposed semantic image synthesis method can be utilized as an effective data augmentation approach for improving the performance of downstream remote sensing image segmentation tasks. Extensive experiments show the superiority of our method over state-of-the-art image synthesis methods.
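The abstract does not give implementation details, but the core idea of decoupling classes into "independent semantic embeddings" can be illustrated, purely as an assumption-based sketch, by a per-class lookup table that maps a semantic mask of class IDs to a dense per-pixel feature map for the generator to consume. All names here (`embed_semantic_mask`, the table shape) are hypothetical, not the paper's actual API.

```python
import numpy as np

def embed_semantic_mask(mask, embedding_table):
    """Map each class ID in a semantic mask to its learned embedding vector.

    mask            : (H, W) integer array of class IDs
    embedding_table : (num_classes, dim) float array, one learnable row per class
    returns         : (H, W, dim) float array of per-pixel semantic features

    NumPy advanced indexing gathers one embedding row per pixel, so each
    semantic class contributes an independent, swappable feature vector --
    the property that enables semantic-level editing (replace a class's row,
    and every pixel of that class changes accordingly).
    """
    return embedding_table[mask]

# Toy example: 4 classes (e.g., water, forest, building, road), 8-dim embeddings.
rng = np.random.default_rng(0)
table = rng.normal(size=(4, 8))
mask = np.array([[0, 1],
                 [2, 3]])
features = embed_semantic_mask(mask, table)
```

In a full model the table would be a trainable parameter updated by the adversarial losses; swapping or interpolating rows at inference time is what makes the representation naturally editable at the semantic level.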
| Original language | English |
|---|---|
| Article number | 4702811 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 61 |
| DOIs | |
| State | Published - 2023 |
Keywords
- Generative adversarial networks
- image segmentation
- remote sensing images
- semantic image synthesis
Title
Remote Sensing Image Synthesis via Semantic Embedding Generative Adversarial Networks