• Acta Optica Sinica
  • Vol. 41, Issue 7, 0728003 (2021)
Zishuo Han, Chunping Wang*, Qiang Fu, and Bin Zhao
Author Affiliations
  • Department of Electronic and Optical Engineering, Shijiazhuang Campus, Army Engineering University, Shijiazhuang, Hebei 050003, China
    DOI: 10.3788/AOS202141.0728003
    Zishuo Han, Chunping Wang, Qiang Fu, Bin Zhao. Remote Sensing Image Mode Translation by Spatial Disentangled Representation Based GAN[J]. Acta Optica Sinica, 2021, 41(7): 0728003

    Abstract

    Building on a translation framework that spatially separates image features, we propose a cycle-consistent generative adversarial network (GAN) based on spatial disentangled representation to address the large modal difference between synthetic aperture radar (SAR) images and optical remote sensing images, which makes translation between them difficult. The proposed model separates images into style and content features using deeper network layers and skip connections. The content features are then translated through content-mapping learning and combined with the target style features to perform image translation. In addition, a PatchGAN discriminator enhances the generation of image details, and a target error loss and a generation-reconstruction loss are introduced to restrict the translation task to a one-to-one mapping, thereby reducing extraneous information and constraining the GAN. Experimental results on the SEN1-2, SARptical, and WHU-SEN-City datasets show that, compared with other image translation algorithms, the proposed method can translate between the two types of remote sensing images and generates images with high resolution, complete detail features, and strong authenticity.
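    The abstract names a PatchGAN discriminator as the component that sharpens generated detail. The sketch below illustrates how such a discriminator works: it outputs a grid of real/fake logits, one per overlapping image patch, rather than a single scalar per image. This is a minimal PyTorch sketch following the common 70×70-receptive-field PatchGAN design (as in pix2pix); the paper's exact layer counts and channel widths are not given in the abstract, so all hyperparameters here are illustrative assumptions, not the authors' configuration.

    ```python
    import torch
    import torch.nn as nn


    class PatchDiscriminator(nn.Module):
        """Classifies overlapping patches as real/fake instead of producing
        one scalar per image, which penalizes local artifacts and thereby
        encourages finer detail in the generator's output."""

        def __init__(self, in_channels: int = 3, base_width: int = 64):
            super().__init__()

            def block(c_in: int, c_out: int, stride: int) -> nn.Sequential:
                return nn.Sequential(
                    nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                    nn.InstanceNorm2d(c_out),
                    nn.LeakyReLU(0.2, inplace=True),
                )

            self.net = nn.Sequential(
                # No normalization on the first layer, per the usual convention.
                nn.Conv2d(in_channels, base_width, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                block(base_width, base_width * 2, stride=2),
                block(base_width * 2, base_width * 4, stride=2),
                block(base_width * 4, base_width * 8, stride=1),
                # One logit per patch: the output is an H' x W' map, not a scalar.
                nn.Conv2d(base_width * 8, 1, 4, stride=1, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)


    if __name__ == "__main__":
        # A 256x256 RGB input yields a 30x30 grid of patch logits.
        d = PatchDiscriminator()
        logits = d(torch.randn(1, 3, 256, 256))
        print(logits.shape)  # torch.Size([1, 1, 30, 30])
    ```

    Because each output logit sees only a local receptive field, the adversarial loss is applied patch-by-patch, which is why this discriminator style is associated with better texture and detail fidelity in image-to-image translation.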