Optics and Precision Engineering, Vol. 30, Issue 10, 1217 (2022)
Qihang LI1, Long FENG1, Qing YANG1, Yu WANG2, and Guohua GENG1,*
Author Affiliations
  • 1School of Information Science and Technology, Northwest University, Xi'an 710127, China
  • 2School of Mathematics, Northwest University, Xi'an 710127, China
    DOI: 10.37188/OPE.20223010.1217
    Qihang LI, Long FENG, Qing YANG, Yu WANG, Guohua GENG. Single-image translation based on multi-scale dense feature fusion[J]. Optics and Precision Engineering, 2022, 30(10): 1217

    Abstract

    To address the low image quality and poor detail reproduction of existing single-image translation models, this paper proposes a single-image translation model based on multi-scale dense feature fusion. First, the model adopts a multi-scale pyramid structure to downsample the original and target images into inputs of different sizes. Then, in the generator, the images at each size are fed into dense feature modules for style feature extraction, and the extracted style is transferred from the original image to the target image; the required translated image is produced through adversarial training against the discriminator. Finally, a dense feature module is added at each stage of training by progressively growing the generator, so that the generated image migrates from global style to local style. Extensive experiments were conducted on a variety of unsupervised image translation tasks. The results show that, compared with existing methods, the training time of the proposed method is shortened by 80% and the SIFID of the generated images is reduced by 22.18%. The proposed model therefore better captures the distribution difference between the source and target domains and improves the quality of image translation.
    $I_{A(B)}^{n} = I_{A(B)}^{N} \times \left( I_{A(B)}^{1} / I_{A(B)}^{N} \right)^{\frac{N-n}{N}}$    (1)
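    One concrete reading of Eq. (1) is that the per-scale image sizes interpolate geometrically between the coarsest image $I^{1}$ and the original image $I^{N}$. Below is a minimal PyTorch sketch of such a pyramid builder; the function name `build_pyramid`, the `min_size` parameter, and the use of bilinear interpolation are illustrative assumptions, not details from the paper.

```python
import torch.nn.functional as F

def build_pyramid(img, num_scales, min_size):
    """Downsample `img` into a geometric scale pyramid following Eq. (1).

    img:        tensor of shape (1, C, H, W), the original image I^N
    num_scales: N, the index of the finest scale
    min_size:   long side (pixels) of the coarsest image I^1 (assumed knob)
    """
    _, _, h, w = img.shape
    long_side = max(h, w)
    pyramid = []
    for n in range(1, num_scales + 1):
        # size_n = size_N * (size_1 / size_N) ** ((N - n) / N), per Eq. (1)
        scale = (min_size / long_side) ** ((num_scales - n) / num_scales)
        size = (max(1, round(h * scale)), max(1, round(w * scale)))
        pyramid.append(F.interpolate(img, size=size, mode='bilinear',
                                     align_corners=False))
    return pyramid  # pyramid[0] is the coarsest I^1, pyramid[-1] is I^N
```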


    $x_{l} = H_{l}\left( \left[ x_{1}, x_{2}, \ldots, x_{l-1} \right] \right)$    (2)
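    Eq. (2) is the standard dense-connectivity rule: layer $H_{l}$ receives the concatenation of all preceding feature maps. A minimal PyTorch sketch of such a dense feature module follows; the growth rate, layer count, and the conv-norm-LeakyReLU composition of $H_{l}$ are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense feature block implementing Eq. (2):
    x_l = H_l([x_1, x_2, ..., x_{l-1}])."""

    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            # H_l: conv + norm + nonlinearity applied to the concatenated inputs
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth_rate),
                nn.LeakyReLU(0.2, inplace=True)))
            channels += growth_rate  # later layers also see this output

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # concatenate every earlier feature map along the channel axis
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```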


    $I_{AB}^{n} = G_{AB}^{n}\left( I_{A}^{n} \right)$    (3)


    $I_{BA}^{n} = G_{BA}^{n}\left( I_{B}^{n} \right)$    (4)


    $L_{ALL}^{n} = L_{ADV}^{n} + \lambda_{CYC} L_{CYC}^{n}$    (5)


    $L_{ADV}^{n} = D_{B}^{n}(I_{B}^{n}) - D_{B}^{n}\left( G_{AB}^{n}(I_{A}^{n}) \right) + D_{A}^{n}(I_{A}^{n}) - D_{A}^{n}\left( G_{BA}^{n}(I_{B}^{n}) \right) - \lambda_{PEN} \left( \left\| \nabla_{\hat{I}_{B}^{n}} D_{B}^{n}(\hat{I}_{B}^{n}) \right\|_{2} - 1 \right)^{2} - \lambda_{PEN} \left( \left\| \nabla_{\hat{I}_{A}^{n}} D_{A}^{n}(\hat{I}_{A}^{n}) \right\|_{2} - 1 \right)^{2}$    (6)
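    The last two terms of Eq. (6) are a WGAN-GP-style gradient penalty evaluated at samples $\hat{I}^{n}$. A common construction, assumed here, draws $\hat{I}^{n}$ as a random interpolation between a real and a generated image; the sketch below computes one such penalty term, and the default weight passed as `lambda_pen` is a placeholder rather than the paper's setting.

```python
import torch

def gradient_penalty(disc, real, fake, lambda_pen=0.1):
    """Penalty term of Eq. (6): lambda_PEN * (||grad D(I_hat)||_2 - 1)^2,
    with I_hat a random interpolation between a real and a generated image."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = disc(interp)
    # gradient of the discriminator score with respect to the interpolation
    grads, = torch.autograd.grad(outputs=score.sum(), inputs=interp,
                                 create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_pen * ((grad_norm - 1) ** 2).mean()
```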


    $L_{CYC}^{n} = \left\| I_{A}^{n} - I_{ABA}^{n} \right\|_{1} + \left\| I_{B}^{n} - I_{BAB}^{n} \right\|_{1}$    (7)
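    Eqs. (5) and (7) combine into the per-scale generator objective. A minimal sketch, assuming mean-reduced L1 distances (the equation itself does not fix the reduction) and an illustrative value for $\lambda_{CYC}$:

```python
import torch.nn.functional as F

def cycle_loss(real_a, cyc_a, real_b, cyc_b):
    """Eq. (7): L1 distance between each input and its round-trip
    reconstruction (A -> B -> A and B -> A -> B)."""
    return F.l1_loss(cyc_a, real_a) + F.l1_loss(cyc_b, real_b)

# Eq. (5): total objective at scale n (lambda_cyc value is an assumption)
# loss_all = loss_adv + lambda_cyc * cycle_loss(real_a, cyc_a, real_b, cyc_b)
```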


    To ensure that the color distribution of the input and output images remains similar and that the spatial structure does not change, a spatial correlation loss [16] is added to the generator. The spatial correlation loss captures only the spatial relationships within the image so as to represent the scene structure precisely; rather than operating on raw pixels or features, it combines the appearance and structure of the image. For example, given an image $I_{A}^{n} \in A$, after translation by the generator $G_{AB}^{n}$, the color and texture of the image should not change:

    $L_{S}^{n} = \left\| 1 - \cos\left( I_{A}^{n}, I_{AA}^{n} \right) \right\|_{1} + \left\| 1 - \cos\left( I_{B}^{n}, I_{BB}^{n} \right) \right\|_{1}$    (8)
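    Reference [16] computes this loss on spatially-correlative (self-similarity) maps; the sketch below implements the simpler per-pixel reading of Eq. (8) as printed, treating the channel vector at each spatial location as the feature and mean-reducing the L1 term. It is an illustration, not the reference implementation.

```python
import torch.nn.functional as F

def spatial_correlation_loss(img_a, img_aa, img_b, img_bb):
    """Eq. (8): one minus the per-pixel cosine similarity between each
    image and its translated counterpart, penalized in an L1 sense."""
    cos_a = F.cosine_similarity(img_a, img_aa, dim=1)  # (B, H, W)
    cos_b = F.cosine_similarity(img_b, img_bb, dim=1)
    return (1 - cos_a).abs().mean() + (1 - cos_b).abs().mean()
```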


    $L_{TV}^{n} = L_{tv}\left( I_{AB}^{n} \right) + L_{tv}\left( I_{BA}^{n} \right)$    (9)
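    $L_{tv}$ in Eq. (9) is a standard total-variation smoothness penalty on the translated images. A minimal sketch, assuming anisotropic absolute differences and mean reduction (both implementation choices, not stated in the paper):

```python
def tv_loss(img):
    """L_tv of Eq. (9): total-variation penalty, i.e. the absolute
    differences between vertically and horizontally adjacent pixels."""
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

# Eq. (9) applies the penalty to both translated images:
# loss_tv = tv_loss(i_ab) + tv_loss(i_ba)
```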

