Image-to-image style transfer based on the ghost module

Yan Jiang, Xinrui Jia, Liguo Zhang, Ye Yuan, Lei Chen, Guisheng Yin

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The technology for image-to-image style transfer, a prevalent image processing task, has developed rapidly. The purpose of style transfer is to extract a texture from the source image domain and transfer it to the target image domain using a deep neural network. However, existing methods typically incur a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations. We then utilize an attention mechanism to transform images with various styles, and we optimize the original generative adversarial network (GAN) with more efficient calculation methods for image-to-illustration translation. The experimental results show that the output of our method is consistent with human visual perception while maintaining image quality. Moreover, our method reduces the high computational cost and resource consumption of style transfer. Comparisons on both subjective and objective evaluation indicators show that our method outperforms existing methods.
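The abstract describes replacing ordinary convolutions in the GANILLA generator with Ghost modules, which derive additional feature maps from cheap operations. The paper's exact layer configuration is not given here; the PyTorch sketch below shows the standard Ghost-module pattern (a small primary convolution followed by depthwise "cheap" convolutions, concatenated) under assumed parameter names such as `ratio` and `dw_size`, not the authors' specific implementation.

```python
import math
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Sketch of a Ghost module: a primary convolution produces a reduced set of
    intrinsic feature maps, and cheap depthwise convolutions generate extra
    "ghost" feature maps from them; the two sets are concatenated."""

    def __init__(self, in_channels, out_channels, kernel_size=1,
                 ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_channels = out_channels
        init_channels = math.ceil(out_channels / ratio)   # intrinsic maps
        new_channels = init_channels * (ratio - 1)        # ghost maps

        # Primary (ordinary) convolution: produces the intrinsic features.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise convolution over the intrinsic features.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = torch.cat([x1, x2], dim=1)
        return out[:, :self.out_channels, :, :]  # trim to the requested width


# Usage example (hypothetical shapes): a drop-in replacement for a 3x3
# convolution inside a generator block.
# x = torch.randn(1, 64, 128, 128)
# y = GhostModule(64, 128, kernel_size=3)(x)    # y.shape == (1, 128, 128, 128)
```

Because the cheap operation is a depthwise convolution on a reduced channel set, this kind of module produces the same output width as a full convolution at a fraction of the multiply-accumulate cost, which is the efficiency argument the abstract makes.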

Original language: English
Pages (from-to): 4051-4067
Number of pages: 17
Journal: Computers, Materials and Continua
Volume: 68
Issue number: 3
DOIs
State: Published - 2021

Keywords

  • Attention mechanism
  • Generative adversarial networks
  • Ghost module
  • Human visual habits
  • Style transfer
