Abstract
The technology for image-to-image style transfer, a prevalent image processing task, has developed rapidly. Style transfer aims to extract a texture from a source image domain and transfer it to a target image domain using a deep neural network. However, existing methods typically incur a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations, and we employ an attention mechanism to transform images with various styles. We optimize the original generative adversarial network (GAN) with more efficient computation for image-to-illustration translation. The experimental results show that our proposed method produces outputs consistent with human visual habits while maintaining image quality. Moreover, it avoids the high computational cost and resource consumption of existing style-transfer methods. Comparisons on both subjective and objective evaluation metrics show that our proposed method outperforms existing methods.
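The saving from replacing an ordinary convolution with a Ghost module can be illustrated with a back-of-the-envelope FLOP count. The sketch below follows the general Ghost-module formulation (a primary convolution produces a fraction of the output maps, and cheap depthwise operations derive the rest); the kernel sizes, channel counts, and ratio `s` are illustrative assumptions, not settings from this paper:

```python
def conv_flops(c_in, c_out, k, h, w):
    # Multiply-accumulate count of an ordinary k x k convolution
    # producing a c_out x h x w output from c_in input channels.
    return c_out * h * w * c_in * k * k

def ghost_flops(c_in, c_out, k, h, w, s=2, d=3):
    # Ghost module: a primary convolution produces c_out // s
    # "intrinsic" maps; cheap d x d depthwise operations derive
    # the remaining (s - 1) ghost maps. s and d are assumptions.
    intrinsic = c_out // s
    primary = intrinsic * h * w * c_in * k * k
    cheap = (s - 1) * intrinsic * h * w * d * d
    return primary + cheap

# Illustrative layer: 256 -> 256 channels, 3x3 kernel, 64x64 feature map.
plain = conv_flops(256, 256, 3, 64, 64)
ghost = ghost_flops(256, 256, 3, 64, 64, s=2, d=3)
print(round(plain / ghost, 2))  # speed-up ratio, close to s = 2
```

For large input channel counts the ratio approaches `s`, which is why generating ghost maps from cheap operations reduces cost roughly in proportion to the fraction of maps not computed by the primary convolution.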
Original language | English |
---|---|
Pages (from-to) | 4051-4067 |
Number of pages | 17 |
Journal | Computers, Materials and Continua |
Volume | 68 |
Issue number | 3 |
DOIs | |
State | Published - 2021 |
Keywords
- Attention mechanism
- Generative adversarial networks
- Ghost module
- Human visual habits
- Style transfer