Authors:
- YiMi Liao
- YouFu Huang
- George Papakostas
Abstract
The generative adversarial network (GAN) is a deep learning model widely applied to image generation, semantic segmentation, super-resolution, and related tasks. CycleGAN is a model architecture used for a variety of image-to-image translation applications. This paper focuses on the CycleGAN model. To improve the network's capacity to extract image features, the generator uses a U-Net architecture consisting of eight down-sampling and eight up-sampling layers. The discriminator is a Markov (PatchGAN) discriminator, chosen for its high-resolution, high-detail characteristics in image style transfer. To improve running efficiency, depthwise separable convolutions are combined with standard convolutions in the Markov discriminator; the experimental results show that this effectively shortens the running time. We then compare images generated with the L1 loss, the L2 loss, and the smooth L1 loss. The experiments show that the CycleGAN network can effectively perform image style transfer. The L1 loss model best retains the details of the original image; the L2 loss model produces clearer distant regions in natural photos generated from Monet paintings, with a color tone closer to the original; and the smooth L1 loss model produces smoother images. Both the L1 and smooth L1 models exhibit some miscoloring in natural photos generated from Monet paintings. Overall, the L2 loss model is more stable and produces better generated images.
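The two quantitative ideas in the abstract can be sketched in a few lines of plain Python: the three per-pixel regression losses being compared, and the parameter-count saving that motivates depthwise separable convolution. This is an illustrative sketch, not the paper's code; the smooth L1 threshold `beta=1.0` is the common default, not a value reported by the authors.

```python
def l1_loss(diff):
    """L1 (absolute error): robust to outliers, tends to keep fine detail."""
    return abs(diff)

def l2_loss(diff):
    """L2 (squared error): penalizes large residuals heavily, smoother/more stable."""
    return diff ** 2

def smooth_l1_loss(diff, beta=1.0):
    """Smooth L1 (Huber-like): quadratic near zero, linear beyond beta."""
    a = abs(diff)
    if a < beta:
        return 0.5 * a * a / beta
    return a - 0.5 * beta

def standard_conv_params(k, c_in, c_out):
    """Weight count of a k*k standard convolution (bias omitted)."""
    return k * k * c_in * c_out

def dw_separable_conv_params(k, c_in, c_out):
    """Depthwise (k*k per input channel) + pointwise (1x1) convolution."""
    return k * k * c_in + c_in * c_out
```

For a typical 3x3 layer with 256 input and 256 output channels, the separable form needs roughly an eighth of the weights of the standard form, which is the source of the runtime saving the paper reports.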
Suggested Citation
YiMi Liao & YouFu Huang & George Papakostas, 2022.
"Deep Learning-Based Application of Image Style Transfer,"
Mathematical Problems in Engineering, Hindawi, vol. 2022, pages 1-10, August.
Handle: RePEc:hin:jnlmpe:1693892
DOI: 10.1155/2022/1693892