Bayoumi, Razan and Alfonse, Marco and Salem, Abdel-Badeeh (2022) Multi-Stage Hybrid Text-to-Image Generation Models. International Journal of Intelligent Computing and Information Sciences, 22 (3). pp. 82-91. ISSN 2535-1710
Abstract
Generative Adversarial Networks (GANs) have proven their outstanding potential in creating realistic images that are hard to distinguish from real ones, but text-to-image (conditional) generation still faces some challenges. In this paper, we propose a new model called AttnDM-GAN, which stands for Attentional Dynamic Memory Generative Adversarial Network, and which seeks to generate realistic output semantically harmonious with an input text description. AttnDM-GAN is a three-stage hybrid of the Attentional Generative Adversarial Network (AttnGAN) and the Dynamic Memory Generative Adversarial Network (DM-GAN). The first stage, Initial Image Generation, generates low-resolution 64x64 images conditioned on the encoded input textual description. The second stage, Attention Image Generation, generates higher-resolution 128x128 images, and the last stage, Dynamic Memory Based Image Refinement, refines the images to 256x256 resolution. We evaluate AttnDM-GAN on the Caltech-UCSD Birds 200 dataset using the Frechet Inception Distance (FID), achieving a value of 19.78. We also propose another model, the Dynamic Memory Attention Generative Adversarial Network (DMAttn-GAN), a variation of AttnDM-GAN in which the second and third stages are swapped; its FID value is 17.04.
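The staged pipeline described in the abstract can be sketched in a minimal, purely illustrative form. The function names below follow the abstract's stage names, but everything else is an assumption: the paper does not specify the generators at this level, so nearest-neighbour upsampling and a placeholder conditioning step stand in for the real attention and dynamic-memory sub-networks.

```python
import numpy as np

def initial_image_generation(text_embedding, rng):
    """Stage 1: produce a low-resolution 64x64 image conditioned on text."""
    noise = rng.standard_normal((64, 64, 3))
    # In the real model the encoded caption conditions the generator;
    # here it is mixed in additively as a placeholder.
    return noise + text_embedding.mean()

def attention_image_generation(image_64):
    """Stage 2: attention-based generation raising resolution to 128x128.
    Nearest-neighbour upsampling stands in for the AttnGAN sub-network."""
    return np.repeat(np.repeat(image_64, 2, axis=0), 2, axis=1)

def dynamic_memory_refinement(image_128):
    """Stage 3: dynamic-memory refinement raising resolution to 256x256.
    Nearest-neighbour upsampling stands in for the DM-GAN sub-network."""
    return np.repeat(np.repeat(image_128, 2, axis=0), 2, axis=1)

def attn_dm_gan(text_embedding, seed=0):
    """Chain the three stages, returning all intermediate resolutions."""
    rng = np.random.default_rng(seed)
    img64 = initial_image_generation(text_embedding, rng)
    img128 = attention_image_generation(img64)
    img256 = dynamic_memory_refinement(img128)
    return img64, img128, img256

text_embedding = np.zeros(256)  # stand-in for an encoded caption
img64, img128, img256 = attn_dm_gan(text_embedding)
print(img64.shape, img128.shape, img256.shape)
# (64, 64, 3) (128, 128, 3) (256, 256, 3)
```

The DMAttn-GAN variant mentioned in the abstract would simply call `dynamic_memory_refinement` before `attention_image_generation`, swapping the second and third stages.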
| Item Type | Article |
|---|---|
| Subjects | OA Digital Library > Computer Science |
| Depositing User | Unnamed user with email support@oadigitallib.org |
| Date Deposited | 29 Jun 2023 04:19 |
| Last Modified | 07 Sep 2024 10:03 |
| URI | http://library.thepustakas.com/id/eprint/1642 |