X-GAN: Generative Adversarial Networks Training Guided with Explainable Artificial Intelligence
Main Author: | Rozendo, Guilherme Botazzo [UNESP] |
---|---|
Other Authors: | Lumini, Alessandra; Roberto, Guilherme Freire; Tosta, Thaína Aparecida Azevedo; do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves [UNESP] |
Publication Date: | 2024 |
Format: | Conference object (published version) |
Language: | English |
Keywords: | Explainable Artificial Intelligence; GAN Training; Generative Adversarial Networks |
Published in: | International Conference on Enterprise Information Systems (ICEIS) Proceedings, v. 1, p. 674-681 (ISSN 2184-4992) |
DOI: | 10.5220/0012618400003690 |
Scopus ID: | 2-s2.0-85194001440 |
Access Rights: | Open access |
Source: | Repositório Institucional da UNESP |
Download full: | http://dx.doi.org/10.5220/0012618400003690 https://hdl.handle.net/11449/306242 |
Funding: | FAPESP (#2022/03020-1); CAPES (#311404/2021-9, #313643/2021-0); FAPEMIG (#APQ-00578-18) |
Affiliations: | Department of Computer Science and Engineering (DISI), University of Bologna; Faculty of Engineering, University of Porto (FEUP); Science and Technology Institute (ICT), Federal University of São Paulo (UNIFESP); Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU); Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP) |
Summary: | Generative Adversarial Networks (GANs) create artificial images through adversarial training between a generator (G) and a discriminator (D) network. This training is based on game theory and aims to reach an equilibrium between the networks. However, this equilibrium is rarely achieved in practice, and D tends to become more powerful. The problem occurs because G is trained on only a single value representing D's prediction, while only D has access to the image features. To address this issue, we introduce a new approach that uses Explainable Artificial Intelligence (XAI) methods to guide G's training. Our strategy identifies critical image features learned by D and transfers this knowledge to G. We modify the loss function so that a matrix of XAI explanations is propagated instead of a single error value. Quantitative analysis shows that our approach enriches training and promotes higher quality and greater variability in the artificial images. On the MNIST dataset, for instance, the approach improved image quality by up to 37.8% and increased variability by up to 4.94% compared with traditional training. |
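The summary describes replacing the generator's scalar error signal with a matrix of XAI explanations derived from the discriminator. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: it uses plain input-gradient saliency as a stand-in for a proper XAI method, and the function names (`input_gradient_explanation`, `generator_step`) and the gradient-passing scheme are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's code): guide G with an explanation matrix
# produced from D, instead of backpropagating only a single scalar error value.
import torch


def input_gradient_explanation(discriminator, images):
    """Per-pixel relevance of D's decision via input gradients (a simple XAI stand-in)."""
    images = images.detach().requires_grad_(True)
    score = discriminator(images).sum()          # scalar so autograd.grad is well defined
    grad, = torch.autograd.grad(score, images)   # d(score)/d(pixel), same shape as images
    return grad


def generator_step(generator, discriminator, g_optimizer, z):
    """One G update in which a matrix of explanations, not a scalar loss, is propagated."""
    fake = generator(z)

    # How each pixel of the generated batch pushes D's realness score up or down.
    explanation = input_gradient_explanation(discriminator, fake)

    g_optimizer.zero_grad()
    # Ordinary GAN training would call loss.backward() on a single BCE value.
    # Here the per-pixel explanation is fed in as the upstream gradient of the
    # generated images, so G receives feature-level feedback from D. The minus
    # sign makes the descent step move the images toward a higher D score.
    fake.backward(gradient=-explanation)
    g_optimizer.step()
    return fake.detach()
```

In the paper, the explanation matrix is folded into a modified loss function; the gradient-passing trick above is only one possible way to realize "propagating a matrix of XAI explanations instead of a single error value" and is meant purely to make the described mechanism concrete.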