Improving art style classification with synthetic images from self-attention generative adversarial network.

Bibliographic details
Year of defense: 2022
Main author: Pérez, Sarah Pires
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: eng
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/3/3141/tde-25112022-091647/
Abstract: Art is the means by which humanity has always expressed itself, as art offers a record of humanity's feelings, its ways of life, and its conception of the world. Although we are fortunate to have a vast store of cultural wealth from past generations, the sheer number of artworks has become an obstacle to their categorization into styles. This research explores a strategy that maximizes the performance of style classifiers applied to works of art. Automatically classifying artworks into styles is quite challenging due to the relative lack of labeled data and the complexity of the class definitions. This complexity is manifested by the fact that some image augmentation techniques not only fail to improve performance but may even degrade it. We propose to resort to Generative Adversarial Networks (GANs). Originally, GANs set out to create images capable of deceiving the human eye, making us believe that generated images are real images. The proposal here is not to create art, but rather to use this architecture as a data augmentation tool. To assess the impact of using GANs for image augmentation, we studied performance improvements over EfficientNet B0, a state-of-the-art image classifier. In addition, we present a Class-by-Class Performance Analysis that can be useful in the study of other high-complexity image datasets.
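The abstract mentions a Class-by-Class Performance Analysis but does not specify its exact form; a minimal sketch of the basic ingredient, per-class accuracy, might look like the following. The style names and the `per_class_accuracy` function are illustrative assumptions, not taken from the thesis.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Compute accuracy separately for each style class.

    `y_true` and `y_pred` are parallel sequences of class labels
    (e.g., style names); this helper is a hypothetical sketch of
    the kind of breakdown a class-by-class analysis relies on.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    # Per-class accuracy exposes which styles the model confuses,
    # which an overall accuracy figure would hide.
    return {c: correct[c] / total[c] for c in total}

# Example: a classifier that confuses two visually similar styles.
truth = ["Baroque", "Baroque", "Cubism", "Cubism", "Rococo", "Rococo"]
preds = ["Baroque", "Rococo", "Cubism", "Cubism", "Rococo", "Baroque"]
print(per_class_accuracy(truth, preds))
# {'Baroque': 0.5, 'Cubism': 1.0, 'Rococo': 0.5}
```

Such a breakdown makes it possible to see whether GAN-based augmentation helps the under-represented or hard-to-separate styles specifically, rather than just the aggregate score.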