Deep neural network model with gated recurrent units for image captioning by references

Bibliographic details
Year of defense: 2020
Main author: Nogueira, Tiago do Carmo
Advisor: Cruz Júnior, Gélson da
Defense committee: Cruz Júnior, Gélson da; Ferreira, Deller James; Santos, Gilberto Antonio Marcon dos; Vinhal, Cássio Dener Noronha; Lemos, Rodrigo Pinto
Document type: Thesis
Access type: Open access
Language: Portuguese
Defending institution: Universidade Federal de Goiás
Graduate program: Programa de Pós-graduação em Engenharia Elétrica e da Computação (EMC)
Department: Escola de Engenharia Elétrica, Mecânica e de Computação - EMC (RG)
Country: Brazil
Access link: http://repositorio.bc.ufg.br/tede/handle/tede/10884
Abstract: Describing images in natural language has become a challenging task in computer vision. Image captioning can automatically create descriptions through deep learning architectures that use convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Image captioning has several applications, such as describing objects in a scene to help blind people navigate unknown environments, and describing medical images for early diagnosis of diseases. However, architectures built on traditional RNNs, besides suffering from exploding and vanishing gradients, can generate non-descriptive sentences. To overcome these difficulties, this study proposes a model based on the encoder-decoder structure, using CNNs to extract image features and multimodal gated recurrent units (GRUs) to generate the descriptions. Part-of-speech (PoS) tags and a likelihood function are used to generate the weights of the GRU. The proposed method performs knowledge transfer in the validation phase using the k-nearest neighbors (kNN) technique. Experimental results on the Flickr30k and MS-COCO datasets demonstrate that the proposed PoS-based model is statistically superior to the leading models, providing more descriptive captions that are closer to the expected captions, both among the predicted and the kNN-selected captions. These results indicate an automatic improvement in image descriptions, benefiting several applications, such as medical image captioning for early diagnosis of diseases.
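
For readers who want a concrete picture of the pipeline the abstract describes, the sketch below shows a CNN encoder feeding a GRU decoder, plus a simplified kNN caption-selection step. It is a minimal illustration in PyTorch under assumed names and dimensions, not the thesis's actual implementation; in particular, the PoS-based weighting of the GRU is only indicated in a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class CNNEncoder(nn.Module):
    """Encodes an image into a fixed-length feature vector with a pretrained CNN."""

    def __init__(self, embed_dim: int):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head
        self.project = nn.Linear(backbone.fc.in_features, embed_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                      # keep the pretrained backbone frozen
            feats = self.cnn(images).flatten(1)    # (B, 2048)
        return self.project(feats)                 # (B, embed_dim)


class GRUDecoder(nn.Module):
    """Generates caption tokens, conditioned on the image embedding."""

    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_emb: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # The image embedding is prepended as the first step of the sequence.
        # The thesis additionally reweights the GRU using PoS tags and a
        # likelihood function; that mechanism is omitted from this sketch.
        tokens = self.embed(captions)                                # (B, T, embed_dim)
        inputs = torch.cat([image_emb.unsqueeze(1), tokens], dim=1)  # (B, T+1, embed_dim)
        hidden, _ = self.gru(inputs)                                 # (B, T+1, hidden_dim)
        return self.out(hidden)                                      # word logits per step


def knn_select(query_feat: torch.Tensor, train_feats: torch.Tensor,
               train_captions: list[str], k: int = 5) -> list[str]:
    """Returns the captions of the k most similar training images -- a
    simplified stand-in for the kNN transfer step used in validation."""
    sims = F.cosine_similarity(query_feat.unsqueeze(0), train_feats)  # (N,)
    return [train_captions[i] for i in sims.topk(k).indices]
```

A typical training step would encode a batch of images, run the decoder with teacher forcing against the ground-truth captions, and apply cross-entropy over the word logits; at validation time, a function like knn_select would retrieve reference captions of the nearest training images.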