Multimodal representation for information classification

Bibliographic details
Year of defense: 2018
Main author: Ito, Fernando Tadao
Advisor: Caseli, Helena de Medeiros
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: Portuguese (por)
Institution: Universidade Federal de São Carlos, Câmpus São Carlos
Graduate program: Programa de Pós-Graduação em Ciência da Computação - PPGCC
Department: Not informed by the institution
Country: Not informed by the institution
Access link: https://repositorio.ufscar.br/handle/20.500.14289/10365
Abstract: The most basic meaning of "multimodality" is the use of multiple means of information to compose an "artifact", a man-made object that expresses a concept. In everyday life, most media outlets use multimedia to convey information: news stories combine video, narration, and ancillary text; theater plays tell a story through actors, gestures, and songs; electronic games take the player's physical gestures as actions and respond with visual or musical cues. To interpret such artifacts, we must extract information from multiple media and combine it mathematically. Feature extraction is performed by mathematical models that receive raw data (texts, images, audio signals) and turn it into numerical vectors in which the distance between instances denotes their relatedness: data that are close together encode similar meanings. To create a multimodal semantic space, we use models that "fuse" information from multiple data types. In this work, we investigate the interaction between different modes of information representation in the formation of multimodal representations, presenting some of the most widely used algorithms for the vector representation of texts and images and ways to merge them. To measure the relative performance of each combination of methods, we use classification and similarity tasks on datasets of paired images and texts. We found that, on our datasets, different unimodal representation methods can lead to vastly different results. We also note that a representation's performance on the classification task does not determine whether it encodes the concept of an object: the same representation can yield different results on similarity tasks.
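
The abstract describes turning raw data into feature vectors whose distances encode semantic similarity, and fusing the resulting unimodal vectors into a single multimodal space. As a minimal illustrative sketch only, not the pipeline actually evaluated in the dissertation, the Python snippet below fuses hypothetical text and image feature vectors by simple concatenation (a common early-fusion baseline) and compares two fused instances with cosine similarity; the function names, vector dimensions, and random stand-in vectors are all assumptions.

import numpy as np

def cosine_similarity(a, b):
    # Closeness in the embedding space stands for semantic similarity,
    # as described in the abstract.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_concat(text_vec, image_vec):
    # Early fusion by concatenation: the simplest way to merge unimodal
    # vectors into one multimodal representation. Normalizing each modality
    # first keeps one modality from dominating the joint space.
    t = text_vec / np.linalg.norm(text_vec)
    v = image_vec / np.linalg.norm(image_vec)
    return np.concatenate([t, v])

# Hypothetical stand-ins for real unimodal features (e.g., word-embedding
# text vectors and CNN image features); the dimensions are arbitrary.
rng = np.random.default_rng(42)
text_a, image_a = rng.normal(size=300), rng.normal(size=512)
text_b, image_b = rng.normal(size=300), rng.normal(size=512)

fused_a = fuse_concat(text_a, image_a)
fused_b = fuse_concat(text_b, image_b)
print("cosine similarity of fused instances:", cosine_similarity(fused_a, fused_b))

Concatenation here merely illustrates the general idea of placing both modalities in one vector space; the dissertation itself compares several representation and fusion methods against classification and similarity tasks.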