Cross Domain Visual Search with Feature Learning using Multi-stream Transformer-based Architectures

Bibliographic details
Year of defense: 2023
Main author: Ribeiro, Leo Sampaio Ferraz
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: eng
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/55/55134/tde-02062023-161527/
Abstract: Within the general field of Computer Vision, Cross-domain Visual Search is one of the most useful and most studied tasks, and yet it is rarely seen in our daily lives. In this thesis we explore Cross-domain Visual Search using the specific and mature Sketch-based Image Retrieval (SBIR) task as a canvas. We draw four distinct hypotheses on how to advance the field and demonstrate their validity, one with each contribution. First we present Sketchformer, a new architecture for sketch representation learning that forgoes traditional Convolutional networks in favour of the recent Transformer design. Then we explore two alternative formulations of the SBIR task, each approaching the scale and generalisation necessary for real-world deployment. For both tasks we introduce state-of-the-art models: our Scene Designer combines traditional multi-stream networks with a Graph Neural Network to learn representations for sketched scenes with multiple objects; our Sketch-an-Anchor shows that it is possible to harvest general knowledge from pre-trained models for the Zero-shot SBIR task. These contributions have a direct impact on the literature of sketch-based tasks and a cascading impact on Image Understanding and Cross-domain representations at large.
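To make the multi-stream idea in the abstract concrete, below is a minimal illustrative sketch (not the thesis code) of a two-stream SBIR model: a Transformer stream embeds a sketch given as a stroke sequence, a small CNN stream embeds raster images, and both map into a shared space where retrieval reduces to nearest-neighbour search by cosine similarity. All module names, dimensions, and the toy CNN backbone are assumptions made for this example.

```python
# Illustrative two-stream cross-domain embedding model (PyTorch).
# NOT the thesis implementation; every design choice here is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchStreamTransformer(nn.Module):
    """Encodes a sketch given as a sequence of (dx, dy, pen_state) points."""
    def __init__(self, d_model=256, nhead=8, num_layers=4, embed_dim=128):
        super().__init__()
        self.input_proj = nn.Linear(3, d_model)  # lift stroke points to d_model
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, embed_dim)

    def forward(self, strokes):                  # strokes: (B, T, 3)
        h = self.encoder(self.input_proj(strokes))
        # Mean-pool over the stroke sequence, then L2-normalise the embedding.
        return F.normalize(self.head(h.mean(dim=1)), dim=-1)

class ImageStreamCNN(nn.Module):
    """Encodes a raster image with a tiny CNN (a stand-in for any backbone)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, images):                   # images: (B, 3, H, W)
        return F.normalize(self.head(self.backbone(images)), dim=-1)

# Retrieval: rank gallery images by cosine similarity to the query sketch.
sketch_enc, image_enc = SketchStreamTransformer(), ImageStreamCNN()
query = sketch_enc(torch.randn(1, 50, 3))          # one sketch, 50 points
gallery = image_enc(torch.randn(8, 3, 224, 224))   # eight candidate images
ranking = (query @ gallery.T).argsort(descending=True)
```

In a real system the two streams would be trained jointly (e.g. with a contrastive or triplet objective) so that matching sketch-image pairs land close together in the shared space; the snippet only shows the inference-time geometry.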