A new approach to semantic mapping using reusable consolidated visual representations
| Year of defense: | 2023 |
|---|---|
| Main author: | |
| Advisor: | |
| Defense committee: | |
| Document type: | Thesis |
| Access type: | Open access |
| Language: | eng |
| Defending institution: | Universidade Federal de Pernambuco, UFPE, Brasil, Programa de Pós-Graduação em Ciência da Computação |
| Graduate program: | Not informed by the institution |
| Department: | Not informed by the institution |
| Country: | Not informed by the institution |
| Keywords in Portuguese: | |
| Access link: | https://repositorio.ufpe.br/handle/123456789/55203 |
Abstract:

The advancement of robotics may produce a positive impact on several aspects of our society. However, in order for robotic agents to assist humans in a variety of everyday activities, they need to possess representations of their environments that allow spatial and human-centered semantic understanding. Many works in the recent literature use Convolutional Neural Network (CNN) models to recognize semantic properties of images and incorporate the results into traditional metric or topological maps, a procedure known as semantic mapping. The types of semantic properties (e.g., room size, place category, and objects) and their semantic classes (e.g., kitchen and bedroom, for place category) are usually defined in advance and restricted to the planned tasks. Thus, all the visual data acquired and processed during the construction of the maps is lost, and only the recognized semantic properties remain on the maps. In contrast, this research proposes using the visual data acquired during the mapping process to create reusable representations of regions by consolidating deep features extracted from the data. These consolidated representations would allow the recognition of new semantic information in a flexible way and, consequently, the adaptation of the semantics of the maps to the requirements of new tasks without the need for remapping. Such use of reusable consolidated representations for the generation of semantic maps is demonstrated in a topological mapping method that creates consolidated representations of deep visual features extracted from RGB images captured around each topological node. This is done using a process we denote as Topological Consolidation of Features by Moving Averages (TCMA).

Experiments performed with real-world indoor datasets suggested that the proposed method is able to create consolidated representations that fairly preserve the visual features of the original images they consolidated and do not degrade in quality over time. Furthermore, the promising results suggested that the consolidated representations produced are suitable for recognizing different semantic properties, indicating the topological location of images, and adapting previously created maps with new semantic information. The experiments included two different CNNs for deep feature extraction, classifiers trained on large-scale datasets from the literature, and more practical real-time scenarios. Different variations of the method were evaluated, including a derivation of the TCMA process that uses the arithmetic mean of multiple exponential moving averages.
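The TCMA process itself is not detailed in this record, but the core idea it names, consolidating a stream of deep feature vectors for a topological node with moving averages, including the variant that takes the arithmetic mean of multiple exponential moving averages, can be sketched roughly as follows. All function names, the decay values, and the feature dimensionality are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def ema_update(state, features, alpha):
    """One exponential-moving-average step over a consolidated feature vector."""
    if state is None:
        return features.copy()  # first observation initializes the EMA
    return alpha * features + (1.0 - alpha) * state

def consolidate(feature_stream, alphas=(0.1, 0.3, 0.5)):
    """Consolidate a stream of deep feature vectors for one topological node.

    Maintains one EMA per decay rate and returns their arithmetic mean,
    mirroring the 'arithmetic mean of multiple exponential moving averages'
    variant mentioned in the abstract (decay values are illustrative).
    """
    states = [None] * len(alphas)
    for f in feature_stream:
        states = [ema_update(s, f, a) for s, a in zip(states, alphas)]
    return np.mean(states, axis=0)

# Example: consolidate 5 stand-in 512-d "CNN features" captured around a node
rng = np.random.default_rng(0)
features = [rng.standard_normal(512).astype(np.float32) for _ in range(5)]
node_repr = consolidate(features)
print(node_repr.shape)  # → (512,)
```

Because each update touches only fixed-size running averages, such a consolidation keeps the per-node representation constant in memory regardless of how many images are observed, which is consistent with the abstract's claim that the representations do not degrade over time.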