Bibliographic details
Year of defense: 2023
Main author: Xavier, Francisco Geilson de Lima
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Dissertation
Access type: Open access
Language: Portuguese
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://repositorio.ufc.br/handle/riufc/74798
Abstract:
Depth detection is essential for many robotic tasks, including mapping, localization, and obstacle avoidance. On small robotic platforms, constraints on weight, volume, and power consumption motivate depth estimation from a monocular camera rather than dedicated depth sensors. This dissertation proposes an approach for the localization of autonomous mobile robots in an indoor environment using monocular vision aided by depth maps estimated from a single RGB input image, applying the concept of Transfer Learning with Convolutional Neural Networks (CNNs). The performance of the classifiers in estimating the location was observed and compared using a single configuration in which RGB-D images are transformed into a mosaic image. The images were combined with the descriptive power of the CNNs in two scenarios: depth captured by the Kinect sensor and depth estimation generated by AdaBins. The results show that the proposed approach achieved 99.8% in accuracy and F1-score. Based on these results, the performances were analyzed with respect to feature extraction and training time, achieving 7.929 ms and 0.022 s, respectively, for the best combination of architecture and classifier in the proposed approach.
Keywords:
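For illustration only, below is a minimal sketch of the kind of pipeline the abstract describes: an RGB image and its depth map (from a Kinect sensor or a monocular estimator such as AdaBins) are tiled into a mosaic, a pretrained CNN is used as a frozen feature extractor (transfer learning), and a conventional classifier predicts the room label. The backbone (ResNet-18), the k-NN classifier, and all helper names here are assumptions, not the dissertation's actual architectures or code.

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.neighbors import KNeighborsClassifier

def make_mosaic(rgb, depth):
    """Tile an RGB image and its depth map side by side into one mosaic.

    rgb:   HxWx3 uint8 array
    depth: HxW float array (Kinect reading or a monocular depth prediction)
    """
    # Normalize depth to 0-255 and replicate it to three channels.
    depth_u8 = (255 * (depth - depth.min()) / (np.ptp(depth) + 1e-8)).astype(np.uint8)
    depth_rgb = np.repeat(depth_u8[..., None], 3, axis=2)
    return np.concatenate([rgb, depth_rgb], axis=1)  # horizontal mosaic

# Pretrained CNN used as a fixed feature extractor (transfer learning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(mosaic):
    """Run the mosaic through the frozen CNN and return a 512-d feature vector."""
    x = preprocess(mosaic).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()

def train_localizer(samples):
    """Fit a simple classifier on (rgb, depth, room_label) samples (hypothetical format)."""
    feats = [extract_features(make_mosaic(rgb, depth)) for rgb, depth, _ in samples]
    labels = [label for _, _, label in samples]
    return KNeighborsClassifier(n_neighbors=3).fit(feats, labels)

At inference time, the same make_mosaic/extract_features steps would be applied to a new RGB-depth pair and the fitted classifier's predict() would return the estimated location; swapping the Kinect depth for an AdaBins prediction changes only the depth input, not the rest of the pipeline.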