Image-based mapping and localization using VG-RAM weightless neural networks
| Field | Value |
| --- | --- |
| Year of defense | 2014 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Master's thesis (Dissertação) |
| Access type | Open access |
| Language | eng |
| Defense institution | Universidade Federal do Espírito Santo (BR), Centro Tecnológico, Programa de Pós-Graduação em Informática, Mestrado em Informática |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords in Portuguese | |
| Access link | http://repositorio.ufes.br/handle/10/4269 |
Abstract: Mapping and localization are fundamental problems in autonomous robotics. Autonomous robots need to know where they are in their operational area in order to navigate through it and to perform activities of interest. In this work, we present an image-based mapping and localization system that employs Virtual Generalizing Random Access Memory Weightless Neural Networks (VG-RAM WNN) for localizing an autonomous car. In our system, a VG-RAM WNN learns world positions associated with images and three-dimensional landmarks captured along a trajectory, in order to build a map of the environment. During localization, the system uses this previously acquired knowledge together with an Extended Kalman Filter (EKF), which integrates sensor data over time through consecutive steps of state prediction and correction. The state prediction step is computed by means of our robot's motion model, which uses velocity and steering angle information estimated from images by visual odometry. The state correction step is performed by combining the world positions recalled by the VG-RAM WNN with the matching of landmarks previously stored in the robot's map. Our system efficiently solves the (i) mapping, (ii) global localization, and (iii) position tracking problems using only camera images. We performed experiments with our system using real-world datasets, which were systematically acquired during laps around the Universidade Federal do Espírito Santo (UFES) main campus (a 3.57 km long circuit). Our experimental results show that the system is able to learn large maps (several kilometres in length) of real-world environments and to perform global localization and position tracking with a mean pose precision of about 0.2 m relative to the Monte Carlo Localization (MCL) approach employed in our autonomous vehicle.
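As a rough illustration of the mechanism the abstract describes, the sketch below shows a toy VG-RAM WNN localizer in Python: each neuron samples a random subset of bits from a binarized camera image, stores those bits together with the associated world position during training, and at recall returns the position of its closest stored pattern by Hamming distance; the network output is the consensus of the neurons. All names and parameters here (`VGRAMLocalizer`, 32x32 binarization, 32 neurons with 128 synapses each, mean-of-votes consensus) are illustrative assumptions, not the thesis implementation, which additionally uses 3D landmarks and EKF-based position tracking.

```python
import numpy as np

class VGRAMNeuron:
    """One VG-RAM neuron: memorizes (sampled input bits, output) pairs and
    answers with the output of the nearest stored pattern (Hamming distance)."""

    def __init__(self, n_input_bits, n_synapses, rng):
        # Each synapse reads one randomly chosen bit of the binarized input image.
        self.synapses = rng.choice(n_input_bits, size=n_synapses, replace=False)
        self.patterns = []  # learned bit vectors (the neuron's RAM-like memory)
        self.outputs = []   # associated outputs (here: 2D world positions)

    def train(self, input_bits, output):
        self.patterns.append(input_bits[self.synapses].copy())
        self.outputs.append(np.asarray(output, dtype=float))

    def recall(self, input_bits):
        query = input_bits[self.synapses]
        dists = [np.count_nonzero(query != p) for p in self.patterns]
        return self.outputs[int(np.argmin(dists))]


class VGRAMLocalizer:
    """Toy image-based localizer: learns (image, pose) pairs along a trajectory
    and recalls the pose whose stored patterns best match a query image."""

    def __init__(self, image_shape=(32, 32), n_neurons=32, n_synapses=128, seed=0):
        rng = np.random.default_rng(seed)
        n_bits = int(np.prod(image_shape))
        self.neurons = [VGRAMNeuron(n_bits, n_synapses, rng) for _ in range(n_neurons)]

    @staticmethod
    def binarize(image):
        # Crude binarization: threshold at the mean intensity and flatten.
        flat = np.asarray(image, dtype=float).ravel()
        return (flat > flat.mean()).astype(np.uint8)

    def learn(self, image, pose_xy):
        bits = self.binarize(image)
        for neuron in self.neurons:
            neuron.train(bits, pose_xy)

    def localize(self, image):
        bits = self.binarize(image)
        votes = np.stack([neuron.recall(bits) for neuron in self.neurons])
        return votes.mean(axis=0)  # consensus of the neurons' recalled poses


# Usage sketch: learn a few grayscale 32x32 frames tagged with (x, y) poses,
# then query the pose of a new frame.
# localizer = VGRAMLocalizer()
# localizer.learn(frame_at_10m, (10.0, 0.0))
# localizer.learn(frame_at_20m, (20.0, 0.0))
# estimate = localizer.localize(query_frame)
```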