Visual and inertial data fusion for globally consistent point cloud registration
Year of defense: | 2013 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertation |
Access type: | Open access |
Language: | por (Portuguese) |
Defense institution: | Universidade Federal de Minas Gerais (UFMG) |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Access link: | http://hdl.handle.net/1843/ESSA-9D6GLH |
Abstract: | This work addresses the problem of mapping static 3D environments using an RGB-D sensor, which captures image and depth, and a MARG sensor, composed of inertial sensors and magnetometers. The problem is relevant to robotics, since its solution allows mobile robots to autonomously navigate and map unknown environments. It also has an impact on several applications that build 3D models from depth-sensor scans, among them the digital replication of sculptures and art objects, the modeling of characters for games and movies, and the reconstruction of CAD models of old buildings. We address the problem by performing rigid registration of point clouds sequentially captured by the depth sensor, using the data provided by the inertial sensor as a guide both during the coarse alignment stage and during the global optimization of the estimated map. During feature-based point cloud alignment, the rotation estimated by the MARG sensor is used as the initial estimate of the attitude between point clouds, so that keypoints are matched considering only the three translational degrees of freedom. The attitude given by the MARG is also used to reduce the search space for loop closures. The fusion of RGB-D and inertial data is still largely unexplored in the literature. A similar published work uses inertial data only to improve the attitude estimate during pairwise alignment, in an ad hoc fashion, potentially discarding it under specific conditions and neglecting the global optimization stage. Since we use a MARG sensor, we assume the sensor drift to be negligible for the purposes of our application, which allows us to always use its data, especially during the global optimization stage. In our experiments, we mapped the walls of a rectangular room measuring 9.84 m × 7.13 m and compared the results with a map of the same scene captured by a Zebedee sensor, the state of the art in laser-based 3D mapping. We also compared the proposed algorithm against the RGB-D SLAM method, which, unlike ours, was not able to detect the loop closure region. |
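The two uses of the MARG attitude described in the abstract (fixing the rotation so that pairwise alignment reduces to three translational degrees of freedom, and pruning the loop-closure search) can be sketched as below. This is a minimal illustration under assumed inputs, not the dissertation's implementation: the names `R_marg`, `translation_from_matches`, and `is_loop_candidate`, the matched keypoint arrays, the 1-point RANSAC parameters, and the attitude-difference gating criterion are all hypothetical choices made for the sketch.

```python
import numpy as np

def translation_from_matches(src_pts, dst_pts, R_marg,
                             iters=200, inlier_thresh=0.05, rng=None):
    """Coarse alignment with the rotation fixed by the MARG attitude,
    leaving only three translational degrees of freedom to estimate.

    src_pts, dst_pts: (N, 3) arrays of matched keypoint positions.
    R_marg: (3, 3) relative rotation taken from the MARG sensor.
    Returns t such that dst ~= R_marg @ src + t.
    """
    rng = np.random.default_rng() if rng is None else rng
    rotated = src_pts @ R_marg.T        # apply the known attitude first
    residuals = dst_pts - rotated       # each match then votes for one t
    best_t, best_count = residuals.mean(axis=0), 0
    for _ in range(iters):              # 1-point RANSAC over the matches
        t = residuals[rng.integers(len(residuals))]
        inliers = np.linalg.norm(residuals - t, axis=1) < inlier_thresh
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_t = residuals[inliers].mean(axis=0)  # refine on inliers
    return best_t

def is_loop_candidate(R_i, R_j, max_angle_deg=30.0):
    """Prune the loop-closure search: compare two frames only if their
    MARG attitudes differ by less than a (hypothetical) angular threshold."""
    cos_angle = (np.trace(R_i.T @ R_j) - 1.0) / 2.0
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle < max_angle_deg
```

Fixing the rotation makes a single matched pair sufficient to hypothesize a full transform, which is why a 1-point RANSAC loop suffices in this sketch, whereas a full 6-DOF rigid estimate would need at least three non-collinear pairs per sample.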