Localization and motion planning of mobile robots in indoor environments using Markov decision processes
Year of defense: | 2020 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertação (master's thesis) |
Access type: | Open access |
Language: | por |
Defending institution: | Universidade Federal de Minas Gerais, Brasil — ENG, Departamento de Engenharia Elétrica, Programa de Pós-Graduação em Engenharia Elétrica, UFMG |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords (Portuguese): | |
Access link: | http://hdl.handle.net/1843/34512 |
Abstract: | Deterministic motion planners perform well in simulated environments, where sensors and actuators are perfect. However, these assumptions are restrictive, and such planners perform poorly when applied to real robotic systems (or to more realistic simulators), which are inherently fraught with uncertainty. In most real robotic systems, states cannot be directly observed, and the measurements received by the robot are noisy projections of the true state. The actions performed by the robot are also uncertain, since its actuators make errors when executing the desired control commands. The robot must therefore use a new class of planners that account for system uncertainty when making decisions. In the present work, the Partially Observable Markov Decision Process (POMDP) is presented as an approach to problems immersed in uncertainty, selecting optimal actions to accomplish a given task. POMDP is a probabilistic framework that assumes: that robot states cannot be measured directly, but are inferred through indirect observations; that decisions have uncertain outcomes; and that the result of an action in a state depends only on the action and the current state of the process (Markov property). In a POMDP, each action yields an observation that is probabilistically related to the states of the system. Instead of a single current state, a POMDP maintains a probability distribution over states, called the belief. To estimate the belief, this work used the probabilistic structure of the Hidden Markov Model (HMM). The methodology was applied to a simulated system for localizing and controlling a robot moving through a warehouse used for product storage, as well as for navigating a real robot in a living space. Simulations and experiments show the robustness and efficiency of the methods used. |
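The abstract's central idea — maintaining a belief (a distribution over states) and updating it from uncertain actions and noisy observations via an HMM-style filter — can be sketched as a discrete Bayes filter. The states, motion model `T`, and observation model `Z` below are hypothetical illustrations, not taken from the dissertation:

```python
import numpy as np

# Hypothetical 3-state world for illustration only.
states = ["room_A", "hallway", "room_B"]

# Motion model T[a][s, s']: probability of reaching s' when
# taking action a in state s (the actuators are imperfect).
T = {
    "forward": np.array([
        [0.1, 0.9, 0.0],
        [0.0, 0.1, 0.9],
        [0.0, 0.0, 1.0],
    ])
}

# Observation model Z[o][s']: probability of sensing o in state s'
# (the HMM emission probabilities; sensors are noisy).
Z = {
    "door": np.array([0.7, 0.1, 0.7]),
    "wall": np.array([0.3, 0.9, 0.3]),
}

def update_belief(belief, action, observation):
    """One Bayes-filter step: predict with the motion model,
    correct with the observation likelihood, then normalize."""
    predicted = belief @ T[action]          # prediction step
    corrected = Z[observation] * predicted  # correction step
    return corrected / corrected.sum()      # normalization

b0 = np.array([1/3, 1/3, 1/3])             # uniform initial belief
b1 = update_belief(b0, "forward", "wall")  # belief after one step
```

A POMDP policy would then select the next action as a function of this belief vector rather than of any single assumed state.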