Reinforcement learning applied to combat in real-time strategy electronic games

Bibliographic details
Year of defense: 2014
Main author: Botelho Neto, Gutenberg Pessoa
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (Dissertação)
Access type: Open access
Language: Portuguese (por)
Defending institution: Universidade Federal da Paraíba (UFPB), BR, Informática, Programa de Pós-Graduação em Informática
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://repositorio.ufpb.br/jspui/handle/tede/6128
Abstract: Electronic games, and real-time strategy (RTS) games in particular, are increasingly seen as viable and important fields for artificial intelligence research because of characteristics they commonly share, such as complex, usually dynamic, multi-agent environments. In commercial RTS games, the computer's behavior is mostly built with simple ad hoc, static techniques that require actions to be defined manually and leave the agent unable to adapt to the various situations it may encounter. Besides being lengthy and error-prone, this approach makes the game relatively predictable after some time, allowing the human player to eventually discover the strategy used by the computer and develop an optimal way of countering it. Using machine learning techniques such as reinforcement learning is one way to avoid this predictability: the computer can evaluate the situations that occur during games, learn from them, and improve its behavior over time, choosing the best action autonomously and dynamically when needed. This work proposes a model for applying SARSA, a reinforcement learning technique, to combat situations in RTS games, with the goal of enabling the computer to perform better in this area, which is fundamental to achieving victory in an RTS game. Several tests were run across varied game situations; facing the game's default AI opponent, the agent using the proposed model improved its performance in all of them, building knowledge about the best actions to choose in the various possible game states and using this knowledge efficiently to obtain better results in later games.
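
For reference, below is a minimal sketch of the tabular SARSA update loop the abstract refers to. The environment interface (reset/step), the combat action names, and all parameter values are hypothetical stand-ins for illustration; the thesis defines its own modeling of RTS combat states, actions, and rewards.

    import random
    from collections import defaultdict

    # Hypothetical combat actions; the thesis's actual action set differs.
    ACTIONS = ["attack_weakest", "attack_nearest", "retreat"]

    def sarsa(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular SARSA with epsilon-greedy exploration (illustrative sketch)."""
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return

        def choose(state):
            # Epsilon-greedy: explore with probability epsilon, else exploit.
            if random.random() < epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        for _ in range(episodes):
            state = env.reset()    # hypothetical: returns a hashable combat state
            action = choose(state)
            done = False
            while not done:
                next_state, reward, done = env.step(action)  # hypothetical interface
                next_action = choose(next_state)
                # On-policy SARSA update:
                # Q(s,a) <- Q(s,a) + alpha * [r + gamma * Q(s',a') - Q(s,a)]
                target = reward + (0.0 if done else gamma * Q[(next_state, next_action)])
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state, action = next_state, next_action
        return Q

SARSA is on-policy: the update uses the action actually selected in the next state, so the learned Q-values reflect the agent's own exploring behavior, which is consistent with the abstract's description of an agent that improves its combat performance gradually across successive games.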