Deep learning applied to UAV imagery for the detection of Dipteryx alata

Bibliographic details
Year of defense: 2020
Main author: Marcio Santos Araujo
Advisor: Jose Marcato Junior
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: Portuguese (por)
Defending institution: Fundação Universidade Federal de Mato Grosso do Sul
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufms.br/handle/123456789/3834
Abstract: This dissertation investigates a deep learning method for detecting Dipteryx alata, popularly known as Cumbaru, an arboreal species of environmental interest in Mato Grosso do Sul (MS), in RGB images collected by UAV (Unmanned Aerial Vehicle). The work is organized in two chapters. The first chapter presents general considerations on the studied species, its socio-environmental relevance, and the legislation pertinent to environmental licensing and forest inventory, together with a brief overview of deep learning. The second chapter evaluates the RetinaNet object detection method for identifying this species of environmental interest. Images were collected with a UAV at selected sites, and an image bank was built with annotations of the studied species. Initial experiments were carried out on the UFMS campus (Federal University of Mato Grosso do Sul) and in nearby areas, focusing on mapping and monitoring Dipteryx alata; the collection period spanned August 2018 to December 2019. The images were divided into training, validation, and test sets in the proportion of 60%, 20%, and 20%. The investigated approach is based on RetinaNet, which uses bounding-box annotations. A further test was then performed with images collected at a site distant from UFMS to evaluate the generalization capacity of the method; detection accuracy was around 80% in this second test area.
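The 60%/20%/20% training/validation/test division described in the abstract could be sketched as follows. This is an illustrative assumption, not code from the dissertation; the function name, seed, and file names are hypothetical.

```python
import random

def split_dataset(items, seed=42):
    """Shuffle a list of image identifiers and split it into
    train/val/test subsets in the 60/20/20 proportion mentioned
    in the abstract. Seed and name are illustrative, not from
    the dissertation."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.6)  # 60% for training
    n_val = int(n * 0.2)    # 20% for validation
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remaining ~20% for testing
    return train, val, test

# Hypothetical image bank of 100 annotated UAV images
images = [f"img_{i:03d}.jpg" for i in range(100)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 60 20 20
```

Fixing the random seed makes the split reproducible, which matters when comparing detection results across training runs.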