Semi-automatic construction of subsea inspection datasets using deep convolutional neural networks
Year of defense: | 2020 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertation |
Access type: | Open access |
Language: | eng |
Defending institution: | Universidade Federal do Rio de Janeiro, Brazil, Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia, Programa de Pós-Graduação em Engenharia Elétrica (UFRJ) |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Access link: | http://hdl.handle.net/11422/23209 |
Abstract: | Undersea pipeline inspection requires specialists to analyze many hours of video searching for relevant events, a time-consuming and expensive task. Deep neural models for image classification can accelerate event discovery in videos, replacing or reducing the work required from the specialists. Training such models, which have millions of parameters, requires large labeled datasets, built by annotating the events in the videos. When done by human annotators alone, annotation is slow and difficult to scale. This work explores and adapts a method of annotating images that combines human effort with deep neural networks in a sequential, iterative manner. The method is used to annotate 146 videos of undersea inspection, building a dataset of 457 thousand images for a hierarchical classification task with three levels. This dataset is compared to a dataset built using human effort alone, by using both to train and evaluate classifier models. Models trained on the new dataset achieve the best performance in 10 out of 14 tests when compared to models trained on the previous dataset. The method also yields an annotation effort amplification of 45:1 in the best case and 13:1 in the worst, and is estimated to allow the new dataset to be annotated 4.3 times faster than the previous, manual method. |
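The abstract describes a sequential, human-in-the-loop annotation pipeline: a model proposes labels, confident proposals need only quick human verification, and uncertain frames fall back to full manual annotation before the model is retrained. The Python sketch below is a minimal illustration of the general shape of one such round; the predictor stub, the confidence threshold, and all function names are assumptions for illustration, not the dissertation's actual implementation.

```python
# Illustrative sketch only: one round of semi-automatic annotation.
# model_predict, annotate_round, and the 0.95 threshold are hypothetical.
import random

def model_predict(frame):
    """Stand-in for the CNN classifier: returns (label, confidence).
    A real loop would run the currently trained network on the frame."""
    return random.choice(["event", "no_event"]), random.random()

def annotate_round(frames, accept_threshold=0.95):
    """One iteration of the loop: the model proposes labels; confident
    proposals only need quick human verification, while uncertain frames
    are queued for full manual annotation. After the round, the model
    would be retrained on the enlarged labeled set and the loop repeats."""
    verified, manual_queue = [], []
    for frame in frames:
        label, confidence = model_predict(frame)
        if confidence >= accept_threshold:
            verified.append((frame, label))   # cheap path: verify only
        else:
            manual_queue.append(frame)        # expensive path: human labels
    return verified, manual_queue

if __name__ == "__main__":
    frames = [f"frame_{i:06d}" for i in range(10_000)]
    verified, manual_queue = annotate_round(frames)
    print(f"auto-proposed: {len(verified)}, sent to annotators: {len(manual_queue)}")
```

Under this reading, the effort amplification reported in the abstract would be the ratio of frames the model labels (subject only to verification) to frames requiring direct human annotation; the random stub above produces meaningless ratios, whereas a trained model that concentrates its confidence can reach figures like the reported 45:1 and 13:1.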