Uma arquitetura multifluxo baseada em aprendizagem profunda para reconhecimento de sinais em Libras no contexto de saúde (A multi-stream deep-learning architecture for sign recognition in Libras in the health context)
| Field | Value |
|---|---|
| Year of defense | 2020 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Dissertation |
| Access type | Open access |
| Language | Portuguese (por) |
| Defense institution | Universidade Federal da Paraíba (UFPB), Brasil; Programa de Pós-Graduação em Informática |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords in Portuguese | |
| Access link | https://repositorio.ufpb.br/jspui/handle/123456789/21163 |
| Abstract | Deaf people are a considerable part of the world population. However, although many countries adopt their sign language as an official language, there are linguistic barriers to accessing fundamental rights, especially access to health services. This situation has been the focus of some government policies that oblige essential service providers to offer sign language interpreters to assist deaf people. However, this type of solution has high operating costs, especially when serving the entire deaf community in all environments. These setbacks motivate the investigation of methodologies and automated tools to address this type of problem. Thus, in this work, we propose a two-stream model for the recognition of Brazilian Sign Language (Libras). The proposed solution does not use any additional capture sensor or hardware, being based entirely on images or sequences of images (videos). The results show that the best accuracy on the test set was 99.80%, in a scenario where the interpreter in the test set did not appear in the training set. In addition, we created a new Brazilian Sign Language (Libras) dataset containing 5,000 videos of 50 signs in the health context, which may support the development of other solutions and further research. |
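The abstract names a two-stream model over images and video frames but gives no architectural detail. The sketch below is a minimal, hypothetical illustration of the generic two-stream idea (a spatial stream over an RGB frame plus a temporal stream over stacked motion frames, fused late into 50 class scores, matching the 50 signs in the dataset); all class names, layer sizes, and the fusion strategy are assumptions for illustration, not the dissertation's implementation.

```python
# Hypothetical sketch of a generic two-stream video classifier; the
# dissertation's actual architecture, layer sizes, and fusion strategy
# are not specified in the abstract and are assumed here.
import torch
import torch.nn as nn


class TwoStreamSignClassifier(nn.Module):
    """Late-fusion two-stream model: a spatial stream over a single RGB
    frame and a temporal stream over a stack of motion frames."""

    def __init__(self, num_classes: int = 50, motion_channels: int = 20):
        super().__init__()
        # Spatial stream: appearance cues from one RGB frame (3 channels).
        self.spatial = self._make_stream(in_channels=3)
        # Temporal stream: motion cues, e.g. stacked optical-flow fields.
        self.temporal = self._make_stream(in_channels=motion_channels)
        # Late fusion of the two stream embeddings into class scores.
        self.classifier = nn.Linear(2 * 128, num_classes)

    @staticmethod
    def _make_stream(in_channels: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 128),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_frame: torch.Tensor, motion_stack: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.spatial(rgb_frame), self.temporal(motion_stack)], dim=1)
        return self.classifier(fused)


# Usage example with random tensors shaped like a mini-batch of clips.
model = TwoStreamSignClassifier(num_classes=50)
rgb = torch.randn(4, 3, 112, 112)    # one RGB frame per clip
flow = torch.randn(4, 20, 112, 112)  # e.g. 10 stacked (dx, dy) flow fields
logits = model(rgb, flow)            # shape: (4, 50)
```

The late-fusion choice (concatenating per-stream embeddings before a single linear classifier) is only one common way to combine two streams; the signer-independent 99.80% result reported in the abstract refers to the authors' own model and dataset, not to this sketch.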