Bibliographic details
Year of defense: 2018
Main author: RAMOS NETO, Geovane Menezes
Advisor: BRAZ JÚNIOR, Geraldo
Defense committee: BRAZ JÚNIOR, Geraldo
Document type: Master's thesis (Dissertação)
Access type: Open access
Language: Portuguese (por)
Defending institution: Universidade Federal do Maranhão
Graduate program: PROGRAMA DE PÓS-GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO/CCET
Department: DEPARTAMENTO DE INFORMÁTICA/CCET
Country: Brazil
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: https://tedebc.ufma.br/jspui/handle/tede/2361
Abstract:
The need to rely on a visual language makes communication and development difficult for hearing-impaired individuals. This difficulty stems from the small number of people who are fluent in a sign language, which limits the inclusion of the hearing impaired. The current solution for communication between people who do not know sign language and the hearing impaired is the use of human interpreters, an expensive resource because of the professional expertise required. This study presents a methodology that uses computer vision and machine learning techniques to recognize signs of the Argentinian Sign Language. Recognition is performed with a 3D Convolutional Neural Network architecture, built by selecting the parameters that produced the best results among the tests performed. For validation, we use the LSA64 video dataset, which contains 64 signs of the Argentinian Sign Language. The best architecture achieved an average accuracy of 94.22%, which, compared with related works, shows the methodology to be promising for the automatic recognition of sign languages.
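The abstract does not give the exact layer configuration of the 3D CNN; the following is a minimal sketch in Python (Keras) of the general technique it describes: a 3D convolutional classifier over short video clips with 64 output classes, one per LSA64 sign. The clip shape, filter counts, and layer choices are illustrative assumptions, not the author's architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 64               # LSA64 contains 64 signs
CLIP_SHAPE = (16, 64, 64, 3)   # frames, height, width, RGB channels (assumed preprocessing)

def build_3dcnn(input_shape=CLIP_SHAPE, num_classes=NUM_CLASSES):
    """Small 3D CNN for sign-video classification (illustrative layer sizes)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # 3D convolutions learn spatio-temporal features across consecutive frames
        layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),   # downsample space, keep temporal resolution
        layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=(2, 2, 2)),   # downsample space and time
        layers.Conv3D(128, kernel_size=(3, 3, 3), activation="relu", padding="same"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one probability per sign
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_3dcnn()
model.summary()

In a setup like this, each LSA64 video would be sampled to a fixed number of frames and resized before being fed to the network; the dissertation's parameter selection (number of layers, filters, frame count) was done empirically, as noted in the abstract.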