Bibliographic details
Year of defense: 2020
Main author: Mastella, Juliana Obino
Advisor: De Rose, César Augusto Fonticielha
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: Portuguese
Defense institution: Pontifícia Universidade Católica do Rio Grande do Sul
Graduate program: Programa de Pós-Graduação em Ciência da Computação
Department: Escola Politécnica
Country: Brazil
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: http://tede2.pucrs.br/tede2/handle/tede/9908
Abstract: Recent years have witnessed exponential growth in data volume, variability, and velocity. Most of this data is unstructured, which intensifies the challenge of data analysis. In this scenario, the use of Natural Language Processing (NLP) tools for text classification has inspired researchers from several knowledge domains, among which the Legal Sciences stand out. The justice system fundamentally depends on the analysis of huge volumes of text, which makes it an important potential area for applying NLP tools. Choosing an algorithm to solve a specific text classification problem is not a trivial task: the quality and viability of the chosen approach depend on the problem to be solved, the volume and behavior of the data, and on making the best use of the available computational resources so that results are delivered on time. Motivated by the problem of automatically classifying legal texts in the electronic proceedings of a Brazilian State Court, this research proposes a methodology that optimizes the choice of parameters for a legal-document classifier by parallelizing the training of Bi-LSTM Recurrent Neural Networks. In the experiments, 107,010 petitions from a Brazilian State Court, with previously annotated classes, were used to train 216 Recurrent Neural Networks in parallel. At the end of training, the best individual performance was F1 = 0.846. Combining the 4 best models through an ensemble technique resulted in a final model with lower performance than the best individual one (F1 = 0.826). Through the parallel training of models it was possible to reach a result superior to the majority of the tested parameterizations (10% better than the worst parameterization tested and 9.8% better than the average) in approximately 20 times less time than it would take to test all the same possibilities sequentially.
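For illustration, the sketch below shows the general shape of the approach the abstract describes: one Bi-LSTM classifier is built per parameterization, so that the parameterizations can be trained in parallel, and the best resulting models are combined by soft voting. This is a minimal sketch assuming Keras/TensorFlow; the vocabulary size, sequence length, embedding dimension, class count, and parameter grid are illustrative placeholders, not the values actually used in the dissertation.

    # Minimal Bi-LSTM text-classifier sketch (Keras/TensorFlow).
    # All hyperparameter values below are illustrative placeholders.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    VOCAB_SIZE = 20000   # assumed vocabulary size
    MAX_LEN = 300        # assumed maximum petition length (tokens)
    EMBED_DIM = 128      # assumed embedding dimension
    NUM_CLASSES = 10     # assumed number of petition classes

    def build_bilstm(lstm_units: int, dropout: float) -> tf.keras.Model:
        """Build one Bi-LSTM classifier for a given parameterization."""
        model = models.Sequential([
            layers.Input(shape=(MAX_LEN,)),
            layers.Embedding(VOCAB_SIZE, EMBED_DIM),
            layers.Bidirectional(layers.LSTM(lstm_units)),
            layers.Dropout(dropout),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Each (units, dropout) pair defines one parameterization; in the
    # methodology described above, each one can be dispatched to a
    # separate worker/GPU and trained concurrently instead of testing
    # all possibilities sequentially.
    param_grid = [(units, dropout)
                  for units in (64, 128, 256)
                  for dropout in (0.2, 0.3, 0.5)]

    def ensemble_predict(trained_models, x):
        """Soft-voting ensemble: average the class probabilities of the
        top-k trained models (the dissertation combines the 4 best)."""
        probs = np.mean([m.predict(x) for m in trained_models], axis=0)
        return probs.argmax(axis=-1)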