Bibliographic details
Year of defense: 2020
Main author: Bittencourt Júnior, José Adenaldo Santos
Advisor: Soares, Anderson da Silva
Defense committee: Soares, Anderson da Silva; Rosa, Thierson Couto; Nogueira, Rodrigo Frassetto
Document type: Dissertation
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal de Goiás
Graduate program: Programa de Pós-graduação em Ciência da Computação (INF)
Department: Instituto de Informática - INF (RG)
Country: Brazil
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: http://repositorio.bc.ufg.br/tede/handle/tede/10411
Abstract: Writing is one of the most relevant and valued human skills. One of the most traditional ways of evaluating writing is through an essay. Currently, essay evaluation and student guidance are done manually, which makes the process costly, time-consuming, and therefore not very scalable. Automatic Essay Scoring (AES) is the main alternative to the conventional manual method. Its main characteristic is that essays are scored without human interference. AES systems are widely used in English exams but are seldom used in Portuguese exams. Given the recent advances in deep learning and the ability of such systems to surpass other state-of-the-art models in similar areas, this work proposes the development of deep neural networks for Automatic Essay Scoring (AES) in Portuguese. The first contribution of this work was the investigation and parameterization of architectures for Portuguese texts. The second contribution was the proposition of a new multi-prompt architecture, based on the hypothesis that the features learned by a neural network to evaluate essays for a given prompt can help improve performance when evaluating essays for other prompts. The proposed architecture surpassed two models considered state of the art for AES in English, when applied to Portuguese, by a margin greater than 15% according to the QWK metric, obtaining a QWK close to 0.5 when evaluated on essays from 18 different prompts, which shows that the predicted grades have a reasonable correlation with the grades given by human evaluators.
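The QWK (Quadratic Weighted Kappa) metric cited in the abstract measures agreement between predicted grades and human grades, penalizing disagreements more heavily the further apart the two grades are on the scale. As a minimal sketch (not the dissertation's code), the metric can be computed with scikit-learn's cohen_kappa_score using quadratic weights; the grade vectors below are hypothetical and only illustrate the calculation.

    # Minimal sketch of computing QWK with scikit-learn.
    # The grades below are made-up examples, not the dissertation's data.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical integer grades (e.g., on a 0-5 scale) from human raters and a model.
    human_grades = [4, 3, 5, 2, 4, 1, 3, 5]
    model_grades = [4, 2, 5, 3, 4, 1, 2, 5]

    # weights="quadratic" turns Cohen's kappa into QWK, so disagreements that are
    # far apart on the grade scale are penalized more than near-misses.
    qwk = cohen_kappa_score(human_grades, model_grades, weights="quadratic")
    print(f"QWK = {qwk:.3f}")  # values near 1.0 indicate strong agreement; ~0.5 is moderate

A QWK of 0, as reported in the abstract's comparison, would indicate only chance-level agreement, while the value close to 0.5 obtained by the proposed architecture corresponds to the moderate agreement with human evaluators described above.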