Noisy self-training with data augmentations for offensive comment and hate speech detection tasks

Bibliographic details
Year of defense: 2024
Main author: Leite, João Augusto
Advisor: Silva, Diego Furtado
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: eng
Defense institution: Universidade Federal de São Carlos
Câmpus São Carlos
Graduate program: Programa de Pós-Graduação em Ciência da Computação - PPGCC
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in English:
CNPq knowledge area:
Access link: https://repositorio.ufscar.br/handle/20.500.14289/20264
Abstract: Online social media is rife with offensive and hateful comments, necessitating the development of automated detection systems to manage the vast volume of posts generated every second. Creating high-quality human-labeled datasets for this task is challenging and costly, primarily because non-offensive posts significantly outnumber offensive ones. In contrast, unlabeled data is abundant, more accessible, and cheaper to obtain. This thesis explores the application of self-training methods, which leverage weakly-labeled examples to augment training datasets, in the context of offensive and hate speech detection. The core of this thesis is the paper "Noisy Self-Training with Data Augmentations for Offensive and Hate Speech Detection Tasks", which investigates the efficacy of noisy self-training approaches incorporating data augmentation techniques to enhance prediction consistency and robustness against noisy data and adversarial attacks. Experiments are conducted with both default and noisy self-training, using three different textual data augmentation techniques across five distinct pre-trained BERT architectures of varying sizes. The results indicate that noisy self-training with textual data augmentations, despite its success in similar settings, decreased performance in offensive and hate speech domains compared to the default method. This finding reveals limitations of noisy self-training methods with data augmentations in domains such as offensive speech detection, where certain specific keywords cannot be modified without introducing semantic variations.
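The self-training loop the abstract describes can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the thesis uses pre-trained BERT models and real textual augmentations, whereas here a trivial keyword-based classifier and simple word-dropout noise stand in for them, and all names (`train`, `predict`, `augment`, `noisy_self_train`, the threshold value) are invented for the sketch.

```python
import random

# Noisy self-training sketch:
#  (1) train a teacher on the labeled data,
#  (2) pseudo-label unlabeled texts with the teacher,
#  (3) keep only confident pseudo-labels,
#  (4) train a student on labeled + noised (augmented) pseudo-labeled data.

LABELED = [("you are awful trash", 1), ("have a nice day", 0),
           ("awful hateful trash", 1), ("nice weather today", 0)]
UNLABELED = ["trash comment, awful person", "such a nice community"]

def train(examples):
    """'Train' a toy model: collect the vocabulary of offensive posts."""
    offensive_vocab = set()
    for text, label in examples:
        if label == 1:
            offensive_vocab.update(text.replace(",", "").split())
    return offensive_vocab

def predict(model, text):
    """Return (label, confidence) from the fraction of offensive words."""
    words = text.replace(",", "").split()
    score = sum(w in model for w in words) / max(len(words), 1)
    label = int(score >= 0.5)
    confidence = score if label == 1 else 1.0 - score
    return label, confidence

def augment(text, p_drop=0.3, seed=0):
    """Word-dropout augmentation: the 'noise' injected into student inputs."""
    rng = random.Random(seed)
    kept = [w for w in text.split() if rng.random() > p_drop]
    return " ".join(kept) if kept else text

def noisy_self_train(labeled, unlabeled, threshold=0.5):
    teacher = train(labeled)
    pseudo = []
    for text in unlabeled:
        label, conf = predict(teacher, text)
        if conf >= threshold:                      # keep confident pseudo-labels
            pseudo.append((augment(text), label))  # noise the student's input
    return train(labeled + pseudo)

student = noisy_self_train(LABELED, UNLABELED)
```

The abstract's negative result corresponds to step (4): word-level noise such as dropout or synonym replacement can delete or alter exactly the offensive keywords that carry the label, so the augmented pseudo-labeled text may no longer match its pseudo-label.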