Improving binary classifiers on imbalanced data using large language models

Bibliographic details
Year of defense: 2023
Main author: BARBOSA, José Matheus Lacerda
Advisor: BARBOSA, Luciano de Andrade
Defense committee: Not informed by the institution
Document type: Dissertation (master's thesis)
Access type: Open access
Language: English
Defending institution: Universidade Federal de Pernambuco
Graduate program: Programa de Pós-Graduação em Ciência da Computação
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/53563
Abstract: In real-world classification tasks, imbalanced data frequently hinders the ability of machine learning models to perform accurate binary classification. To address this issue, this study introduces "BALANCE," a novel framework designed to rectify data imbalance in text datasets for binary classification. BALANCE leverages prompt-based learning to efficiently generate synthetic data that mimics the characteristics of the minority class. This is achieved by optimizing the decoding parameters of a specific natural language generation model and tailoring text generation to the minority class. A customized prompt is then employed to generate instances using the fine-tuned language model. We conducted a comprehensive experimental evaluation using three imbalanced real-world text classification datasets. Our findings reveal that BALANCE consistently outperforms existing methods for data creation and imbalance correction in the majority of scenarios. These results underscore the high quality of the generated instances and the potential of BALANCE to significantly enhance the performance of text classification models on imbalanced data.
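The abstract's generation-and-balancing loop (sample minority-class examples into a prompt, have a fine-tuned language model produce a new instance, and repeat until the classes are even) can be sketched roughly as follows. This is a minimal illustration, not the framework's actual API: `build_prompt`, `balance_dataset`, and the `generate_fn` callback are hypothetical names, and `generate_fn` stands in for a sampling call to the fine-tuned model with its optimized decoding parameters.

```python
import random

def build_prompt(minority_examples, n_shots=3):
    """Assemble a few-shot prompt from sampled minority-class texts.

    Hypothetical format: each sampled text on its own "Example:" line,
    with a trailing empty slot for the model to complete.
    """
    shots = random.sample(minority_examples, min(n_shots, len(minority_examples)))
    lines = [f"Example: {text}" for text in shots]
    lines.append("Example:")  # the model completes this line
    return "\n".join(lines)

def balance_dataset(texts, labels, minority_label, generate_fn):
    """Augment (texts, labels) with synthetic minority instances until
    both classes have equal counts.

    `generate_fn(prompt) -> str` is a placeholder for sampling from the
    fine-tuned language model.
    """
    minority = [t for t, y in zip(texts, labels) if y == minority_label]
    majority_count = len(texts) - len(minority)
    new_texts, new_labels = list(texts), list(labels)
    # Keep generating until the minority class matches the majority count.
    while sum(1 for y in new_labels if y == minority_label) < majority_count:
        prompt = build_prompt(minority)
        new_texts.append(generate_fn(prompt))
        new_labels.append(minority_label)
    return new_texts, new_labels
```

In practice the generated texts would also pass through whatever filtering or quality checks the pipeline applies before being added to the training set; this sketch only shows the balancing loop itself.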