Activating λ-fairness: algorithmic non-discrimination in decision trees

Bibliographic details
Defense year: 2022
Main author: Silva, Maria de Lourdes Maia
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (Dissertação)
Access type: Open access
Language: Portuguese (por)
Degree-granting institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/69953
Abstract: With technological advances, many entities use computational models to classify individuals and, on that basis, deny or grant a benefit; for example, a bank classifies whether a person qualifies for a loan. Although Artificial Intelligence (AI) applications are helpful for decision-making, they are not free from discrimination. When an algorithm is trained on historically discriminatory data, or when the database is unbalanced with respect to minority characteristics, the model tends to propagate the bias present in the training data. To classify similar individuals similarly, that is, to give matching labels to people with similar abilities and characteristics for performing a task, fairness constraints are necessary, which in turn can change the classifications and impair accuracy. In this work, we define a metric to measure how fair a model is, together with two properties that mitigate the propagation of discrimination against individuals while handling the trade-off between utility and fairness. A model achieves these properties in a post-processing step. Furthermore, we propose activating the defined properties for the Decision Tree model. The results obtained from applying the proposed fairness properties to Decision Trees reached high levels of both utility and fairness.
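
The abstract does not state the formal definition of λ-fairness, so the following is only a hedged sketch of the two ideas it describes: a consistency-style fairness score ("similar individuals should be classified similarly") and a post-processing pass over a trained decision tree that trades a little accuracy for fairness. All function names, parameters, and thresholds (consistency_score, relabel_leaves, max_acc_loss) are illustrative assumptions, not the dissertation's actual method or API.

# Hypothetical sketch of a consistency-style fairness score and a
# post-processing relabelling pass over a fitted decision tree.
# These are assumptions for illustration, not the dissertation's method.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def consistency_score(model, X, k=5):
    """How often each individual's predicted label agrees with the
    predictions of its k nearest neighbours (1.0 = fully consistent)."""
    pred = model.predict(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # column 0 is the point itself
    return float(np.mean(pred[idx[:, 1:]] == pred[:, None]))

def relabel_leaves(clf, X_val, y_val, k=5, max_acc_loss=0.01):
    """Greedy post-processing (an assumed stand-in for the dissertation's
    properties): flip a binary leaf's majority class whenever that raises
    the consistency score at a cost of at most max_acc_loss accuracy.
    Relies on clf.tree_.value being a writable view of the fitted tree."""
    tree = clf.tree_
    leaves = np.where(tree.children_left == -1)[0]
    base_acc = clf.score(X_val, y_val)
    best_fair = consistency_score(clf, X_val, k)
    for leaf in leaves:
        original = tree.value[leaf].copy()
        tree.value[leaf] = original[:, ::-1]   # swap the two class scores
        acc = clf.score(X_val, y_val)
        fair = consistency_score(clf, X_val, k)
        if fair > best_fair and acc >= base_acc - max_acc_loss:
            best_fair = fair                   # keep the flip
        else:
            tree.value[leaf] = original        # revert the flip
    return clf

# Usage on synthetic binary data:
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.2 * rng.normal(size=400) > 0).astype(int)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
print("before:", clf.score(X, y), consistency_score(clf, X))
relabel_leaves(clf, X, y)
print("after: ", clf.score(X, y), consistency_score(clf, X))

Because the relabelling only edits leaf values of an already-trained tree, it matches the abstract's claim that the properties are achieved in a post-processing step, with the accuracy tolerance making the utility-fairness trade-off explicit.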