Variable weighted fuzzy clustering algorithm for qualitative data

Bibliographic details
Year of defense: 2023
Main author: TEOTONIO, Gabriel Harrison Fidelis
Advisor: SOUZA, Renata Maria Cardoso Rodrigues de
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Embargoed access
Language: eng
Defending institution: Universidade Federal de Pernambuco
Graduate program: Programa de Pós-Graduação em Ciência da Computação
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/53504
Abstract: This work focuses on clustering methods within unsupervised learning, a challenging branch of machine learning in which no response variable is available. Clustering is a technique for finding groups in a dataset such that observations within each group are similar to one another and different from those in other groups. The K-Means method, the best-known and most widely used clustering technique, handles quantitative variables efficiently, as do many other existing clustering methods. However, K-Means cannot be applied to qualitative variables such as gender or education level. To overcome this limitation, the K-Modes method was proposed, which represents clusters by modes instead of means. Partitional clustering algorithms without variable weighting share a further limitation: they assign equal importance to all variables. This is problematic when clustering high-dimensional, sparse data in which the cluster partitions are characterized by only a subset of the variables. To address this issue, subspace clustering techniques and adaptive distances have been proposed, the latter derived from constraints on the sum or the product of the weights that express the importance of the variables. This work proposes a new fuzzy clustering algorithm for qualitative data based on adaptive distances, which shows improved performance compared with conventional methods. Local adaptive distances, which assign a different weight to each variable in each cluster, perform better on datasets with high dispersion and strong overlap between classes. The results extend the capabilities of existing clustering algorithms based on adaptive distances.
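
For illustration only, the sketch below shows one way a fuzzy K-Modes-style algorithm with local (per-cluster) variable weights under a product-to-one constraint could be written, in the spirit of the adaptive-distance family described in the abstract. The function name weighted_fuzzy_kmodes, the fuzzifier value and the specific update rules are assumptions made for this sketch; they are not taken from the dissertation itself.

# Hypothetical sketch: fuzzy clustering of categorical data with per-cluster
# variable weights constrained so that the weights of each cluster multiply to one.
# All names and update rules below are illustrative assumptions, not the
# dissertation's exact method.
import numpy as np

def weighted_fuzzy_kmodes(X, k, m=1.5, n_iter=50, eps=1e-10, seed=0):
    """X: (n, p) array of categorical codes; k: number of clusters."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    modes = X[rng.choice(n, size=k, replace=False)]   # initial cluster prototypes
    weights = np.ones((k, p))                          # local weights, prod_j w[c, j] = 1

    for _ in range(n_iter):
        # 0/1 mismatch per object, cluster and variable: shape (n, k, p)
        mismatch = (X[:, None, :] != modes[None, :, :]).astype(float)
        dist = (mismatch * weights[None, :, :]).sum(axis=2) + eps  # weighted dissimilarity (n, k)

        # fuzzy membership update, standard fuzzy c-means style rule
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (1.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)                                # memberships (n, k)
        Um = U ** m

        # prototype update: per-cluster, per-variable membership-weighted mode
        for c in range(k):
            for j in range(p):
                cats, inv = np.unique(X[:, j], return_inverse=True)
                scores = np.bincount(inv, weights=Um[:, c], minlength=len(cats))
                modes[c, j] = cats[np.argmax(scores)]

        # weight update under the product-to-one constraint:
        # w[c, j] = geometric_mean_j(S[c, :]) / S[c, j], with S the weighted mismatch sums
        mismatch = (X[:, None, :] != modes[None, :, :]).astype(float)
        S = np.einsum('ic,icj->cj', Um, mismatch) + eps            # (k, p)
        weights = np.exp(np.log(S).mean(axis=1, keepdims=True)) / S

    return U, modes, weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.integers(0, 4, size=(200, 6))   # toy categorical data coded as integers
    U, modes, weights = weighted_fuzzy_kmodes(X, k=3)
    print(U.shape, modes.shape, weights.shape)

In this sketch the weights of variables that discriminate a cluster well (small mismatch sums) grow, while the product constraint keeps each cluster's weights normalized, which is the general idea behind local adaptive distances.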