Bibliographic details
Year of defense: 2019
Main author: Silva, Douglas Eder Uno
Advisor: Bittencourt, Roberto Almeida
Defense committee: Not informed by the institution
Document type: Dissertation
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Estadual de Feira de Santana
Graduate program: Mestrado em Computação Aplicada
Department: DEPARTAMENTO DE TECNOLOGIA
Country: Brazil
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: http://tede2.uefs.br:8080/handle/tede/774
Abstract:
Architecture module views of software are made up of modules with distinct functional responsibilities and dependencies between them. Previous work has evaluated module-view architecture recovery techniques in order to better understand their strengths and weaknesses. In this context, different similarity metrics are used to evaluate such techniques, especially those based on clustering algorithms. However, few studies evaluate whether these metrics accurately capture the similarity between two clusterings. Among the similarity metrics in the literature, there are examples both from software engineering and from other fields (e.g., classification). This work evaluates six cluster similarity metrics through intrinsic quality and stability measures and through software architecture models proposed by developers. To do so, we used the dimensions of stability and authoritativeness, in line with what has been discussed in the literature. For authoritativeness, the concentration statistics of the MeCl metric were higher than those of the other similarity metrics; in the absence of architectural models, however, the Purity metric shows better results. Since architecture models are highly relevant to software engineers, we consider the MeCl metric the most appropriate. For stability, all metrics have values close to unity, despite the presence of outliers; here as well, MeCl was considered the best because of its superiority on this dimension. Because it performed best on both dimensions, especially authoritativeness, we adopted the MeCl metric as the basis for comparing clustering algorithms. Using MeCl, we compared four agglomerative clustering algorithms on four software systems. For both authoritativeness and stability, the SL90 algorithm produced higher values in two of the four systems when comparing the data series generated by all the algorithms; in that respect, SL90 was the best agglomerative algorithm. In conclusion, we found empirically that MeCl is the best metric for measuring cluster similarity; among the clustering algorithms, none outperforms the others in all comparisons, although SL90 presented better results in two of the four systems we analyzed.
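To make the comparison concrete, the Purity metric mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the standard purity definition, not the dissertation's own tooling; the module names and cluster ids below are hypothetical. Each recovered cluster is credited with its largest overlap against the authoritative (developer-proposed) decomposition, and the credits are averaged over all entities.

```python
# Sketch of the Purity cluster similarity metric (standard definition;
# module names and cluster ids are hypothetical examples).
from collections import Counter

def purity(recovered, authoritative):
    """recovered: dict mapping recovered cluster id -> set of module names.
    authoritative: dict mapping module name -> authoritative cluster id.
    Returns a value in (0, 1]; 1 means every recovered cluster is pure."""
    total = sum(len(members) for members in recovered.values())
    matched = 0
    for members in recovered.values():
        # Count how many members of this cluster fall into each
        # authoritative cluster, then credit the best single match.
        counts = Counter(authoritative[m] for m in members)
        matched += max(counts.values())
    return matched / total

# Toy example: three modules recovered into two clusters, while the
# developers place all three in a single "front-end" cluster.
recovered = {"c1": {"parser", "lexer"}, "c2": {"ui"}}
authoritative = {"parser": "front-end", "lexer": "front-end", "ui": "front-end"}
print(purity(recovered, authoritative))  # 1.0: each recovered cluster is internally pure
```

Note that the toy example scores a perfect 1.0 even though the authoritative cluster was split in two: purity never penalizes over-fragmentation, which is one reason a metric that also rewards matching cluster structure, such as MeCl in this work, can be preferable when authoritative models are available.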