Evaluation and model selection for unsupervised outlier detection and one-class classification

Bibliographic details
Year of defense: 2019
Main author: Marques, Henrique Oliveira
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: eng
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/55/55134/tde-07012020-105601/
Abstract: Outlier detection (or anomaly detection) plays an important role in discovering patterns in data that can be considered exceptional in some sense. An important distinction is between supervised, semi-supervised, and unsupervised techniques. In this work, we focus on semi-supervised and unsupervised techniques. It has been shown that unsupervised outlier detection techniques can be adapted to be applicable also in the semi-supervised setting. Therefore, we conduct a comparative study between semi-supervised techniques and unsupervised techniques adapted to the semi-supervised context. The main focus of this work, however, is on the unsupervised evaluation of outlier detection. Although there is a large and growing literature that tackles the outlier detection problem, the unsupervised evaluation of outlier detection results is still virtually untouched in the literature, especially in the context of unsupervised detection. So-called internal evaluation, based solely on the data and the assessed solutions themselves, is required if one wants to statistically validate (in absolute terms) or just compare (in relative terms) the solutions provided by different algorithms, or by different parameterizations of a given algorithm, in the absence of labeled data. However, in contrast to cluster analysis, where indexes for internal evaluation and validation of clustering solutions have been conceived and shown to be very useful, in the outlier detection domain this problem has been notably overlooked. Here we discuss this problem and provide solutions for the internal evaluation of outlier detection results. In the scenario of semi-supervised detection, we propose a (relative) internal evaluation measure based on data perturbation and compare it with the main measures in the literature, providing the reader with clear recommendations on the best scenario for the use of each one.
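The abstract does not spell out the proposed perturbation-based measure, so the following is only a toy sketch of the *general* idea of data-perturbation-based internal evaluation, under assumptions of our own: a candidate outlier scorer is preferred if its scores separate the original points from noise-perturbed copies (treated as pseudo-outliers), measured as the fraction of correctly ranked (perturbed, original) pairs. The function name, the Gaussian perturbation, and the distance-to-centroid scorer are all hypothetical illustrations, not the thesis's actual measure.

```python
import numpy as np

def perturbation_score(score_fn, X, noise_scale=1.0, n_rep=10, seed=0):
    """Hypothetical internal index: fraction of (perturbed, original)
    pairs in which the noise-perturbed copy receives a higher outlier
    score than the original point (higher = better separation)."""
    rng = np.random.default_rng(seed)
    s_orig = score_fn(X)  # outlier scores; higher = more outlying
    correct, total = 0, 0
    for _ in range(n_rep):
        # pseudo-outliers: original points plus Gaussian noise
        X_pert = X + rng.normal(scale=noise_scale, size=X.shape)
        s_pert = score_fn(X_pert)
        correct += int(np.sum(s_pert[:, None] > s_orig[None, :]))
        total += s_pert.size * s_orig.size
    return correct / total

# Toy usage: score points by their distance to the data centroid.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
score = lambda Z: np.linalg.norm(Z - X.mean(axis=0), axis=1)
idx = perturbation_score(score, X, noise_scale=3.0)
```

A scorer that assigns higher scores to peripheral points should rank the heavily perturbed copies above most originals, pushing the index well above the 0.5 expected for a random scorer.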
In the scenario of unsupervised detection, the pioneering measure for internal evaluation of binary outlier solutions, proposed by the author of this thesis in his master's work, is extended to the more general scenario of non-binary solutions, i.e., to the evaluation of outlier detection scorings, which is the type of result produced by the most widely used database-oriented algorithms in the literature. We extensively evaluate both measures in several experiments involving different collections of synthetic and real datasets collected from public repositories.
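The extension itself is not described in this abstract; as a hedged sketch of one generic way a binary internal index could be lifted to scorings, the toy code below averages a (hypothetical, stand-in) binary index over the nested top-k solutions that a scoring induces. Both function names and the distance-to-centroid stand-in index are illustrative assumptions, not the thesis's actual construction.

```python
import numpy as np

def binary_index(X, mask):
    """Toy stand-in for an internal index of a *binary* outlier
    solution: mean distance of the flagged points to the data
    centroid (higher = flagged points are more peripheral)."""
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return float(d[mask].mean())

def scoring_index(X, scores, ks=(1, 2, 5, 10)):
    """Lift the binary index to a scoring by averaging it over the
    nested top-k solutions the scoring induces (k most outlying)."""
    order = np.argsort(scores)[::-1]  # most outlying first
    vals = []
    for k in ks:
        mask = np.zeros(len(X), dtype=bool)
        mask[order[:k]] = True
        vals.append(binary_index(X, mask))
    return float(np.mean(vals))

# Toy usage: a distance-based scoring vs. a random scoring.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
good = scoring_index(X, np.linalg.norm(X - X.mean(axis=0), axis=1))
rand_val = scoring_index(X, rng.random(50))
```

Under this stand-in index, a scoring that ranks peripheral points first flags the farthest points at every k, so it receives a higher value than an arbitrary scoring.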