Classificadores de alta interpretabilidade e de alta precisão (High-interpretability, high-accuracy classifiers)
| Field | Value |
|---|---|
| Year of defense | 2013 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Dissertação (Master's thesis) |
| Access type | Open access |
| Language | por (Portuguese) |
| Defense institution | Universidade Federal de Minas Gerais (UFMG) |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords (Portuguese) | |
| Access link | http://hdl.handle.net/1843/ESBF-9GNGG3 |
Abstract: Building a machine learning application typically requires an expert who defines the objectives and the data that have a causal relationship with those objectives, selects the model that best fits the assumptions and the data, conducts experiments, and analyses the quality of the solution. In the analysis phase, a fundamental property of the model is its interpretability. In some application domains, such as medicine or business, interpretability is treated as a differentiating quality of the solution. To build an interpretable model, it is recommended to use few features, following the parsimony principle, which states that, all else being equal, simpler explanations are preferable. Recently, this principle has proven to be well suited to associative classifiers, where the number of rules composing the classifier can be substantially reduced by using condensed representations such as maximal or closed rules. However, the number of remaining rules is still large, and the resulting models are hard to interpret. In this work we propose a more aggressive filtering strategy, which decreases the number of rules within the classifier without hurting its accuracy. Our strategy consists of evaluating each rule under different statistical criteria and keeping only those rules that show a positive balance across all the criteria considered. Specifically, each candidate rule is associated with a point in an n-dimensional scattergram, where each coordinate corresponds to a statistical criterion. Points that are not dominated by any other point in the scattergram compose the Pareto frontier, and correspond to rules that are optimal in the sense that no other rule is better when all the criteria are taken into account. Finally, the rules lying on the Pareto frontier are selected and compose the classifier. A systematic set of experiments involving benchmark data as well as recent data from actual application scenarios, followed by an extensive set of significance tests, reveals that the proposed strategy decreases the number of rules by up to two orders of magnitude and produces classifiers that are more readable, without hurting accuracy.
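To make the selection step concrete, the sketch below shows Pareto-frontier filtering over scored rules. It is not the dissertation's implementation: the `Rule` structure, the choice of confidence and support as the statistical criteria, the assumption that higher values are better for every criterion, and the quadratic dominance check are all illustrative assumptions.

```python
# Minimal sketch of Pareto-frontier rule filtering (criteria and data structures
# are illustrative assumptions, not the dissertation's actual implementation).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Rule:
    """A candidate classification rule scored under several statistical criteria."""
    antecedent: frozenset                 # e.g. frozenset({"age>60", "smoker=yes"})
    consequent: str                       # predicted class label
    scores: Dict[str, float] = field(default_factory=dict)  # criterion -> value (higher = better)


def dominates(a: Rule, b: Rule, criteria: List[str]) -> bool:
    """Pareto dominance: `a` is at least as good as `b` on every criterion
    and strictly better on at least one."""
    at_least_as_good = all(a.scores[c] >= b.scores[c] for c in criteria)
    strictly_better = any(a.scores[c] > b.scores[c] for c in criteria)
    return at_least_as_good and strictly_better


def pareto_frontier(rules: List[Rule], criteria: List[str]) -> List[Rule]:
    """Keep only the rules whose score vectors are not dominated by any other rule."""
    return [
        r for r in rules
        if not any(dominates(other, r, criteria) for other in rules if other is not r)
    ]


if __name__ == "__main__":
    # Toy rule set with two assumed criteria: confidence and support.
    rules = [
        Rule(frozenset({"a"}), "yes", {"confidence": 0.90, "support": 0.10}),
        Rule(frozenset({"b"}), "yes", {"confidence": 0.70, "support": 0.40}),
        Rule(frozenset({"a", "b"}), "yes", {"confidence": 0.60, "support": 0.05}),  # dominated by both
    ]
    for r in pareto_frontier(rules, ["confidence", "support"]):
        print(sorted(r.antecedent), "->", r.consequent, r.scores)
```

On large rule sets the pairwise dominance check above grows quadratically; sorting the rules by one criterion and sweeping over the others is a common way to compute the frontier more efficiently.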