Random Forest interpretability - explaining classification models and multivariate data through logic rules visualizations

Bibliographic details
Year of defense: 2021
Main author: Popolin Neto, Mário
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: English (eng)
Defending institution: Biblioteca Digital de Teses e Dissertações da USP (USP Digital Library of Theses and Dissertations)
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/55/55134/tde-03032022-105725/
Abstract: Classification models have immense potential and a ubiquitous future, given the vast number of prediction tasks across different domains to which such models are applicable. Model interpretability may be just as important as performance, providing global and local explanations that reveal the acquired knowledge and support the auditing of decisions. Beyond their predictive ability, classification models can also be employed as descriptive tools, where interpretability involves explanations of the data itself. Logic rules have been widely used in interpretability solutions, and Decision Trees are well recognized for generating consistent logic rules. The Random Forest approach (an ensemble of Decision Trees) has been broadly adopted for its ability to produce accurate results and handle multivariate datasets. However, the interpretability of Random Forest models faces the challenge of handling a substantial number of logic rules. Building on the visualization of logic rules through a matrix-like visual metaphor, this doctoral thesis proposes Visual Analytics methods for Random Forest interpretability, supporting both model and data explanations and thus covering predictive and descriptive purposes. For model (predictive) explanations, ExMatrix arranges logic rules into global and local visual representations, providing overviews of the model and the reasoning behind its decisions. Global explanations unveil the knowledge the model learned from a class-labeled dataset, whereas local explanations focus on the classification of a particular data instance. For data (descriptive) explanations, VAX processes logic rules to produce visualizations of descriptive rules for automated data insights. Data explanations support the identification and visual interpretation of patterns in multivariate datasets. Any problem expressed as a class-labeled dataset is a potential use case for the proposed methods: ExMatrix was applied in analytical chemistry, and VAX was used on real-world datasets for multivariate data analyses. The main contribution of this doctoral thesis lies in Visual Analytics methods that support Random Forest interpretability for both predictive (model explanation) and descriptive (data explanation) purposes.
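To make the scale of the rule-handling challenge concrete, the sketch below shows how the logic rules encoded in a Random Forest can be enumerated by walking each tree's root-to-leaf decision paths. This is a minimal illustration using scikit-learn, not the thesis's ExMatrix or VAX implementations; the helper `extract_rules` and all variable names are hypothetical, introduced here only for demonstration.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def extract_rules(tree, feature_names, class_names):
    """Walk one decision tree and collect its root-to-leaf logic rules."""
    t = tree.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == -1:  # leaf node: emit one logic rule
            # t.value[node] holds the class distribution at this leaf;
            # its argmax gives the predicted class
            distribution = t.value[node][0]
            predicted = class_names[distribution.argmax()]
            rules.append((tuple(conditions), predicted))
            return
        name = feature_names[t.feature[node]]
        threshold = t.threshold[node]
        walk(t.children_left[node], conditions + [f"{name} <= {threshold:.2f}"])
        walk(t.children_right[node], conditions + [f"{name} > {threshold:.2f}"])

    walk(0, [])
    return rules

data = load_iris()
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(data.data, data.target)

# A Random Forest aggregates the rules of every tree in the ensemble,
# which is why the rule set grows quickly and calls for visual summarization.
all_rules = [rule
             for estimator in forest.estimators_
             for rule in extract_rules(estimator, data.feature_names,
                                       data.target_names)]
print(f"{len(all_rules)} rules extracted from {len(forest.estimators_)} trees")
print("example rule:", " AND ".join(all_rules[0][0]), "->", all_rules[0][1])

Even this small ten-tree forest on a four-feature dataset yields dozens of rules; realistic multivariate datasets produce far more, motivating the matrix-like visual metaphor the thesis builds on.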