Qualitative global sensitivity analysis for probabilistic circuits

Bibliographic details
Year of defense: 2023
Main author: Llerena, Julissa Giuliana Villanueva
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: English
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/45/45134/tde-25092023-112802/
Abstract: A Probabilistic Circuit (PC) is an expressive generative model that encodes a probability distribution through a structure of weighted sums, products, and univariate or multivariate distributions. Subject to some restrictions, PCs support tractable inference for large classes of queries. The most popular examples of PCs are Sum-Product Networks, Probabilistic Sentential Decision Diagrams, and Generative Random Forests. These models have shown competitive performance in several machine learning tasks. Despite the relative success of PCs, several issues can affect the quality of their predictions. In this work, we focus on two relevant issues. (i) PCs with a high number of parameters and scarce data can produce unreliable and overconfident inferences. (ii) Typical approaches treat missing data either by marginalization or heuristically, assuming that the missingness process is ignorable or uninformative; however, data is often missing in a non-ignorable way, which introduces bias into the predictions if not handled properly. To address these issues, we developed two algorithms based on Credal Probabilistic Circuits, which are sets of PCs obtained by simultaneously perturbing all model parameters (with the model structure fixed). Our first algorithm performs a qualitative global sensitivity analysis on the model parameters, measuring the variability of the predictions under perturbations of the model weights. To mitigate the second issue, we propose a procedure to perform tractable predictive inference under non-ignorable missing data. We evaluate our algorithms on challenging tasks such as image completion, multi-label classification, and multi-class classification.
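The abstract's two central ingredients can be illustrated concretely. Below is a minimal sketch (not the thesis implementation, and with all parameter values invented for illustration): a tiny PC with one sum node over two product nodes of Bernoulli leaves, evaluated bottom-up, plus an epsilon-contamination credal set on the sum weights. Because a single query is linear in the weights of one sum node, its extremes over the contaminated set are attained at the vertices w' = (1 - eps)·w + eps·e_k, which gives a simple lower/upper probability interval of the kind a credal PC sensitivity analysis reasons about.

```python
# Hypothetical toy PC: p(x1, x2) = sum_k w_k * Bern(x1; p_k1) * Bern(x2; p_k2).
# All numbers here are made up for illustration.

def leaf(p, x):
    """Bernoulli leaf value; x = None means the variable is marginalized out
    (a normalized leaf integrates to 1, which is what makes marginals tractable)."""
    if x is None:
        return 1.0
    return p if x == 1 else 1.0 - p

def pc_value(weights, leaf_params, x):
    """Bottom-up evaluation: weighted sum over products of univariate leaves."""
    return sum(w * leaf(p1, x[0]) * leaf(p2, x[1])
               for w, (p1, p2) in zip(weights, leaf_params))

def credal_interval(weights, leaf_params, x, eps):
    """Min/max of p(x) over the eps-contamination credal set on the sum weights.
    The query is linear in these weights, so it suffices to check the vertices
    w' = (1 - eps) * w + eps * e_k."""
    vals = []
    for k in range(len(weights)):
        wk = [(1 - eps) * w + (eps if i == k else 0.0)
              for i, w in enumerate(weights)]
        vals.append(pc_value(wk, leaf_params, x))
    return min(vals), max(vals)

weights = [0.6, 0.4]                     # sum-node weights (sum to 1)
leaf_params = [(0.9, 0.8), (0.2, 0.1)]   # Bernoulli parameters per component

p_full = pc_value(weights, leaf_params, (1, 1))      # joint query: 0.44
p_marg = pc_value(weights, leaf_params, (1, None))   # tractable marginal: 0.62
lo, hi = credal_interval(weights, leaf_params, (1, 1), eps=0.1)
```

The width `hi - lo` of the resulting interval is one natural scalar measure of how sensitive the query is to weight perturbations; a wide interval flags an unreliable, potentially overconfident prediction, which is the kind of qualitative signal the thesis's first algorithm extracts.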