Bibliographic details
Year of defense: 2022
Main author: Araujo, Gabriel Gazetta de
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (Dissertação)
Access type: Open access
Language: eng
Defending institution: Biblioteca Digitais de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/55/55134/tde-13102022-112418/
Abstract:
With the rise of deep learning and other complex machine learning algorithms, ever higher performance has been pursued to reach equally high accuracy across a variety of environments and applications. This search for accuracy has led to complex predictive models, known as black boxes, that offer no access to their decision-making processes: they provide little to no explanation of why a particular outcome was produced or what influenced it. These drawbacks are especially significant in sensitive scenarios, such as legal, social, medical, or financial applications, where a misclassified outcome, or even an outcome classified for the wrong reason, can have a tremendous impact. Motivated by this concern, interpretability techniques have emerged that use a variety of methods to explain the outcome of a black-box model or the reasoning behind it, or that propose an interpretable predictive algorithm altogether. However, these techniques are not yet well established and remain in constant development; likewise, their assessment is still lacking. There is currently no consensus on how they should be evaluated or even on which properties interpretability methods are supposed to satisfy. Motivated by that gap, this work proposes a set of evaluation metrics that quantify three desired properties of interpretability techniques. These metrics can be used to assess and select the best parameters or the best interpretability technique for a given experiment.
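As a hedged illustration of what one such evaluation metric might look like in practice (the abstract does not name the three properties, so the property, models, and data below are assumptions for illustration only, not the thesis's method), the sketch computes a common fidelity-style score: the agreement between a black-box model and an interpretable surrogate trained to mimic it.

```python
# Hypothetical sketch: a fidelity-style evaluation metric for an
# interpretability technique. All choices here (dataset, models, and the
# "fidelity" property itself) are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic data and an opaque "black-box" model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A shallow, interpretable surrogate trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of inputs on which the surrogate agrees with the black box.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")
```

A score like this could, in principle, be computed for several surrogate depths or competing explanation methods and used to pick the configuration with the best trade-off, which is the kind of comparison the proposed metrics are said to support.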