MoReXAI: A model for reflecting on Explainable Artificial Intelligence

Bibliographic details
Year of defense: 2022
Main author: Carvalho, Niltemberg de Oliveira
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: Portuguese (por)
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/70423
Abstract: Interest in systems that use machine learning has grown in recent years. Some algorithms implemented in these intelligent systems hide their fundamental assumptions, input information, and parameters in black-box models that are not directly observable. The adoption of these systems in sensitive and large-scale application domains raises several ethical issues. One way to address these ethical requirements is to improve the explainability of the models. However, explainability may have different goals and content depending on the intended audience (developers, domain experts, and end-users). Explanations do not always reflect the requirements of end-users, because developers and users do not share the same system of social meanings, which makes it difficult to build more effective explanations. This dissertation proposes a conceptual model, based on Semiotic Engineering, that frames the problem of explanation as a communicative process in which designers and users work together on the requirements for explanations. The Model to Reason about the eXplanation design in Artificial Intelligence Systems (MoReXAI) is based on a structured conversation that promotes reflection on subjects such as Privacy, Fairness, Accountability, Equity, and Explainability, aiming to help end-users understand how the systems work and to support the design of the system's explanations. The model can work as an epistemic tool, given the reflections raised in the conversations about these ethical principles, which helped in the process of eliciting important requirements for the design of the explanation.