A study on the generation of explanations based on ontologies: a case study in mHealth
| Year of defense | 2021 |
|---|---|
| Main author | |
| Advisor | |
| Examining committee | |
| Document type | Master's thesis (Dissertação) |
| Access type | Open access |
| Language | Portuguese |
| Institution | Universidade Federal da Paraíba, Brasil; Programa de Pós-Graduação em Informática (UFPB) |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords (Portuguese) | |
| Access link | https://repositorio.ufpb.br/jspui/handle/123456789/32096 |
| Abstract | While mobile health (mHealth) applications provide a convenient way to continuously collect data about the health conditions of their users, machine learning (ML) is the main technique used to process such data by means of inductive reasoning. However, ML algorithms usually do not provide any explanation of the rationale behind their outputs, owing to the black-box nature of such algorithms. This study analyzed 120 mHealth applications to create an integrated ontology that represents the health condition of mobile users and can serve as background knowledge for generating explanations of the results of inductive reasoning. The integrated ontology covers several quality of life (QoL) dimensions (e.g., diet, physical activity, and emotional well-being), enabling the specification of a holistic reasoning process that can improve the effectiveness of interventions. The main contributions of this study are (1) a strategy for creating background knowledge for mHealth applications that supports holistic reasoning and explanations of the results obtained through inductive reasoning, (2) the evaluation of a description-logics-based approach to generating explanations using a simplified version of the ontology, and (3) a discussion of elements that can affect the readability and accuracy of explanations, such as the use of unnamed classes and the configuration of the explanation algorithms. |
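Description-logics-based explanation approaches such as the one evaluated in contribution (2) are commonly realized by computing justifications: minimal subsets of ontology axioms that are sufficient for an entailment to hold. The sketch below is a minimal, self-contained illustration of that idea, not the dissertation's actual implementation: the QoL-style class names are hypothetical, and a toy entailment checker (transitive closure over subclass axioms) stands in for a full DL reasoner.

```python
# Sketch of justification-based explanation for an entailment, using the
# classic contraction step: remove axioms one at a time and keep only those
# whose removal breaks the entailment. Axioms are (subclass, superclass)
# pairs; entails() is a toy stand-in for a DL reasoner.

def entails(axioms, goal):
    """Check whether `goal` (a subclass pair) follows from the axioms."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                # Chain A <= B and B <= D into A <= D (transitivity).
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return goal in closure

def justification(axioms, goal):
    """Shrink the axiom set to one minimal subset that still entails `goal`."""
    just = list(axioms)
    for ax in list(just):
        trial = [a for a in just if a != ax]
        if entails(trial, goal):  # axiom not needed for this entailment
            just = trial
    return just

# Toy ontology fragment (hypothetical class names):
axioms = [
    ("SedentaryUser", "LowPhysicalActivity"),
    ("LowPhysicalActivity", "AtRiskUser"),
    ("PoorDietUser", "AtRiskUser"),
]
print(justification(axioms, ("SedentaryUser", "AtRiskUser")))
# -> only the two axioms actually responsible for the inferred risk
```

In a real setting, `entails` would be delegated to an OWL reasoner and multiple alternative justifications would typically be enumerated; the axioms in each justification are what gets rendered into a human-readable explanation, which is where the readability factors discussed in contribution (3), such as unnamed classes, come into play.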