Human vs machine towards neonatal pain assessment: a comparison of the facial features extracted by adults and convolutional neural networks
| Field | Value |
|---|---|
| Year of defense | 2023 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Master's thesis (Dissertação) |
| Access type | Open access |
| Language | eng |
| Defending institution | Centro Universitário FEI, São Bernardo do Campo |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords in Portuguese | |
| Access links | https://repositorio.fei.edu.br/handle/FEI/4763 https://doi.org/10.31414/EE.2023.D.131608 |
Abstract: One of the most important challenges facing the scientific community is to mitigate the many consequences of pain exposure in neonates. This challenge arises mainly because neonates cannot verbally communicate pain, hindering the correct identification of the presence and intensity of this phenomenon. In this context, several clinical scales have been proposed to assess pain using, among other parameters, the facial features of the neonate. However, a better comprehension of these features is still required, since recent results have shown the subjectivity of these scales. Meanwhile, computational frameworks have been implemented to automate neonatal pain assessment. Despite their impressive performance, the decision-making processes of these frameworks remain poorly understood. Therefore, in this dissertation we investigate the facial features related to human and machine neonatal pain assessment, comparing the regions visually perceived by expert health professionals and parents of neonates with the most relevant regions extracted by eXplainable Artificial Intelligence (XAI) methods using two classification models: (i) VGG-Face, originally trained for face recognition, and (ii) N-CNN, implemented and trained end-to-end for neonatal pain assessment. Our findings show that the regions used by the classification models are clinically relevant to neonatal pain assessment, yet do not agree with the facial perception of health professionals and parents. Consequently, these differences suggest that humans and machines can learn from each other to improve their current decision-making processes for identifying the discriminant information related to neonatal pain. Additionally, we observed that, using the same classification model, the XAI methods implemented here yield distinct relevant facial features for the same input image. These results raise concerns about the effective use and interpretation of XAI methods and, more importantly, about which regions of the image are truly relevant to the decision-making process of the classification model. Nevertheless, our findings advance the current knowledge on how humans and machines code and decode the neonatal facial response to pain. We believe that these findings might enable further improvements in clinical scales and computational tools widely used in real situations, whether based on human or machine decision-making processes.
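The abstract's observation that different XAI methods highlight distinct facial features for the same input can be illustrated with a deliberately small sketch. The toy classifier, the pixel grid, and both attribution methods below are illustrative assumptions (the dissertation's actual models are VGG-Face and N-CNN); even for this tiny model, gradient saliency and occlusion sensitivity can rank pixels differently:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pain classifier: a logistic model over a
# flattened 4x4 "face" (16 pixels). Not the dissertation's architecture.
w = rng.normal(size=16)           # toy model weights
x = rng.uniform(0.0, 1.0, 16)     # one toy input image, pixels in [0, 1]

def score(img):
    """Sigmoid output of the toy classifier."""
    return 1.0 / (1.0 + np.exp(-img @ w))

# XAI method 1: plain gradient saliency, |d score / d pixel|.
s = score(x)
grad_saliency = np.abs(w * s * (1.0 - s))

# XAI method 2: occlusion sensitivity, score change when a pixel is zeroed.
occlusion = np.empty(16)
for i in range(16):
    occluded = x.copy()
    occluded[i] = 0.0
    occlusion[i] = score(x) - score(occluded)

# The two methods can disagree on which pixel matters most for the same input:
# gradient saliency ranks by |w_i|, occlusion roughly by |w_i * x_i|.
top_grad = int(np.argmax(grad_saliency))
top_occ = int(np.argmax(np.abs(occlusion)))
print("top pixel by gradient:", top_grad, "| top pixel by occlusion:", top_occ)
```

The design point is that each attribution method encodes its own notion of "relevance" (local sensitivity vs. effect of removal), so disagreement between them is expected rather than a bug, which is exactly the interpretation concern the abstract raises.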