Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models
Main author: | Morato de Andrade, Otavio |
---|---|
Publication date: | 2024 |
Other authors: | Sousa Alves, Marco Antônio |
Document type: | Article |
Language: | eng |
Source title: | Revista Thesis Juris |
Full text: | https://periodicos.uninove.br/thesisjuris/article/view/26510 |
Abstract: | Artificial intelligence (AI) has been extensively employed across various domains, with increasing social, ethical, and privacy implications. As its potential and applications expand, concerns arise about the reliability of AI systems, particularly those based on deep learning techniques, which can turn them into true “black boxes”. Explainable artificial intelligence (XAI) aims to offer information that helps explain the predictive process of a given algorithmic model. This article examines the potential of XAI to elucidate algorithmic decisions and mitigate bias in AI systems. The first part discusses AI fallibility and bias, emphasizing how opacity exacerbates these problems. The second part explores how XAI can enhance transparency, helping to combat algorithmic errors and biases. The article concludes that XAI can contribute to the identification of biases in algorithmic models and suggests that the ability to “explain” should be a requirement for adopting AI systems in sensitive areas such as court decisions. |
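The abstract frames XAI as a way to surface how a model arrives at its predictions and to detect reliance on problematic features. As a loose, illustrative sketch only (not drawn from the article), the snippet below uses scikit-learn's permutation importance, one simple model-agnostic explanation technique, on synthetic data; the feature names, including the hypothetical `sensitive_attr` column, are assumptions for illustration.

```python
# Illustrative sketch only: permutation importance as a minimal, model-agnostic
# "explanation" of which features a trained model relies on. The data and the
# feature names (including "sensitive_attr") are synthetic assumptions.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic dataset; pretend column 0 is a sensitive attribute (or a proxy for one).
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
feature_names = ["sensitive_attr", "income", "age", "score_a", "score_b"]  # hypothetical labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# How much does prediction accuracy degrade when each feature is randomly shuffled?
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
# A high score for "sensitive_attr" would indicate the model leans on that feature,
# which is the kind of dependency an explainability audit would flag for human review.
```

Permutation importance is only one of many XAI approaches (LIME, SHAP, and counterfactual explanations are common alternatives); it is used here merely because it is compact and model-agnostic.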
id |
UNINOVE-2_f26114f55e1063e9f18f0d4cad37335e |
oai_identifier_str |
oai:ojs.periodicos.uninove.br:article/26510 |
network_acronym_str |
UNINOVE-2 |
network_name_str |
Revista Thesis Juris |
repository_id_str |
|
dc.title.none.fl_str_mv |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
title |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
spellingShingle |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models
Morato de Andrade, Otavio
XAI; explainable artificial intelligence; algorithmic opacity; transparency
title_short |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
title_full |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
title_fullStr |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
title_full_unstemmed |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
title_sort |
Using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models |
author |
Morato de Andrade, Otavio |
author_facet |
Morato de Andrade, Otavio
Sousa Alves, Marco Antônio
author_role |
author |
author2 |
Sousa Alves, Marco Antônio |
author2_role |
author |
dc.contributor.author.fl_str_mv |
Morato de Andrade, Otavio
Sousa Alves, Marco Antônio
dc.subject.por.fl_str_mv |
XAI; explainable artificial intelligence; algorithmic opacity; transparency
topic |
XAI; explainable artificial intelligence; algorithmic opacity; transparency
description |
Artificial intelligence (AI) has been extensively employed across various domains, with increasing social, ethical, and privacy implications. As its potential and applications expand, concerns arise about the reliability of AI systems, particularly those based on deep learning techniques, which can turn them into true “black boxes”. Explainable artificial intelligence (XAI) aims to offer information that helps explain the predictive process of a given algorithmic model. This article examines the potential of XAI to elucidate algorithmic decisions and mitigate bias in AI systems. The first part discusses AI fallibility and bias, emphasizing how opacity exacerbates these problems. The second part explores how XAI can enhance transparency, helping to combat algorithmic errors and biases. The article concludes that XAI can contribute to the identification of biases in algorithmic models and suggests that the ability to “explain” should be a requirement for adopting AI systems in sensitive areas such as court decisions.
publishDate |
2024 |
dc.date.none.fl_str_mv |
2024-06-28 |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article
info:eu-repo/semantics/publishedVersion
format |
article |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
https://periodicos.uninove.br/thesisjuris/article/view/26510
10.5585/13.2024.26510
url |
https://periodicos.uninove.br/thesisjuris/article/view/26510 |
identifier_str_mv |
10.5585/13.2024.26510 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.relation.none.fl_str_mv |
https://periodicos.uninove.br/thesisjuris/article/view/26510/11010 |
dc.rights.driver.fl_str_mv |
Copyright (c) 2024 Otavio Morato de Andrade, Professor Marco Antônio Sousa Alves
https://creativecommons.org/licenses/by-nc-sa/4.0
info:eu-repo/semantics/openAccess
rights_invalid_str_mv |
Copyright (c) 2024 Otavio Morato de Andrade, Professor Marco Antônio Sousa Alves
https://creativecommons.org/licenses/by-nc-sa/4.0
eu_rights_str_mv |
openAccess |
dc.format.none.fl_str_mv |
application/pdf |
dc.publisher.none.fl_str_mv |
Universidade Nove de Julho - UNINOVE |
publisher.none.fl_str_mv |
Universidade Nove de Julho - UNINOVE |
dc.source.none.fl_str_mv |
Revista Thesis Juris; v. 13 n. 1 (2024): jan./jun.; 03-25
2317-3580
reponame:Revista Thesis Juris
instname:Universidade Nove de Julho (UNINOVE)
instacron:UNINOVE
instname_str |
Universidade Nove de Julho (UNINOVE) |
instacron_str |
UNINOVE |
institution |
UNINOVE |
reponame_str |
Revista Thesis Juris |
collection |
Revista Thesis Juris |
repository.name.fl_str_mv |
Revista Thesis Juris - Universidade Nove de Julho (UNINOVE) |
repository.mail.fl_str_mv |
thesis@uninove.br |
_version_ |
1841435606424485888 |