Ética e inteligência artificial: da possibilidade filosófica de agentes morais artificiais

Bibliographic details
Year of defense: 2021
Main author: Silveira, Paulo Antônio Caliendo Velloso da
Advisor: Souza, Draiton Gonzaga
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: Portuguese (por)
Degree-granting institution: Pontifícia Universidade Católica do Rio Grande do Sul
Graduate program: Programa de Pós-Graduação em Filosofia
Department: Escola de Humanidades
Country: Brazil
Keywords in Portuguese:
CNPq knowledge area:
Access link: http://tede2.pucrs.br/tede2/handle/tede/9534
Abstract: This dissertation aims to examine, and to assume as plausible, the philosophical possibility of the emergence of an authentic artificial moral agent. The plausibility of overcoming the Turing Test, the Chinese Room argument, and the Ada Lovelace Test is taken as an assumption, as is the possible emergence of an authentic artificial moral agent capable of intentional deliberation from a first-person perspective. The possibility of a computational code capable of giving rise to such emergence is therefore accepted. The study's main problem is to investigate the philosophical possibility of an artificial ethics arising from the will and rationality of an artificial subject, that is, of artificial intelligence as a moral subject. An artificial ethical agent must act from its own characteristics, not according to predetermined external programming: authentic artificial ethics is internal, not external, to the automaton. A proposed model with growing acceptance, which demonstrates this computational possibility, is that of a morality built bottom-up, in which the system can acquire moral capacities independently. This model comes close to the Aristotelian ethics of virtue. Another possible path is the union of a bottom-up computational model with models based on deontology, with their more general formulation of duties and maxims. It is further shown that a viable and autonomous model of artificial morality can be built in at least one case, and there is no clear demonstration that artificial moral agents cannot have artificial emotions. The conclusion reached by several computer scientists is that a model of artificial agency based on machine learning, combined with virtue ethics, is natural, cohesive, coherent, integrated, and seamless.
There is thus a coherent, consistent, and well-founded answer indicating that the impossibility of an authentic artificial moral agent has not been proven. Finally, a responsible ethical theory must consider the concrete possibility of the emergence of complete artificial moral agents and all the consequences of this watershed phenomenon in human history.