Data Protection in the Algorithm Culture (Proteção de dados na cultura do algoritmo)

Bibliographic details
Year of defense: 2019
Main author: Florêncio, Juliana Abrusio
Advisor: Guerra Filho, Willis Santiago
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Embargoed access
Language: Portuguese (por)
Defending institution: Pontifícia Universidade Católica de São Paulo
Graduate program: Programa de Estudos Pós-Graduados em Direito
Department: Faculdade de Direito
Country: Brazil
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: https://tede2.pucsp.br/handle/handle/22255
Abstract: This thesis investigates the transformations brought about by the information paradigm of the present time, which have led to the shift from post-industrial society to information society. It analyzes the main phenomena of this society, such as big data, ubiquitous connectivity, the datafication of the 'internet of things', and the algorithm culture, based on machine learning and artificial intelligence technologies. It also examines how the attention market, through the incessant production of personal data, has become the new engine of the data economy; since personal data is the input of this market, the research analyzes privacy from its origins to its current character as data protection, guided by the doctrine of informational self-determination. Furthermore, it traces the normative evolution that culminated in regulations in Europe, in Brazil, and, briefly, in the United States. The research then deepens the study of specific aspects arising from the algorithm culture, such as: the centralization of control over acts of individuals' lives; ethical questions in the use of algorithmic machines; discriminatory associations in algorithmic calculations; and the inequality of opportunities imposed by algorithms. It also turns to the 'black box' character and the opacity of algorithms, analyzing the main challenges in complying with the principle of transparency, as well as the right to explanation, with respect to artificial intelligence systems responsible for profiling and automated decision-making, as a way to guarantee the protection of personal data. It examines how current legal constructions, such as consent and anonymization, have proved insufficient. Finally, it analyzes how ways of regulating design architecture, including the principle of explainable artificial intelligence, can contribute to the protection of personal data in the algorithm culture.