Accountability and the fundamental right to personal data protection as limits on the use of artificial intelligence in the employment relationship

Bibliographic details
Year of defense: 2023
Main author: Morais Júnior, Ricardo Antonio Maia de
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (dissertação)
Access type: Open access
Language: Portuguese (por)
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/72687
Abstract: This study seeks to determine which measures are necessary so that the use of Artificial Intelligence in the employment relationship can guarantee the employee's right to personal data protection, mitigating the associated risks and satisfying the principle of accountability in automated data processing. To achieve this general objective, the research first analyzes the main uses of AI in employment relationships, delimiting which technologies should be considered, the influence of Big Data on their diffusion in the employment relationship, and the main purposes for which employers use them. It then considers the peculiarities of the employment relationship that justify an approach different from that applied to other relationships, examining the risks these AI-based technologies pose to the protection of employee data, notably to personality rights, the right to privacy, and the right to equality and non-discrimination in the workplace. After that, the fundamental right to data protection is presented as a limit on the employer's power of control, and the legal requirements for the use of AI in the employment relationship are examined in light of the employer's duty of accountability. The research can be classified as basic, explanatory, and qualitative (with respect to how the problem is approached), adopting the hypothetical-deductive method. The hypothesis tested is that the measures currently provided for in Brazilian data protection legislation are not sufficient to satisfy the principle of accountability, and that further measures regarded as good practices are required, including measures specific to employment relations, given their particular characteristics.
The research first verified that AI is used in the employment relationship from the pre-contractual phase onward, in recruitment and selection processes; during the employment contract, through algorithmic work management; and, in exceptional situations, to reward employees with promotions, though even more often to sanction those who do not perform their duties properly. Second, certain characteristics of the employment relationship were identified that distinguish it from other relationships with respect to the employer's use of AI; these made it possible to identify risks to employees' rights that demand mitigating measures by the employer, together with documentation of which measures proved effective for that purpose. Finally, the hypothesis initially formulated was confirmed: adopting the general legal regime of data protection is not enough, since there is no sectoral regulation of AI use in the employment relationship, making it necessary to also adopt good personal data governance practices calibrated to the risks involved. The good practices suggested in this research are: (i) formulating data governance rules for the entire chain of data processing agents; (ii) mapping risks to employee rights and applying the precautionary principle; (iii) adopting measures to safeguard the quality of databases; (iv) including human participation in automated decisions as a rule; (v) implementing transparency and explanation measures regarding the use of AI in the employment relationship; and (vi) making these systems available for audit by third parties, through the preparation and presentation of personal data protection impact reports.
The suggested measures were found to be relevant but not exhaustive: employers should always assess which other good practices can be added, according to the degree of risk mapped. In addition, this work is intended to serve as a guide from which employers can implement their own measures to mitigate the risks arising from AI, and also as a guide for administrative, judicial, or union authorities that supervise companies and monitor whether they are accountable for their data processing. Likewise, in judicial activity, the Labor Judiciary can assess, in specific cases, whether the employer's conduct is consistent with legal requirements and with good practices for protecting employees' rights.