Discriminações algorítmicas nas relações trabalhistas: enfrentamento de vieses sexistas em intermediações voltadas à contratação de vagas de emprego

Bibliographic details
Year of defense: 2024
Main author: Luiza Barreto Braga Fidalgo
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal de Minas Gerais (UFMG), Brazil
Faculdade de Direito
Programa de Pós-Graduação em Direito
Access link: http://hdl.handle.net/1843/77223
Abstract: This research analyzed the impacts of Artificial Intelligence on labor relations, with an emphasis on discriminatory algorithmic practices. Technological advances have brought machine learning and algorithmic models with unprecedented speeds of data collection and processing, yet the pace of these innovations has not been matched by adequate legislative protection against the implications of technology in everyday life. Algorithmic models can carry biases that reflect the prejudiced and unequal social structures in which they were formed. At the same time, some algorithmic models are, by their very nature, unsupervised, raising even harder challenges regarding the discriminatory outcomes that may arise from them. Informational self-determination has been recognized as an autonomous right, and the protection of personal data was driven by the urgency of safeguarding the privacy and intimacy of individuals in a context in which machines increasingly collect even sensitive information about them. The growth of social networks has expanded the supply of freely available data, often without people questioning why they provide such information. Practical implications, such as the impact of fake news and deepfakes on elections and the incitement of crimes, brought urgency to the debate on regulating AI mechanisms. It was shown that the need for rigorous, consistent monitoring of technological tools cannot be reduced to an abstract debate. Establishing ethical guidelines and respect for human rights, with supervision, inspection, and governance of algorithmic models, is relevant not only to discriminatory algorithmic practices but to several other applications of AI. The work also analyzed how principles such as explainability, transparency, intelligibility, and auditability must be understood for the adequate use of AI tools.
Consideration was also given to the consequences of automation and technological innovation for dignified and decent work. Women were shown to be the main targets of labor discrimination, including algorithmic discrimination. The "glass ceiling" metaphor was examined to reinforce the relevance of female representation in positions of power, as were the concept of intersectionality and the reasons why Black women are the most affected by the patriarchy, machismo, and misogyny that historically persist in society. Correlations were drawn between the Sustainable Development Goals of the UN 2030 Agenda and discriminatory algorithmic practices, and concrete examples of algorithmic labor discrimination found in foreign literature and national journalistic reports were described. Confronting the opacity of "black box" machines requires access to the rationale behind the design of algorithmic models, not to the mathematical sequence of the source code itself; the trade secrets claimed by platforms when they deny access to such data can thus be preserved. It is essential, however, that at least judicial and oversight bodies be able to monitor the input and output data of the algorithmic models used by technological intermediaries to recruit workers. Supervision, at least by public bodies, of the rationale behind the design of the algorithmic models used to recruit, retain, or dismiss workers is both feasible and fundamental to guaranteeing the fairness of these procedures.