As verdades dos profundamente falsos: um estudo semiótico sobre deepfakes nas eleições presidenciais brasileiras de 2022 [The truths of the deeply fake: a semiotic study of deepfakes in the 2022 Brazilian presidential elections]
Defense year: | 2023
---|---
Main author: |
Advisor: |
Defense committee: |
Document type: | Dissertation
Access type: | Open access
Language: | Portuguese (por)
Defense institution: | Universidade Federal de Minas Gerais, Brazil; FAF - DEPARTAMENTO DE COMUNICAÇÃO SOCIAL; Programa de Pós-Graduação em Comunicação Social, UFMG
Graduate program: | Not informed by the institution
Department: | Not informed by the institution
Country: | Not informed by the institution
Keywords (Portuguese): |
Access link: | http://hdl.handle.net/1843/68122
Abstract: | This dissertation aims to understand the semiotic strategies involved in the creation of deepfakes in the context of the 2022 Brazilian presidential elections. We understand deepfakes as texts of a single language or syncretic texts (FLOCH, 1985), which articulate verbal and nonverbal languages that, through appearance, simulate reality. Our research problem is stated as follows: how is synthesized media used for the creation of political discourse, and how is this content perceived as false or true? To answer it, we draw on the concepts of veridiction, iconization, syncretism, and regimes of interaction and meaning from discursive semiotics (GREIMAS, 2004; GREIMAS; COURTÉS, 2008; FLOCH, 1985) and sociosemiotics (LANDOWSKI, 2014a, 2014b, 2022). The corpus was built through the systematic collection of videos and news about deepfakes circulating on Twitter between July 20th (party conventions) and October 31st (end of the second round). We take the deepfakes related to Jornal Nacional (TV Globo) as our empirical sample because the program was central to the election coverage, reporting electoral polls and organizing interviews and debates with the main candidates. Our analysis starts from the understanding of deepfakes as texts figurativized to the point of exhaustion, through synthesized faces and voices, which produces an effect of iconization and, thus, a referent effect (GREIMAS; COURTÉS, 2008). Certain plastic characteristics (GREIMAS, 2004), such as the tone of the voice (synthesized audio) or the quality of the image (pixelation), are important for deepfakes to be read as such. Additionally, deepfakes that do not carry such marks on the expression plane can still be recognized because they announce themselves textually, through a warning or through indexing such as the hashtag #deepfake. Finally, we propose a typology of deepfakes based on the regimes of interaction, meaning, and truth (LANDOWSKI, 2022). We conclude that two types of deepfakes appear in our corpus: (1) those that are not and do not seem to be (in the sense that what they iconize is not true), which we call marked deepfakes; and (2) those that seem but are not, which we call unmarked deepfakes. From the analyses undertaken, we observe that the main semiotic strategy in the production of a deepfake is iconization, which involves the synthesis of voices and of static or moving images, thereby creating a referent effect. Thanks to this extreme iconization, achieved through continually improving artificial intelligence, a deepfake may be read as true. Finally, this research aims to contribute to the knowledge about deepfakes from the perspective of discursive semiotics and sociosemiotics, whose theoretical-methodological bases allowed us to understand the semiotic procedures, described here, that aim to produce truth effects. |