Bibliographic details
Year of defense: 2015
Main author: Silva, Eduardo Batista da [UNESP]
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: Portuguese
Defending institution: Universidade Estadual Paulista (Unesp)
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://hdl.handle.net/11449/138436
Abstract:
Reading and writing scientific articles and reference works in English rely on knowledge of the academic vocabulary - a terminological set showing high frequency and regular distribution across eight specialized areas (Formal and Earth Sciences; Life Sciences; Engineering; Health Sciences; Agricultural Sciences; Applied Social Sciences; Humanities; Linguistics, Language Arts and Arts), divided into 69 subfields. The present study has the general objective of describing and analyzing the academic vocabulary in the English language occurring in these eight fields of knowledge. Regarding the specific objectives, the research intends to: 1) constitute a specialized corpus in the English language; 2) propose a methodology to identify and retrieve the academic vocabulary; 3) identify the fundamental academic vocabulary; 4) establish equivalents in the Portuguese language; 5) develop a terminological dictionary of the fundamental academic vocabulary in English with Portuguese equivalents; and 6) revise the Academic Word List and the Academic Vocabulary List. For its theoretical framework, this research draws on work carried out in Terminology (BARBOSA, 1999, 2009; BARROS, 2004; CABRÉ, 1993, 1999), Corpus Linguistics (BERBER SARDINHA, 2004; SINCLAIR, 2004) and Lexical Statistics (LARSON; FARBER, 2012; OAKES, 1998; BUTLER, 1985). Concerning the methodology, we constituted an academic corpus in the English language with 113,337,773 tokens. WordSmith Tools, version 5, was the linguistic-statistical software used to process the corpora and retrieve the terms. To identify the terms, we used the use coefficient (the product of Juilland's dispersion coefficient and the normalized frequency), with subsequent validation of the eligible terms by experts. The dictionary entries were developed with 10 fields. The results allow us to highlight
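The term-selection criterion described in the abstract - the use coefficient, i.e. Juilland's dispersion coefficient multiplied by the normalized frequency - can be sketched as follows. This is a minimal illustration under stated assumptions: it assumes equal-sized corpus parts and a per-million normalization base, and the function names are hypothetical, not taken from the thesis.

```python
import math

def juilland_d(subfreqs):
    """Juilland's dispersion coefficient D = 1 - CV / sqrt(n - 1) for a
    term's frequencies across n equal-sized corpus parts. D is 1.0 for a
    perfectly even distribution and approaches 0 for a concentrated one."""
    n = len(subfreqs)
    mean = sum(subfreqs) / n
    if mean == 0:
        return 0.0
    # coefficient of variation: population standard deviation over the mean
    var = sum((f - mean) ** 2 for f in subfreqs) / n
    cv = math.sqrt(var) / mean
    return 1.0 - cv / math.sqrt(n - 1)

def use_coefficient(subfreqs, corpus_tokens, per=1_000_000):
    """Use coefficient U = D * normalized frequency (normalization base of
    one million tokens is an assumption, not stated in the abstract)."""
    norm_freq = sum(subfreqs) * per / corpus_tokens
    return juilland_d(subfreqs) * norm_freq

# A term spread evenly across 4 subcorpora keeps its full normalized
# frequency; a term concentrated in one subcorpus is penalized toward zero.
print(use_coefficient([10, 10, 10, 10], 1_000_000))  # 40.0
print(use_coefficient([40, 0, 0, 0], 1_000_000))     # 0.0
```

The multiplication by D is what filters out terms that are frequent only because a single discipline overuses them, which is why the abstract pairs frequency with regular distribution across the eight areas.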