Quality of accounting undergraduate programs in Brazil: How to estimate value-added?

Bibliographic details
Year of defense: 2020
Main author: Fernandes, Vivian Duarte Couto
Advisor: Not provided by the institution
Defense committee: Not provided by the institution
Document type: Doctoral dissertation
Access type: Open access
Language: English
Defending institution: Universidade Federal de Uberlândia
Brazil
Programa de Pós-graduação em Ciências Contábeis (Graduate Program in Accounting Sciences)
Graduate program: Not provided by the institution
Department: Not provided by the institution
Country: Not provided by the institution
Keywords in Portuguese:
Link de acesso: https://repositorio.ufu.br/handle/123456789/29044
http://doi.org/10.14393/ufu.te.2020.365
Abstract: School quality assessment has been the object of research by economists, educators, policymakers, and various stakeholders worldwide. In Brazil, the National Assessment System for Higher Education (Sinaes – Sistema Nacional de Avaliação do Ensino Superior) is a sound initiative that seeks to assess the country’s undergraduate programs, their faculty members, and their students’ academic achievement, as well as to provide quality indicators that account for the differences between them. One such indicator is the Indicator of Difference between Observed and Expected Achievements (IDD – Indicador de Diferença entre os Desempenhos Observado e Esperado), which measures the contribution of an undergraduate program to its students’ achievement over the course of their undergraduate studies. The first IDD was released in 2006; since then, lawmakers have changed its estimation methodology, seeking to improve it as an accurate measure of value added. This doctoral dissertation discusses these different methodologies and their impact on the ranking of undergraduate programs in Accounting in Brazil, as provided through an indicator named Preliminary Program Quality Level (CPC – Conceito Preliminar de Curso). A quantitative design was used both to test four value-added models from the IDD’s historical record and to identify the impact of the changes in estimation methodology implemented since 2006 on the quality indicators of the undergraduate programs under scrutiny. The analysis was based on data from Brazilian Accounting students who took the 2015 National Exam of Student Achievement (Enade), completed a student questionnaire, and had a valid score in the National Exam of High School Education (Enem) for admission to an undergraduate program. The sample consisted of 30,668 students from 911 undergraduate programs, representing 46.98% of the total population.
The results show that the current model estimates value added more accurately than the previous ones. However, both the literature and the findings indicate that the model could be improved by introducing explanatory variables for academic achievement that are beyond the control of higher education institutions. Even though the current model is statistically more robust than its predecessors, it remains inadequate because it treats all institutions and all students as equal. A new IDD estimation methodology, the IDD-VDCF model, is proposed to estimate value added while controlling for significant determinants of academic achievement that institutional leaders cannot influence. The findings point to 1) gender, marital status, and reading and study habits as control variables at the student level, and 2) type of higher education institution, learning modality, and regional location as control variables at the institution level. These variables reduced the bias in the IDD estimate associated with student selection for admission to a given undergraduate program. This study suggests that the IDD is relevant for identifying differences across undergraduate programs and should bear more weight within the Sinaes. It also suggests that the CPC quality index should be abolished, as it fails to portray the actual quality of undergraduate programs: while the IDD points to significant differences across programs regardless of the methodology used, the rationale underlying the CPC estimation flattens such differences and renders distinct programs similar in quality. Not only does this study contribute to the methodological discussion about how to estimate value added in higher education, but it also adds to the debate about how to assess the effectiveness of higher education institutions in order to support policy making.
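The abstract does not reproduce the estimation itself, but the IDD’s core idea (observed achievement minus the achievement expected from admission scores, with covariates the institution cannot control) can be sketched with a plain least-squares fit. Everything below is a minimal illustration on synthetic data: the variable names, coefficients, and the single control variable are invented for the example, and the actual IDD models are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic, purely illustrative data (not the dissertation's dataset):
enem = rng.normal(500.0, 80.0, n)             # admission (Enem) scores
female = rng.integers(0, 2, n).astype(float)  # one example student-level control
program_effect = rng.normal(0.0, 10.0, n)     # unobserved program contribution
enade = 0.6 * enem + 5.0 * female + program_effect + rng.normal(0.0, 15.0, n)

# Expected achievement: least-squares fit of Enade on Enem plus the control,
# loosely echoing the idea of adding covariates beyond institutional control.
X = np.column_stack([np.ones(n), enem, female])
beta, *_ = np.linalg.lstsq(X, enade, rcond=None)
expected = X @ beta

# IDD-style value added: observed minus expected achievement.
# With an intercept in the fit, these residuals average to ~0 by construction;
# averaging them within a program would give that program's estimated effect.
value_added = enade - expected
```

In the dissertation’s setting the student-level residuals would be aggregated by undergraduate program, and the proposed IDD-VDCF model adds further controls at both the student and institution levels, which a hierarchical (multilevel) specification handles more faithfully than this single-level fit.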
The students’ learning context must be considered when comparing performance across institutions through standardized exams such as the Enade. Even though the undergraduate program bears great responsibility for students’ academic achievement, factors such as motivation, commitment, and professional aspirations, which higher education institutions cannot control, can also affect academic outcomes.