Modelos computacionais e probabilísticos em riscos de crédito (Computational and probabilistic models in credit risk)

Bibliographic details
Defense year: 2015
Main author: Barboza, Flavio Luiz de Moraes
Advisor: Basso, Leonardo Fernando Cruz
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: Portuguese
Defending institution: Universidade Presbiteriana Mackenzie
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Keywords in English:
CNPq knowledge area:
Access link: http://dspace.mackenzie.br/handle/10899/23234
Abstract: This dissertation studies credit risk to promote a discussion of the breadth of the scientific literature and of two highlighted topics: regulatory capital and bankruptcy prediction modelling. These issues are divided among three essays. The first is a literature review: the main studies on credit risk were classified and coded, and a citation-based approach was used to determine their relevance and contributions. Interesting gaps in knowledge are found in this work, which motivate the development of the two following subjects. The second essay discusses how the desire for higher rating positions influences financial institutions' strategies when aiming to minimize economic capital, considering both the borrower's credit rating and the target rating itself. Using a probabilistic distribution model to simulate loss given default (LGD), our results show that the use of credit ratings as guidance for calculating minimum capital requirements can be an alternative for banks. We also find that better ratings can be attained when lending within certain small intervals of LGD. The third study presents a comparative analysis of the performance of computational models, which are widely used to solve classification problems, against traditional methods applied to predict failures one year before the event. The models are built with machine learning techniques (support vector machines, bagging, boosting, and random forest). Applying data on U.S. companies from 1985 to 2013, we compare the results of these innovative methods with neural networks, logistic regression, and discriminant analysis. The major result of this part of the study is a substantial improvement in predictive power from the machine learning techniques when, besides the original Z-Score variable from Altman (1968), six metrics (or constructs) selected from Carton and Hofer (2006) are included as explanatory variables. The analysis shows that the bagging and random forest models outperform the other techniques, and all predictions improve when the suggested constructs are included.
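
A note on the second essay's method: the abstract does not name the probabilistic distribution used to simulate LGD, so the beta distribution below is only an assumed, common modeling choice (LGD is bounded between 0 and 1). The Python sketch is illustrative, with hypothetical portfolio parameters, and shows how simulated LGD draws can feed a simple economic-capital estimate (unexpected loss at a high confidence level):

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical portfolio parameters (not taken from the thesis).
    n_simulations = 100_000
    exposure = 1_000_000.0   # exposure at default (EAD)
    pd_borrower = 0.02       # probability of default for the rating class

    # Simulate LGD with a beta distribution (an assumed choice here).
    lgd_draws = rng.beta(a=2.0, b=5.0, size=n_simulations)

    # Simulate default events and the resulting losses.
    defaults = rng.random(n_simulations) < pd_borrower
    losses = exposure * lgd_draws * defaults

    # Economic capital as unexpected loss: the gap between the
    # 99.9% loss quantile and the expected loss.
    expected_loss = losses.mean()
    var_999 = np.quantile(losses, 0.999)
    economic_capital = var_999 - expected_loss
    print(f"EL = {expected_loss:,.0f}, VaR(99.9%) = {var_999:,.0f}, "
          f"EC = {economic_capital:,.0f}")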
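
For context on the third essay, the Z-Score from Altman (1968) is the discriminant score Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5, where X1 is working capital / total assets, X2 is retained earnings / total assets, X3 is EBIT / total assets, X4 is market value of equity / book value of total liabilities, and X5 is sales / total assets. The sketch below illustrates the kind of model comparison the essay describes, using scikit-learn; the synthetic data, feature count, and cross-validation protocol are assumptions for illustration, not the thesis's actual dataset or design:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.ensemble import (BaggingClassifier,
                                  GradientBoostingClassifier,
                                  RandomForestClassifier)

    # Hypothetical data: X stands in for the five Z-Score ratios plus the
    # six Carton and Hofer (2006) constructs; y flags bankruptcy one year
    # ahead. Random values are used only so the sketch runs end to end.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 11))
    y = rng.integers(0, 2, size=500)

    models = {
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression()),
        "discriminant analysis": LinearDiscriminantAnalysis(),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "bagging": BaggingClassifier(n_estimators=100),
        "boosting": GradientBoostingClassifier(),
        "random forest": RandomForestClassifier(n_estimators=100),
    }

    # Compare out-of-sample accuracy with 5-fold cross-validation.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")

On real financial ratios, a comparison of this shape is what would let the ensemble methods (bagging, random forest) show the predictive advantage the abstract reports.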