Regularização de Extreme Learning Machines: uma abordagem com matrizes de afinidade
| Field | Value |
|---|---|
| Year of defense | 2014 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Thesis |
| Access type | Open access |
| Language | Portuguese |
| Defending institution | Universidade Federal de Minas Gerais (UFMG) |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords in Portuguese | |
| Access link | http://hdl.handle.net/1843/BUBD-9VKFSE |
Abstract: Inducing models from a dataset is an inverse problem, and it is usually ill-posed. To turn it into a well-posed problem, regularization techniques have been used with success, including in Artificial Neural Networks (ANNs). These techniques use a priori information about the problem, such as imposing smoothness on the solution or using structural information about the dataset. Structural information can be provided by affinity matrices, for example kernel matrices and cosine similarity matrices. In the ANN context, the Extreme Learning Machine (ELM), a training algorithm for Single-Layer Feedforward Neural Networks (SLFNs), has been attracting the attention of the scientific community in recent years, especially because of its simplicity and speed of training. Because its training is done in two steps, a random projection into a high-dimensional space followed by the computation of the output-layer weights using the pseudo-inverse, the ELM algorithm makes it possible to insert information obtained from affinity matrices by combining the ELM projections with those matrices. In this thesis, we show that using such structural information in ELM training produces an effect similar to Tikhonov regularization. Moreover, this change to the ELM algorithm enables it to be used in the semi-supervised learning context. This type of learning, in which labels are scarce, usually relies on structural information about the data to help model construction. Experiments performed with the developed algorithm, which we call Affinity Matrix Regularized ELM (AMR-ELM), validate both the regularization effect in the supervised learning context and the ability to deal with the scarcity of labels in semi-supervised learning. Furthermore, if a parameter-free affinity matrix is used, such as the cosine similarity matrix, regularization is performed without any need for parameter tuning.
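The abstract describes the two training steps of a standard ELM: a fixed random projection into a high-dimensional hidden space, followed by a least-squares solve for the output weights via the Moore-Penrose pseudo-inverse. The sketch below illustrates those two steps on toy data, plus a Tikhonov (ridge) variant for comparison; it is a minimal illustration of the generic ELM, not the AMR-ELM algorithm from the thesis, and the data, tanh activation, hidden-layer size, and regularization constant are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical, for illustration only).
X = rng.normal(size=(100, 3))                       # 100 samples, 3 features
y = np.sin(X[:, 0:1]) + 0.1 * rng.normal(size=(100, 1))

L = 50  # number of hidden neurons (arbitrary choice)

# Step 1: random projection into a high-dimensional space.
# Input weights and biases are drawn at random and never trained.
W = rng.normal(size=(3, L))
b = rng.normal(size=(1, L))
H = np.tanh(X @ W + b)                              # hidden-layer output matrix

# Step 2: output-layer weights via the Moore-Penrose pseudo-inverse.
beta = np.linalg.pinv(H) @ y

# Tikhonov (ridge) regularized variant, for comparison with the
# regularization effect discussed in the abstract; lambda is a guess.
lam = 1e-2
beta_ridge = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ y)

train_mse = float(np.mean((H @ beta - y) ** 2))
print(train_mse)
```

Only `beta` (or `beta_ridge`) is learned; this is what makes ELM training fast, and it is also why structural information such as an affinity matrix can be inserted at the projection-combination step, as the thesis proposes.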