Aprendizado multi-objetivo de redes RBF e de máquinas de kernel (Multi-objective learning of RBF networks and kernel machines)

Bibliographic details
Year of defense: 2010
Main author: Illya Kokshenev
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal de Minas Gerais (UFMG)
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://hdl.handle.net/1843/BUOS-8CCHNX
Abstract: As known from statistical learning theory, the training error and the complexity of a model must be simultaneously minimized, yet carefully balanced, for valid generalization. Modern learning algorithms, such as support vector machines, achieve this goal by means of regularization and kernel methods, whose combination provides possibilities for the analysis and construction of efficient nonlinear learning machines. In such algorithms, because the learning problem becomes non-convex when the kernel is not fixed, the choice of kernel is commonly addressed with sophisticated model-selection techniques, in a manner different from the original idea of balancing error and complexity. In contrast, the search for a balance between error and complexity in non-convex learning problems can be treated within the multi-objective framework, by viewing supervised learning as a decision process in an environment of two conflicting goals. However, modern methods of multi-objective learning are focused on evolutionary optimization, paying little attention to the implementation of key learning principles. This work develops a multi-objective approach to supervised learning as an extension of traditional (single-objective) concepts, such as regularization and margin maximization, to non-convex hypothesis spaces induced by multiple kernels. In the proposed learning scheme, approximate solutions to generally non-convex problems are obtained from their decompositions into subsets of convex subproblems, where the application of deterministic nonlinear programming is efficient. Aiming at the implementation of the principle of structural risk minimization, several complexity measures are derived, each one inducing a particular multi-objective algorithm.
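The error/complexity trade-off described above can be illustrated with a minimal sketch (this is not the thesis's algorithm): within a single convex subproblem, here ridge regression with a fixed polynomial feature map, sweeping the regularization parameter traces a front of solutions trading empirical error against a complexity proxy (the squared weight norm). The data, feature map, and parameter grid below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy sine samples (illustrative assumption).
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(40)

# Fixed polynomial feature map of degree 7.
Phi = np.hstack([X ** d for d in range(8)])

def ridge_fit(Phi, y, lam):
    """Closed-form ridge solution: w = (Phi'Phi + lam*I)^{-1} Phi'y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

# Each lambda yields one point in (error, complexity) objective space.
front = []
for lam in np.logspace(-6, 1, 15):
    w = ridge_fit(Phi, y, lam)
    err = np.mean((Phi @ w - y) ** 2)   # objective 1: empirical error
    comp = float(w @ w)                 # objective 2: complexity proxy
    front.append((err, comp))

# Sorting by error exposes the trade-off: as the empirical error grows
# (stronger regularization), the complexity proxy shrinks.
errs, comps = zip(*sorted(front))
```

For a convex subproblem the regularization sweep enumerates the trade-off directly; the multi-objective view becomes essential when the hypothesis space (e.g., the kernel itself) is not fixed and the joint problem is non-convex.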
In particular, the proposed smoothness-based complexity measure for Gaussian radial-basis function (RBF) networks led to an efficient multi-objective algorithm, capable of finding the weights, widths, locations, and number of basis functions in a deterministic manner. In combination with the Akaike and Bayesian information criteria, the developed algorithm demonstrates high generalization efficiency on several synthetic and real-world benchmark problems. To extend the concept of margin maximization to supervised learning with multiple kernels, techniques of feature normalization and equalization were proposed. Further analysis shows the need to extend the concept of margin to a more general property of the separating hyperplane, namely its stability. As a result, the proposed stability-based complexity measure, whose reliability has been experimentally confirmed, allows the construction of multi-objective algorithms for arbitrary classes of kernels.
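The role of the Akaike and Bayesian information criteria mentioned above can be sketched as follows. This is a generic, assumed example, not the deterministic multi-objective algorithm of the thesis: it fits Gaussian RBF networks of increasing size by plain least squares (evenly spaced centers, a shared width) and scores each candidate with AIC and BIC, which penalize the number of basis functions against the residual fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data (an assumption for illustration, not a thesis benchmark).
x = np.linspace(-3, 3, 60)
y = np.exp(-x ** 2) + 0.05 * rng.standard_normal(60)

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix with a shared width."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, k, width=0.8):
    """Least-squares weights for k evenly spaced Gaussian basis functions."""
    centers = np.linspace(x.min(), x.max(), k)
    Phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    rss = float(np.sum((Phi @ w - y) ** 2))
    return w, rss

# Score each candidate network size: the RSS term rewards fit, while the
# k-dependent term penalizes complexity (BIC penalizes it more strongly).
n = len(x)
scores = {}
for k in range(1, 11):
    _, rss = fit_rbf(x, y, k)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    scores[k] = (aic, bic)

best_aic = min(scores, key=lambda k: scores[k][0])
best_bic = min(scores, key=lambda k: scores[k][1])
```

In the thesis's setting, such criteria act as a decision rule over the set of Pareto-optimal networks produced by the multi-objective algorithm, selecting a single model from the error/complexity front.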