Target propagation in artificial neural networks (Propagação de alvos em redes neurais artificiais)

Bibliographic details
Year of defense: 2019
Main author: Farias, Tiago de Souza
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (dissertação)
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal de Santa Maria (UFSM), Brazil; Centro de Ciências Naturais e Exatas; Programa de Pós-Graduação em Física
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://repositorio.ufsm.br/handle/1/16956
Abstract: Artificial neural networks are a form of artificial intelligence focused on systems that learn from data. Training a neural network requires a method that evolves its internal parameters, which are responsible for solving a given problem. Target propagation is a training method that estimates the best neural activity for accomplishing an objective. We developed a variation of target propagation in which the ideal activity is obtained from the gradient of an objective function. Random weight initialization is the technique of setting the initial random values of a neural network before training. We present an initialization scheme that accounts for the non-linear effects of the neurons and the distribution of the data. Hyperparameters are values that regulate how the parameters evolve; they are generally obtained heuristically, which wastes computational resources. We show a method to obtain hyperparameters without the need for search algorithms. Quantum neural networks are a form of artificial intelligence that harnesses quantum phenomena for computational power. Inspired by a theory of entanglement in the biological brain, we developed a quantum correlation technique among neurons that can improve performance. On image classification problems, the results show that the four techniques can improve neural network performance and, under certain conditions, lower the computational cost.
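
The gradient-based variant of target propagation described in the abstract (each layer receives an "ideal" activity obtained from a gradient step on the objective, and is then trained locally to reproduce that activity) can be sketched as follows. This is only an illustration of the general idea under assumed details, not the author's implementation; the toy network, layer sizes, step sizes, and all variable names are invented for the example.

# Minimal sketch of gradient-based target propagation on a toy two-layer
# network (illustrative only; sizes and rates are arbitrary assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Toy network: x -> h = tanh(W1 x + b1) -> y = W2 h + b2
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_out, n_hid)); b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)   # hidden activity
    y = W2 @ h + b2            # output activity
    return h, y

x = rng.normal(size=n_in)      # one toy input sample
t = rng.normal(size=n_out)     # its desired output

eta_target = 0.1               # step size used to build the activity targets
lr = 0.05                      # local learning rate of each layer

for step in range(500):
    h, y = forward(x)

    # Global objective: L = 0.5 * ||y - t||^2
    dL_dy = y - t              # gradient of L w.r.t. the output activity
    dL_dh = W2.T @ dL_dy       # gradient of L w.r.t. the hidden activity

    # Gradient-based targets: the "ideal" activity of each layer is a small
    # gradient step on the objective from its current activity.
    y_target = y - eta_target * dL_dy
    h_target = h - eta_target * dL_dh

    # Each layer is trained locally to move its output toward its target.
    # Output layer: minimize 0.5 * ||W2 h + b2 - y_target||^2
    e_out = y - y_target
    W2 -= lr * np.outer(e_out, h)
    b2 -= lr * e_out

    # Hidden layer: minimize 0.5 * ||tanh(W1 x + b1) - h_target||^2
    e_hid = (h - h_target) * (1 - h**2)   # chain rule through tanh
    W1 -= lr * np.outer(e_hid, x)
    b1 -= lr * e_hid

_, y_final = forward(x)
print("final loss:", 0.5 * np.sum((y_final - t) ** 2))

With gradient-derived targets the local layer updates reduce the global objective much like backpropagation would, but each layer only ever fits its own activity target; the dissertation evaluates this kind of scheme on image classification.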