Implementing a minimal differentiable virtual machine in recurrent neural networks
| Field | Value |
|---|---|
| Year of defense | 2018 |
| Main author | |
| Advisor | |
| Defense committee | |
| Document type | Dissertation |
| Access type | Open access |
| Language | Portuguese (por) |
| Defending institution | Universidade Federal do Rio de Janeiro, Brasil; Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia; Programa de Pós-Graduação em Engenharia de Sistemas e Computação (UFRJ) |
| Graduate program | Not informed by the institution |
| Department | Not informed by the institution |
| Country | Not informed by the institution |
| Keywords (Portuguese) | |
| Access link | http://hdl.handle.net/11422/13068 |
| Abstract | Deep Learning techniques have achieved impressive results in many domains over the last few years. However, it is still difficult to produce understandable models that clearly expose the logic behind their decisions while remaining competitive in performance. One step in this direction is the recent development of neural programmers. This work proposes a very simple neural programmer with an extensible differentiable virtual machine that can be easily integrated into existing deep learning architectures, adding modules with more transparent reasoning to current models. At the same time, it enables neural networks to learn to write and execute algorithms within the same training environment. Tests conducted with the proposed network suggest that it has the potential to induce algorithms even without any kind of special optimization, while remaining competitive with current recurrent neural network architectures. |
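
The abstract describes a differentiable virtual machine driven by a recurrent controller, trained end to end alongside the rest of the network. Below is a minimal sketch of one common way such a module can be realized: a GRU controller emits a softmax over a small set of differentiable "instructions", and the VM state is updated with the soft mixture of their results. All names (`DiffVM`, the instruction set, the sizes) are illustrative assumptions, not taken from the dissertation.

```python
# Minimal sketch of a differentiable VM with soft instruction execution.
# Everything here is illustrative; it is NOT the dissertation's implementation.
import torch
import torch.nn as nn


class DiffVM(nn.Module):
    """A GRU controller picks, at each step, a softmax mixture over a tiny
    hand-defined set of differentiable instructions acting on a register
    vector. Because every operation is differentiable, writing and executing
    the "program" can be trained end to end with backpropagation."""

    def __init__(self, reg_size=8, hidden_size=32):
        super().__init__()
        # Illustrative instruction set: each op maps registers -> registers.
        self.instructions = [
            lambda r: r,                          # NOP
            lambda r: r + 1.0,                    # INC all registers
            lambda r: torch.roll(r, 1, dims=-1),  # SHIFT registers
            lambda r: torch.zeros_like(r),        # CLEAR
        ]
        self.controller = nn.GRUCell(reg_size, hidden_size)
        self.op_head = nn.Linear(hidden_size, len(self.instructions))

    def forward(self, registers, steps=5):
        h = registers.new_zeros(registers.size(0), self.controller.hidden_size)
        for _ in range(steps):
            h = self.controller(registers, h)
            weights = torch.softmax(self.op_head(h), dim=-1)          # (B, n_ops)
            candidates = torch.stack(
                [op(registers) for op in self.instructions], dim=1)   # (B, n_ops, R)
            # New state = expectation of instruction results under the
            # controller's soft choice; this keeps execution differentiable.
            registers = (weights.unsqueeze(-1) * candidates).sum(dim=1)
        return registers


if __name__ == "__main__":
    vm = DiffVM()
    regs = torch.zeros(2, 8)      # batch of 2 register vectors, 8 registers each
    out = vm(regs, steps=3)
    print(out.shape)              # torch.Size([2, 8])
```

Since the state update is an ordinary tensor expression, a module like this can be dropped into a larger recurrent architecture and trained with the same optimizer as the rest of the network, which is the kind of integration the abstract refers to.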