Towards FPGA-embedded CNNs: network quantization and HDL infrastructure for bringing CNNs into FPGAs.

Bibliographic details
Year of defense: 2021
Main author: Ferreira, Vitor Finotti
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (Dissertação)
Access type: Open access
Language: eng
Defending institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/3/3141/tde-14042021-153731/
Abstract: Convolutional neural networks (CNNs) have played a prominent role in recent years in the field of computer vision, becoming the dominant approach for recognition and detection tasks. To bring the benefits of CNNs to mobile and embedded devices, quantization strategies have been used to reduce model size and increase computational efficiency. However, embedding convolutional neural networks is not merely a matter of changing the target hardware architecture. Restrictions on storage, memory, computational resources, and even available energy pose a challenge to bringing the benefits of modern CNN architectures into embedded systems. In the particular case of FPGAs, where energy efficiency meets low latency and high bandwidth, these challenges are even more complex given the dominance of general-purpose architectures such as GPUs and CPUs in the field. This work investigates the aspects relevant to a successful implementation of CNNs in embedded systems in general and, in more detail, in FPGAs, where the benefits of CNNs can be combined with low latency and high bandwidth. The state of the art in strategies for efficient computation and storage of CNNs is explored. We show that it is possible to reduce CNN model size by more than 50% while maintaining similar classification accuracy, without the need for retraining or model adjustment. We also measure the relationship between classification complexity and tolerance to quantization, finding an inverse correlation between the quantization level and dataset complexity. For the specific case of CNNs on FPGAs, details of the infrastructure required for CNN inference are given, presenting a soft microcontroller and a complete framework capable of supporting CNN implementations.
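The abstract reports reducing CNN model size by more than 50% through quantization without retraining. As a rough illustration of the general idea (not the thesis's exact method), the sketch below applies uniform post-training quantization to a weight tensor, mapping 32-bit floats to 8-bit signed integers with a single per-tensor scale; the tensor shape and scale choice here are illustrative assumptions.

```python
import numpy as np

def quantize_linear(weights, bits=8):
    """Uniform post-training quantization of a weight tensor.

    Maps float weights to signed integers with `bits` bits using a
    per-tensor scale factor, without any retraining.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax     # per-tensor scale
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Example: a random conv-like weight tensor (hypothetical shape)
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 3, 3, 3)).astype(np.float32)
q, s = quantize_linear(w)
w_hat = dequantize(q, s)

# Storage drops from 32 to 8 bits per weight (a 75% reduction),
# while the worst-case rounding error stays below half a scale step.
print(w.nbytes, q.nbytes)          # bytes before and after
print(np.max(np.abs(w - w_hat)))   # worst-case reconstruction error
```

Going from float32 to int8 alone gives a 75% storage reduction, which is consistent with the abstract's claim of cutting model size by more than half while classification accuracy is preserved; in practice, per-channel scales and activation quantization are common refinements.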