Bibliographic details

Year of defense: 2020
Main author: SANTOS, Luã Lázaro Jesus dos
Advisor: ZANCHETTIN, Cleber
Defense committee: Not informed by the institution
Document type: Dissertation
Access type: Open access
Language: eng
Defending institution: Universidade Federal de Pernambuco
Graduate program: Programa de Pos Graduacao em Ciencia da Computacao
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/39490
Abstract:
Embedding artificial intelligence on constrained platforms has become a trend with the growth of embedded systems and mobile devices experienced in recent years. Although constrained platforms do not have enough processing capability to train a sophisticated deep learning model, such as a Convolutional Neural Network (CNN), they are already capable of performing inference locally using a previously trained embedded model. This approach brings numerous advantages, such as greater privacy, lower response latency, and no dependence on a real-time network connection. Still, the use of a local CNN model on constrained platforms is restricted by its storage size and processing requirements. Most CNN research has focused on increasing network depth to improve accuracy. In the text classification area, deep models have been proposed with excellent performance, but they rely on large architectures with thousands of parameters and consequently require high storage and processing capacity. One renowned model is the Very Deep Convolutional Neural Network (VDCNN). This dissertation proposes an architectural modification of the VDCNN model to reduce its storage size while preserving its performance. In this optimization process, the impacts of using Temporal Depthwise Separable Convolutions and Global Average Pooling in the network are evaluated with respect to parameters, storage size, dependence on dedicated hardware, and accuracy. The proposed Squeezed Very Deep Convolutional Neural Network (SVDCNN) model is between 10x and 20x smaller than the original version, depending on the network depth, maintaining a maximum disk size of 6 MB. Regarding accuracy, the network loses between 0.1% and 1.0% in accuracy while obtaining lower latency on non-dedicated hardware and a higher inference time ratio compared to the baseline model.
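The parameter saving mentioned in the abstract comes from replacing standard temporal (1D) convolutions with depthwise separable ones. A minimal sketch of the parameter-count arithmetic follows; the channel sizes and kernel width are hypothetical illustrations, not values taken from the dissertation:

```python
# Parameter counts (bias terms omitted) for one temporal (1D) convolution layer.
# Channel sizes and kernel width below are hypothetical, chosen only to
# illustrate the reduction; they are not the VDCNN/SVDCNN configuration.

def standard_conv1d_params(c_in: int, c_out: int, k: int) -> int:
    # Each of the c_out filters spans all c_in channels with width k.
    return c_in * c_out * k

def depthwise_separable_conv1d_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one width-k filter per input channel (c_in * k).
    # Pointwise step: a 1x1 convolution mixing channels (c_in * c_out).
    return c_in * k + c_in * c_out

c_in, c_out, k = 256, 256, 3
std = standard_conv1d_params(c_in, c_out, k)
sep = depthwise_separable_conv1d_params(c_in, c_out, k)
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.2f}x")
```

The reduction factor is roughly 1 / c_out + 1 / k relative to the standard layer, which is why the saving grows with wider layers and larger kernels.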