Improving data locality of bagging ensembles for data streams through mini-batching

Bibliographic details
Year of defense: 2021
Main author: Cassales, Guilherme
Advisor: Senger, Hermes
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: eng
Defense institution: Universidade Federal de São Carlos
Câmpus São Carlos
Graduate program: Programa de Pós-Graduação em Ciência da Computação - PPGCC
Department: Not informed by the institution
Country: Not informed by the institution
Access link: https://repositorio.ufscar.br/handle/20.500.14289/15176
Abstract: Machine Learning techniques have been employed in virtually all domains in the past few years. In many applications, learning algorithms must cope with dynamic environments, under both memory and time constraints, to provide a (near) real-time answer. In this scenario, ensemble learning comprises a class of stream mining algorithms that achieve remarkable predictive performance. Ensembles are implemented as a set of individual learners whose predictions are aggregated to predict new incoming instances. Although ensembles can be computationally more expensive, they are naturally amenable to task parallelism. However, the incremental learning and dynamic data structures used to capture concept drift increase cache misses and hinder the benefits of parallelism. In this thesis, we devise a method capable of reducing the execution time and increasing the energy efficiency of several bagging ensembles for data streams. The method is based on a task-parallel model that leverages the natural independence of the underlying learners in this class of ensembles (bagging). The parallel model is combined with a mini-batching technique that improves the memory access locality of the ensembles. We consistently achieve speedups of 4X to 5X with 8 cores, and even a superlinear speedup of 12X in one case. We demonstrate that mini-batching can significantly decrease the reuse distance and the number of cache misses. We provide data on the trade-off between the reduction in execution time and the loss in predictive performance (ranging from less than 1% up to 12%). We conclude that the loss in predictive performance depends on the dataset characteristics and the mini-batch size used. We present evidence that small mini-batch sizes (e.g., up to 50 examples) provide a good compromise between execution time and predictive performance. We demonstrate that energy efficiency can be improved under three different workloads. Although the largest reduction in energy consumption occurs with the smallest workload, it comes at the cost of a large increase in response time, which may conflict with real-time processing requirements. For the larger workloads, however, the proposed method outperforms the baseline version in both energy consumption and response time. We evaluate our method on a total of six different hardware platforms across all experimental frameworks, using up to six different algorithms and up to five different datasets. Given the data gathered from executing the proposed method in such a wide range of setups, we believe it is a viable solution for improving the performance of online bagging ensembles.
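
The abstract describes the method only at a conceptual level. The sketch below is a minimal Java illustration (not the thesis implementation) of how mini-batching can be combined with task parallelism in an online bagging ensemble: incoming instances are buffered, and each worker task then trains a single learner on the whole mini-batch, so that learner's data structures stay cache-resident across consecutive instances. The Instance and Learner interfaces, class names, and method signatures are hypothetical placeholders for whatever the underlying stream-mining framework provides; real online bagging would also weight each instance with a Poisson(1) draw, which is omitted here for brevity.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-ins for the framework's instance and learner types.
interface Instance {}

interface Learner {
    void train(Instance inst);   // incremental (online) update
}

public class MiniBatchBagging {
    private final List<Learner> ensemble;
    private final List<Instance> batch = new ArrayList<>();
    private final int batchSize;
    private final ExecutorService pool;

    public MiniBatchBagging(List<Learner> ensemble, int batchSize, int threads) {
        this.ensemble = ensemble;
        this.batchSize = batchSize;
        this.pool = Executors.newFixedThreadPool(threads);
    }

    // Buffer the incoming instance; flush once the mini-batch is full.
    public void trainOnInstance(Instance inst) throws Exception {
        batch.add(inst);
        if (batch.size() >= batchSize) {
            flush();
        }
    }

    // One task per learner: each task consumes the entire mini-batch,
    // keeping that learner's model structures hot in the cache.
    private void flush() throws Exception {
        List<Instance> snapshot = new ArrayList<>(batch);
        List<Future<?>> tasks = new ArrayList<>();
        for (Learner learner : ensemble) {
            tasks.add(pool.submit(() -> {
                for (Instance inst : snapshot) {
                    learner.train(inst);
                }
            }));
        }
        for (Future<?> t : tasks) {
            t.get();   // wait for all learners before accepting new instances
        }
        batch.clear();
    }

    public void shutdown() {
        pool.shutdown();
    }
}

In this sketch, parallelism is over ensemble members rather than over instances, which preserves the sequential order in which each learner sees the stream; the mini-batch size controls the trade-off the abstract mentions between memory locality (larger batches) and responsiveness and predictive performance (smaller batches).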