Unsupervised Dimensionality Reduction in Big Data via Massive Parallel Processing with MapReduce and Resilient Distributed Datasets

Bibliographic details
Year of defense: 2020
Main author: Oliveira, Jadson Jose Monteiro
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: English (eng)
Defending institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/55/55134/tde-20012021-125711/
Abstract: The volume and complexity of data generated in scientific and commercial applications have been growing exponentially in many areas. Nowadays, the need to find patterns in Terabytes or even Petabytes of complex data is common, such as image collections, climate measurements, fingerprints and large graphs extracted from the Web or from Social Networks. For example, how to analyze Terabytes of data from decades of frequent climate measurements comprising dozens of climatic features, such as temperature, rainfall and air humidity, so as to identify patterns that precede extreme weather events for use in alert systems? A well-known fact in complex data analysis is that the search for patterns requires preprocessing by means of dimensionality reduction, due to a problem known as the curse of dimensionality. To date, few techniques have been able to effectively reduce the dimensionality of such data at the scale of Terabytes or even Petabytes, which are referred to in this monograph as Big Data. In this context, massively parallel processing, linear scalability with the number of objects, and the ability to detect the most diverse types of correlations among the attributes are exceptionally desirable. This MSc work presents an in-depth study comparing two distinct approaches for dimensionality reduction in Big Data: (a) a standard approach based on data variance preservation, and (b) an alternative, fractal-based solution that is rarely explored, for which we propose a fast and scalable algorithm based on MapReduce and concepts from Resilient Distributed Datasets, using a new attribute-set-partitioning strategy that enables us to process datasets of high dimensionality. We evaluated both strategies by inserting into 11 real-world datasets redundant attributes formed by correlations of various types, such as linear, quadratic, logarithmic and exponential ones, and verifying the ability of these approaches to detect such redundancies. The results indicate that, at least for large datasets with up to 1,000 attributes, our fractal-based technique is the best option. It removed redundant attributes with high precision in nearly all cases, as opposed to the standard variance-preservation approaches, which presented considerably worse results even when applying the KPCA technique, which is designed to detect nonlinear correlations.
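
To make the fractal-based idea in the abstract more concrete, the sketch below estimates the correlation fractal dimension (D2) of a point set by box-counting over Spark Resilient Distributed Datasets. It is a minimal illustration under assumed names and parameters (PySpark, the helper cell_id, the list grid_sizes are all hypothetical choices made here); it is not the attribute-set-partitioning algorithm proposed in the dissertation.

    # Illustrative sketch only: box-counting estimate of the correlation
    # fractal dimension D2 with Spark RDDs. Names/parameters are assumptions,
    # not the dissertation's actual method.
    import numpy as np
    from pyspark import SparkContext

    def cell_id(point, r):
        # Assign a point to the grid cell of side r that contains it.
        return tuple(int(np.floor(x / r)) for x in point)

    def correlation_fractal_dimension(rdd, grid_sizes):
        # For each grid size r: count points per cell (map + reduceByKey),
        # then take S(r) = sum of squared cell counts. For self-similar data
        # S(r) ~ r^D2, so D2 is the slope of log S(r) versus log r.
        log_r, log_s = [], []
        for r in grid_sizes:
            s = (rdd.map(lambda p, r=r: (cell_id(p, r), 1))
                    .reduceByKey(lambda a, b: a + b)
                    .map(lambda kv: kv[1] ** 2)
                    .sum())
            log_r.append(np.log(r))
            log_s.append(np.log(s))
        slope, _ = np.polyfit(log_r, log_s, 1)
        return slope

    if __name__ == "__main__":
        sc = SparkContext(appName="fractal-dimension-sketch")
        points = np.random.rand(100000, 3).tolist()   # toy data in [0, 1)^3
        rdd = sc.parallelize(points).cache()
        d2 = correlation_fractal_dimension(rdd, grid_sizes=[0.5, 0.25, 0.125, 0.0625])
        print("estimated correlation fractal dimension D2:", d2)
        sc.stop()

In fractal-based dimensionality reduction, an attribute is generally treated as redundant when removing it barely changes the estimated intrinsic (fractal) dimension of the dataset; the correlated attributes injected in the evaluation above are redundancies of exactly this kind.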