Abstract:
This thesis presents CoLN (Combined Learning Network), a new approach to decentralized machine learning that employs a non-convex model focused on privacy preservation. Unlike traditional methods used in federated learning, which rely on simple parameter aggregations such as weighted averages, CoLN explores solutions beyond the convex space, addressing the complexities of non-identically distributed data commonly found in deep learning scenarios. CoLN stands out as an efficient alternative for collaborative learning in industrial and research contexts, particularly due to its applicability in scenarios with limited participants, where data centralization is not feasible due to privacy or regulatory constraints. The model enables effective collaboration among distinct parties, maintaining confidentiality and delivering consistent performance in a few iterations, even in environments with few participants. This is especially relevant for shared objectives that require collaboration among different stakeholders, enhancing participants' overall performance without the need to share raw data for centralized model training. Empirical tests demonstrate that CoLN can approximate the performance of centralized models, showing robustness across distinct neural network architectures, even with substantial variations in data among local models. With a simplified implementation, rapid adaptation to imbalanced datasets, and the ability to achieve combined generalization in a few iterations, CoLN emerges as a promising alternative for collaborative learning.
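For context, the "simple parameter aggregation" that the abstract contrasts CoLN against is the sample-size-weighted average of client parameters used in standard federated averaging. The sketch below is only an illustration of that baseline, not of CoLN itself; the function name, data shapes, and example values are assumptions introduced here.

```python
import numpy as np

def federated_weighted_average(client_weights, client_sizes):
    """Aggregate client model parameters by a sample-size-weighted average.

    client_weights: one list of per-layer numpy arrays per client.
    client_sizes:   number of training samples held by each client.
    """
    total = float(sum(client_sizes))
    fractions = [n / total for n in client_sizes]

    aggregated = []
    for layer_idx in range(len(client_weights[0])):
        # Convex combination of each layer across clients.
        layer = sum(frac * client[layer_idx]
                    for frac, client in zip(fractions, client_weights))
        aggregated.append(layer)
    return aggregated

# Example: two clients, each holding a single 2x2 weight matrix.
client_a = [np.array([[1.0, 2.0], [3.0, 4.0]])]
client_b = [np.array([[5.0, 6.0], [7.0, 8.0]])]
global_weights = federated_weighted_average([client_a, client_b], [100, 300])
print(global_weights[0])  # each entry equals 0.25 * a + 0.75 * b
```

Because the aggregate is restricted to convex combinations of the client parameters, it can struggle when client data are non-identically distributed; the thesis positions CoLN as moving beyond this convex space.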