Minimizing latency and maintenance time during server updates on edge computing infrastructures

Bibliographic details
Year of defense: 2023
Main author: Souza, Paulo Silas Severo de
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: eng
Instituição de defesa: Pontif?cia Universidade Cat?lica do Rio Grande do Sul
Escola Polit?cnica
Brasil
PUCRS
Programa de P?s-Gradua??o em Ci?ncia da Computa??o
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://tede2.pucrs.br/tede2/handle/tede/11414
Abstract: Edge Computing offers low latency for real-time applications by shifting processing tasks from traditional cloud data centers to the network's edge, near data sources. As expectations about Edge Computing grow, so does the pressure on IT personnel responsible for planning and executing maintenance operations on edge infrastructures. Maintenance at the edge is critical, given that edge servers, especially those installed in outdoor facilities, are exposed to several vulnerabilities, including hardware issues and security threats. To make matters more complicated, many unique characteristics of edge infrastructures, such as tight application latency requirements and the physical dispersion of edge servers, hinder the reuse of maintenance strategies designed for the cloud. In light of this scenario, this doctoral thesis seeks to enable faster updates of edge servers while reducing the impact of maintenance on edge application performance. To this end, this doctoral thesis makes the following contributions: (i) a literature review that organizes existing maintenance research targeting Edge Computing and two related paradigms (Cloud Computing and Internet of Things) according to a novel taxonomy; (ii) a simulation toolkit, called EdgeSimPy, that models various components of edge infrastructures and supports maintenance use cases; (iii) two maintenance strategies, called Lamp and Laxus, that incorporate user location awareness into maintenance decision-making to reduce the impact of server updates on application latency; and (iv) a maintenance strategy, called Hermes, that capitalizes on the shared content of container images of edge applications to reduce maintenance time through optimized relocations. Extensive experiments show that the proposed solutions can accelerate edge server updates while reducing the impact of maintenance on edge application performance compared to strategies from the literature.
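The idea that Hermes builds on, exploiting shared content between container images, follows from how container images are split into content-addressed layers: a target server that already caches some layers only needs to download the missing ones. The minimal sketch below illustrates that general principle; all names (`migration_cost`, `pick_target`, the example digests and sizes) are illustrative assumptions and do not reflect the actual Hermes algorithm or the EdgeSimPy API.

```python
# Illustrative sketch of layer-sharing-aware relocation: choose the
# target server that minimizes the bytes transferred for a migration,
# counting only the image layers the target does not already cache.
# Function and variable names are hypothetical, not from Hermes/EdgeSimPy.

def migration_cost(image_layers: dict[str, int], cached_layers: set[str]) -> int:
    """Bytes (here, MB) to transfer: only layers absent from the target's cache."""
    return sum(size for digest, size in image_layers.items()
               if digest not in cached_layers)

def pick_target(image_layers: dict[str, int], servers: dict[str, set[str]]) -> str:
    """Pick the candidate server with the smallest transfer volume."""
    return min(servers, key=lambda s: migration_cost(image_layers, servers[s]))

# Example: an application image with three layers (digest -> size in MB).
app = {"base-os": 120, "runtime": 80, "app-code": 15}

# Candidate servers and the layer digests each one already caches.
servers = {
    "edge-a": {"base-os"},             # would fetch runtime + app-code (95 MB)
    "edge-b": {"base-os", "runtime"},  # would fetch only app-code (15 MB)
    "edge-c": set(),                   # would fetch everything (215 MB)
}

print(pick_target(app, servers))                   # edge-b
print(migration_cost(app, servers["edge-b"]))      # 15
```

In practice, a real strategy would weigh this transfer cost against latency and capacity constraints; the sketch only isolates the layer-deduplication term that makes some relocations much cheaper than others.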