Analysis of the impact of adversarial attacks on intrusion detection in cyber-physical systems
Year of defense: | 2024 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertation (master's) |
Access type: | Open access |
Language: | Portuguese (por) |
Defending institution: | Universidade Federal de Uberlândia, Brazil, Programa de Pós-graduação em Ciência da Computação |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Access links: | https://repositorio.ufu.br/handle/123456789/45044 http://doi.org/10.14393/ufu.di.2025.5044 |
Abstract: Cyber-Physical Systems (CPSs) are complex technological ecosystems that integrate computing, networking, and physical processes through interconnected devices. Ensuring information security in these systems is critical, and machine learning algorithms are widely employed to train anomaly-based Intrusion Detection Systems (IDSs) for their protection. This study evaluates the impact of adversarial attacks on machine learning algorithms applied to anomaly-based IDSs using two datasets: the Power System Smart Grid Monitoring Power dataset and the Ereno IEC-61850 Intrusion Dataset. The research investigates both single-classifier and ensemble-classifier approaches, focusing on two adversarial attacks: the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). Comprehensive analyses were conducted, including comparisons between FGSM and JSMA attacks, assessments of single-classifier versus ensemble-classifier performance, evaluations of the effect of reducing the stolen training set size, and the impact of incorporating adversarial samples into the training set. The findings reveal that the impact of adversarial attacks varies with the classifier type and dataset. Notably, ensemble classifiers generally exhibited greater resistance to adversarial attacks. A significant degradation in baseline performance was observed for FGSM attacks as the stolen training set size decreased. Conversely, in some scenarios, incorporating adversarial samples into the training set enhanced classifier performance. In summary, FGSM attacks were found to have a more pronounced negative impact on IDS performance. Additionally, ensemble classifiers demonstrated superior robustness to adversarial attacks compared to single classifiers, highlighting their effectiveness in IDSs for CPSs.
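
The abstract names the Fast Gradient Sign Method (FGSM) as one of the evaluated attacks. The minimal PyTorch sketch below illustrates the general idea of FGSM only; it is not the dissertation's experimental code, and the classifier `model`, the loss function, and the `epsilon` value are assumptions chosen for illustration.

```python
# Minimal FGSM sketch (illustrative only; not the dissertation's code).
# Assumes a differentiable PyTorch classifier `model`, a suitable loss
# function, and feature/label tensors shaped like an IDS dataset.
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Return adversarial copies of x using the Fast Gradient Sign Method:
    x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then detach the result.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical usage: craft perturbed test samples and compare the detector's
# accuracy on clean versus adversarial inputs, in the spirit of the
# comparisons described in the abstract.
# x_test_adv = fgsm_perturb(model, torch.nn.CrossEntropyLoss(), x_test, y_test)
```

In the study's setting, such perturbed samples would be presented to an anomaly-based IDS (single or ensemble classifier) to measure how much detection performance degrades under attack.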