Adversarial selection of challenge-response pairs as a defense against strong PUF modeling attacks
Year of defense: | 2019 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertation |
Access type: | Open access |
Language: | eng |
Defending institution: | Universidade Federal do Rio de Janeiro, Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa de Engenharia, Programa de Pós-Graduação em Engenharia de Sistemas e Computação (UFRJ), Brasil |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords (Portuguese): | |
Access link: | http://hdl.handle.net/11422/14057 |
Abstract: | In this work, we present methods to further secure hardware-embedded authentication mechanisms known as Physically Unclonable Functions (PUFs). These mechanisms use the unique physical characteristics of the chips they are embedded in to generate sets of responses, but such responses were found to be vulnerable to Machine Learning modelling attacks. The techniques developed herein use Adversarial Machine Learning to select the binary strings used in the authentication operations, commonly known as Challenge-Response Pairs, in order to protect devices using these PUFs from having their authentication credentials copied. The result of this research is a series of methods, applicable to different scenarios, that reduce the accuracy of possible modelling attacks by up to 19%. |
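The abstract only names the approach, so the following is a minimal illustrative sketch of what adversarial selection of Challenge-Response Pairs could look like; it is not the dissertation's actual method. Everything in it is assumed: a 64-stage additive-delay arbiter PUF simulation, a logistic-regression surrogate standing in for the attacker's model, and a confidence-based filter that keeps only the challenges the surrogate finds hardest.

```python
# Illustrative sketch (assumptions, not the dissertation's method): keep only the
# challenges on which a surrogate attacker model is least confident, so the CRPs
# exposed during authentication carry less information for modelling attacks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_STAGES = 64                                  # assumed arbiter PUF size
weights = rng.normal(size=N_STAGES + 1)        # secret delay parameters of the simulated PUF


def to_features(challenges):
    """Parity-vector transform of the standard additive arbiter-PUF delay model."""
    phi = 1 - 2 * challenges                           # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]    # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])


def puf_response(challenges):
    """Response bit of the simulated PUF for each challenge."""
    return (to_features(challenges) @ weights > 0).astype(int)


# 1. Attacker's surrogate model, trained on CRPs assumed to have leaked earlier.
leaked = rng.integers(0, 2, size=(2000, N_STAGES))
surrogate = LogisticRegression(max_iter=1000)
surrogate.fit(to_features(leaked), puf_response(leaked))

# 2. Defender draws candidate challenges and retains the ones closest to the
#    surrogate's decision boundary (lowest confidence).
candidates = rng.integers(0, 2, size=(10000, N_STAGES))
confidence = np.abs(surrogate.predict_proba(to_features(candidates))[:, 1] - 0.5)
selected = candidates[np.argsort(confidence)[:1000]]   # 1000 "hard" challenges

print("Surrogate accuracy on selected CRPs:",
      surrogate.score(to_features(selected), puf_response(selected)))
```

Under these assumptions, the surrogate's accuracy on the selected challenges is lower than on uniformly random ones, which is the general effect the abstract describes; the dissertation's own selection criteria and the reported 19% reduction come from its full experiments, not from this sketch.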