AutoParBench: a framework for parallel code verification
Year of defense: | 2020 |
---|---|
Principal author: | |
Advisor: | |
Defense committee: | |
Document type: | Dissertation |
Access type: | Open access |
Language: | eng |
Defending institution: | Universidade Federal de Minas Gerais, Brasil, ICX - Departamento de Ciência da Computação, Programa de Pós-Graduação em Ciência da Computação, UFMG |
Graduate program: | Not informed by the institution |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Access link: | http://hdl.handle.net/1843/43133 |

Abstract: There are presently many parallelization tools based on the automatic insertion of OpenMP pragmas into programs. However, it is challenging to compare these tools automatically and quantitatively, because of the many different ways in which a program can be parallelized. This work describes AutoParBench, a test framework aimed at mitigating this problem. AutoParBench consists of benchmarks and a verifier. The benchmarks currently include 99 programs with 1,579 loops, and a procedure is defined to allow quick and easy addition of new programs. The verifier consists of a common intermediate representation, based on JSON, plus all the machinery necessary to convert OpenMP programs into a format henceforth called a JSON snapshot. The snapshots produced by different tools enable automatic, semantics-aware comparison of syntactically different parallelization results. AutoParBench is an effective bug-finding instrument. By investigating differences between snapshots produced by separate sources, i.e., tool versus tool or tool versus human, we have reported bugs in selected parallelization tools such as ICC, Cetus, AutoPar and DawnCC, all of which have been confirmed.
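
The abstract does not spell out AutoParBench's snapshot schema, so the sketch below only illustrates the general idea of a semantics-aware comparison: two snapshots are treated as equivalent when their per-loop parallelization decisions match, ignoring syntactic details such as clause or variable order. All JSON keys (`loops`, `pragma`, `private`, `reduction`), field meanings, and file names here are hypothetical, not the tool's actual format.

```python
import json


def normalize_loop(loop: dict) -> tuple:
    """Reduce one loop entry to an order-insensitive canonical form, so that
    syntactically different pragmas with the same meaning (e.g., different
    clause or variable order) compare as equal."""
    pragma = loop.get("pragma", {})
    return (
        loop["id"],                                          # loop identifier, e.g. "file.c:42"
        pragma.get("parallel", False),                       # was the loop parallelized at all?
        frozenset(pragma.get("private", [])),                # private variables, order ignored
        frozenset(map(tuple, pragma.get("reduction", []))),  # (operator, variable) pairs
    )


def compare_snapshots(path_a: str, path_b: str) -> list:
    """Return the ids of loops on which two JSON snapshots disagree."""
    with open(path_a) as fa, open(path_b) as fb:
        snap_a, snap_b = json.load(fa), json.load(fb)
    norm_a = {loop["id"]: normalize_loop(loop) for loop in snap_a["loops"]}
    norm_b = {loop["id"]: normalize_loop(loop) for loop in snap_b["loops"]}
    return [lid for lid in norm_a if norm_a[lid] != norm_b.get(lid)]


if __name__ == "__main__":
    # e.g., a tool-produced snapshot versus a human-annotated reference
    for loop_id in compare_snapshots("tool_snapshot.json", "reference_snapshot.json"):
        print(f"divergence at loop {loop_id}")
```

In this sketch, any loop reported by only one source, or annotated with incompatible privatization or reduction decisions, shows up as a divergence; such divergences are the kind of tool-versus-tool or tool-versus-human differences the abstract says were investigated to report bugs.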