Auto test generator: a framework to generate test cases from requirements in natural language

Bibliographic details
Year of defense: 2019
Main author: PINA, Thaís Melise Lopes
Advisor: SAMPAIO, Augusto Cezar Alves
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: eng
Defense institution: Universidade Federal de Pernambuco
Graduate program: Programa de Pós-Graduação em Ciência da Computação
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/33916
Abstract: Testing is essential in the software engineering development process. However, it is also one of the most costly tasks, and test automation has therefore become the goal of much research. Since the design, implementation, and execution phases depend substantially on the system requirements, it is of the utmost importance that requirements texts be standardized and clear. However, most companies write these documents in free natural language, which gives rise to (lexical and structural) ambiguity and, consequently, to different interpretations. One option to mitigate this problem is the use of a Controlled Natural Language (CNL), aiming at the standardization and accuracy of the texts. A CNL is a subset of a natural language that uses a lexicon restricted to a particular domain and follows grammatical rules that guide the elaboration of sentences, thus reducing ambiguity and allowing mechanized processing, such as the automatic generation of test cases from CNL requirements. This work, in the software testing area, presents the Auto Test Generator (ATG), a tool that assists the writing of requirements and automatically generates test cases written in English, which are then automatically translated into test scripts using an automation framework. From a requirement written in CNL, ATG creates a Use Case (UC). Owing to the standardization of the language, a consistency and dependency analysis can be performed for each UC step through a graph of associations (dependencies and cancellations) between test actions. Test cases are generated automatically from the UCs in a way that is transparent to the user. ATG was developed and evaluated in partnership with Motorola Mobility. In the experimental evaluation, 34 test cases were generated from the seven requirements analyzed. The generated test cases comprised 151 steps, which were passed to Zygon (a proprietary test automation tool) to be automated. As a result, 131 test steps were correctly automated (86% of the input).
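
The dependency analysis mentioned in the abstract (a graph of dependencies and cancellations between test actions, used to check the consistency of UC steps) can be pictured with a small sketch. The code below is not taken from the dissertation or from ATG; the action names, data structures, and the check_use_case function are illustrative assumptions about how such an analysis might look.

```python
# Illustrative sketch only: a toy consistency check over a graph of
# associations between test actions, in the spirit of the analysis the
# abstract describes. Action names and rules are hypothetical.

DEPENDS_ON = {            # action -> actions that must have happened before it
    "send_message": {"open_chat"},
    "open_chat": {"login"},
}

CANCELS = {               # action -> earlier actions whose effect it undoes
    "logout": {"login"},
    "close_chat": {"open_chat"},
}

def check_use_case(steps):
    """Return inconsistency messages for a sequence of test actions (UC steps)."""
    done, issues = set(), []
    for i, action in enumerate(steps, start=1):
        missing = DEPENDS_ON.get(action, set()) - done
        if missing:
            issues.append(f"step {i} ({action}): missing prerequisite(s) {sorted(missing)}")
        done.add(action)
        done -= CANCELS.get(action, set())  # cancelled actions no longer count as satisfied
    return issues

if __name__ == "__main__":
    # "close_chat" cancels "open_chat", so the final "send_message" step is flagged.
    uc = ["login", "open_chat", "close_chat", "send_message"]
    for problem in check_use_case(uc):
        print(problem)
```

In this toy version, each UC step is checked against the prerequisites recorded for its action, while cancellation edges remove previously satisfied actions, so inconsistencies such as sending a message after the chat has been closed are reported before test cases are generated.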