On the accuracy of code complexity metrics: A neuroscience-based guideline for improvement

Bibliographic details
Main author: Hao, Gao
Publication date: 2022
Other authors: Hijazi, Haytham; Durães, João; Medeiros, Julio; Couceiro, Ricardo; Lam, Chan Tong; Teixeira, César A.; Castelhano, João; Castelo-Branco, Miguel; Carvalho, Paulo de; Madeira, Henrique
Document type: Article
Language: English
Source: Repositórios Científicos de Acesso Aberto de Portugal (RCAAP)
Full text: https://hdl.handle.net/10316/113871
https://doi.org/10.3389/fnins.2022.1065366
Abstract: Complexity is a key element of software quality. This article investigates the problem of measuring code complexity and discusses the results of a controlled experiment comparing different views and methods for measuring it. Participants (27 programmers) were asked to read and (try to) understand a set of programs, while the complexity of those programs was assessed through different methods and perspectives: (a) classic code complexity metrics such as McCabe and Halstead metrics, (b) cognitive complexity metrics based on scored code constructs, (c) cognitive complexity metrics from state-of-the-art tools such as SonarQube, (d) human-centered metrics relying on the direct assessment of programmers' behavioral features (e.g., reading time and revisits) using eye tracking, and (e) cognitive load/mental effort assessed using electroencephalography (EEG). The human-centered perspective was complemented by the participants' subjective evaluation of the mental effort required to understand the programs, using the NASA Task Load Index (TLX). Additionally, code complexity was evaluated both at the program level and, whenever possible, at the fine-grained level of individual code constructs/code regions, to identify the actual code elements and the code context that may trigger a complexity surge in the programmers' perception of code comprehension difficulty. The programmers' cognitive load measured using EEG was used as a reference to evaluate how well the different metrics express the (human) difficulty of comprehending the code. Extensive experimental results show that popular metrics such as McCabe's V(g) and the complexity metric from SonarSource tools deviate considerably from the programmers' perception of code complexity and often do not show the expected monotonic behavior. The article summarizes the findings in a set of guidelines to improve existing code complexity metrics, particularly state-of-the-art metrics such as cognitive complexity from SonarSource tools.
Keywords: code complexity metrics; code comprehension; EEG; cognitive load; mental effort; code refactoring; code constructs
Publisher: Frontiers Media S.A.
ISSN: 1662-4548
Access rights: Open access
Funding: This work was funded in part by the BASE (Biofeedback Augmented Software Engineering) project under Grant POCI-01-0145-FEDER-031581, by the Centro de Informática e Sistemas da Universidade de Coimbra (CISUC), and in part by the Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), the Institute of Nuclear Sciences Applied to Health (ICNAS), and the University of Coimbra under Grant PTDC/PSI-GER/30852/2017 | CONNECT-BCI.
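For readers unfamiliar with the classic metrics compared in the study, their standard textbook definitions are sketched below; these are general background, not reproduced from the article itself. McCabe's cyclomatic complexity is computed from the program's control-flow graph:

$$V(G) = E - N + 2P$$

where $E$ is the number of edges, $N$ the number of nodes, and $P$ the number of connected components of the graph. Halstead's measures are derived from operator and operand counts (symbols here are local to these formulas): with $\eta_1$ distinct operators, $\eta_2$ distinct operands, $N_1$ total operator occurrences, and $N_2$ total operand occurrences,

$$\eta = \eta_1 + \eta_2, \qquad N = N_1 + N_2, \qquad \text{Volume } V = N \log_2 \eta, \qquad \text{Difficulty } D = \frac{\eta_1}{2} \cdot \frac{N_2}{\eta_2}, \qquad \text{Effort } E = D \cdot V$$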
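To make concrete why nesting matters for the monotonicity issue the abstract raises about V(g), here is a minimal Python sketch, under illustrative assumptions of my own choosing, contrasting a McCabe-style branch count with a toy nesting-weighted score in the spirit of (but not identical to) SonarSource's cognitive complexity:

```python
# A minimal sketch contrasting a McCabe-style count with a simplified
# nesting-weighted score. The scoring rules are illustrative assumptions
# only; this is NOT the SonarSource cognitive complexity implementation.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def cyclomatic(tree: ast.AST) -> int:
    """McCabe-style count: one path plus one per branching construct."""
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))

def nesting_weighted(node: ast.AST, depth: int = 0) -> int:
    """Toy cognitive-style score: each branch costs 1 plus its nesting
    depth, so deeply nested code is penalized more than flat code."""
    score = 0
    for child in ast.iter_child_nodes(node):
        if isinstance(child, (ast.If, ast.For, ast.While)):
            score += 1 + depth + nesting_weighted(child, depth + 1)
        else:
            score += nesting_weighted(child, depth)
    return score

flat = ast.parse("if a: x()\nif b: y()\nif c: z()")
nested = ast.parse("if a:\n    if b:\n        if c: z()")

# Both snippets have the same McCabe-style count (4), but the nested
# version gets a higher nesting-weighted score (1+2+3 vs. 1+1+1).
print(cyclomatic(flat), nesting_weighted(flat))      # -> 4 3
print(cyclomatic(nested), nesting_weighted(nested))  # -> 4 6
```

The two scores agree on flat code but diverge on nested code; that divergence between structure-counting metrics and perceived comprehension difficulty is precisely what the experiment evaluates against EEG-measured cognitive load.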