Evidence Levels for Neuroradiology Articles: Low Agreement Among Raters
Main Author: | Ramalho, J |
---|---|
Publication Date: | 2015 |
Other Authors: | Tedesqui, G; Ramalho, M; Azevedo, RS; Castillo, M |
Format: | Article |
Language: | eng |
Source: | Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
Full text: | http://hdl.handle.net/10400.17/3010 |
Summary: |
BACKGROUND AND PURPOSE: Because evidence-based articles are difficult to recognize among the large volume of publications available, some journals have adopted evidence-based medicine criteria to classify their articles. Our purpose was to determine whether an evidence-based medicine classification used by a subspecialty-imaging journal allowed consistent categorization of levels of evidence among different raters.
MATERIALS AND METHODS: One hundred consecutive articles in the American Journal of Neuroradiology were classified as to their level of evidence by the 2 original manuscript reviewers, and their interobserver agreement was calculated. After publication, abstracts and titles were reprinted and independently ranked by 3 different radiologists at 2 different time points. Interobserver and intraobserver agreement was calculated for these radiologists.
RESULTS: The interobserver agreement between the original manuscript reviewers was -0.2283 (standard error = 0.0000; 95% CI, -0.2283 to -0.2283); among the 3 postpublication reviewers for the first evaluation, it was 0.1899 (standard error = 0.0383; 95% CI, 0.1149-0.2649); and for the second evaluation, performed 3 months later, it was 0.1145 (standard error = 0.0350; 95% CI, 0.0460-0.1831). The intraobserver agreement was 0.2344 (standard error = 0.0660; 95% CI, 0.1050-0.3639), 0.3826 (standard error = 0.0738; 95% CI, 0.2379-0.5272), and 0.6611 (standard error = 0.0656; 95% CI, 0.5325-0.7898) for the 3 postpublication evaluators, respectively. These results show no-to-fair interreviewer agreement and a tendency to slight intrareviewer agreement.
CONCLUSIONS: Inconsistent use of evidence-based criteria by different raters limits their utility when attempting to classify neuroradiology-related articles. |
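The agreement figures reported above are pairwise rater-agreement coefficients. This record does not name the statistic used, but for two raters assigning categorical levels of evidence the standard choice is Cohen's kappa, which corrects raw percentage agreement for agreement expected by chance. A minimal sketch of that calculation, using made-up ratings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels of the same items."""
    n = len(rater1)
    # Observed proportion of items on which the raters agree.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    if p_chance == 1.0:  # both raters used a single identical label
        return 1.0
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical levels of evidence (1-4) assigned by two raters to 8 articles.
r1 = [1, 2, 2, 3, 1, 4, 2, 3]
r2 = [1, 3, 2, 3, 2, 4, 1, 3]
print(round(cohens_kappa(r1, r2), 3))  # → 0.489
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, and negative values (as between the original manuscript reviewers here) mean agreement worse than chance.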
id | RCAP_8c83df433b99017e7f9e5b24241006db |
---|---|
OAI identifier | oai:repositorio.chlc.pt:10400.17/3010 |
Repository | Repositório da Unidade Local de Saúde São José |
Network | RCAAP (Repositórios Científicos de Acesso Aberto de Portugal) |
Institution | FCCN, serviços digitais da FCT – Fundação para a Ciência e a Tecnologia |
OpenDOAR | https://opendoar.ac.uk/repository/7160 |
Authors | Ramalho, J; Tedesqui, G; Ramalho, M; Azevedo, RS; Castillo, M |
Subjects | Evidence-Based Medicine; Humans; Observer Variation; Periodicals as Topic; CHLC NRAD |
Publisher | American Society of Neuroradiology |
Publication date | 2015-06 |
Deposited | 2018-08-06 |
DOI | 10.3174/ajnr.A4242 |
URL | http://hdl.handle.net/10400.17/3010 |
Type | article (publishedVersion) |
Format | application/pdf |
Language | eng |
Access | openAccess |
Contact | info@rcaap.pt |