Bibliographic details
Year of defense: 2023
Main author: Barguil, João Marcos de Mattos
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: eng
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/45/45134/tde-02012024-175653/
Abstract:
Can persons without disabilities be good evaluators of accessibility? This question, often posed by persons with disabilities when looking at crowdsourced accessibility maps, relates to one of the most important unresolved issues in crowdsourcing: data quality control. Many recent ground-breaking advances in machine learning depend on data annotation performed by humans. Existing approaches for managing inaccuracies in crowdsourcing validate output against preset gold standards, but they are unsuitable for subjective contexts such as sentiment analysis, semantic annotation, or measuring accessibility. While existing accessibility maps are largely concentrated in Europe and the United States, we built the largest database of its kind in Latin America. We detail the techniques used to engage over 27,000 volunteers, who generated more than 300,000 data points over the course of 90 months, and a novel method for validating data quality in a context that lacks a definite ground truth. We tested it by applying concepts from serious games to expose the biases of different demographic profiles, and crowdsourced a separate dataset to validate data quality. We found that persons without disabilities did not perform worse than persons with disabilities, strong evidence that crowdsourcing can be a reliable source of accessibility data.