A convolutional neural network approach for speech quality assessment

Bibliographic details
Year of defense: 2020
Main author: ALBUQUERQUE, Renato Quirino de
Advisor: MELLO, Carlos Alexandre Barros de
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: Portuguese
Defending institution: Universidade Federal de Pernambuco
Graduate program: Programa de Pós-Graduação em Ciência da Computação
Department: Not informed by the institution
Country: Brazil
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/38524
Abstract: An important aspect of speech understanding is quality, which can be defined as the fidelity of the signal in relation to its original (or idealized) version when a comparison is possible. Although speech quality is a subjective matter, there are approaches to measuring it. The most effective approach consists of applying subjective tests, in which individuals evaluate the quality of speech samples and assign them quality scores. However, automatic measurement methods exist that operate at lower cost and produce faster results. Such solutions can be divided into methodologies that use only the sample to be evaluated (non-reference) and those that use both the degraded and reference versions of the speech sample (full-reference). Unfortunately, for many current applications, it is impossible to obtain the original speech sample, which requires the development and application of non-reference techniques. Thus, this dissertation presents a convolutional neural network model for speech quality assessment (CNN-SQA). It is a non-reference methodology that applies convolutional layers as feature extractors for speech representation, while fully-connected layers perform the quality assessment step. As input to the model, several visual representations were evaluated, with MFCC coefficients yielding the best results. The remaining parameters of the new model were obtained through an iterative and incremental parameter selection process. The performance of the model was evaluated by comparing it with the PESQ, ViSQOL and P.563 methodologies. Further experiments analyze the model's behavior under isolated speech and noise conditions. The experiments were carried out on publicly available databases, as well as on a new database built to evaluate the new methodology in the context of background noise. Finally, the results were analyzed using correlation measures and statistical descriptions.
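
The abstract describes an architecture in which convolutional layers extract features from an MFCC representation of the speech sample and fully-connected layers map those features to a quality score. The following Python sketch illustrates this general non-reference design only; the layer counts, kernel sizes, and the 13-coefficient MFCC front end are assumptions for illustration and are not the exact configuration reported in the dissertation.

# Illustrative sketch of a non-reference CNN speech quality model
# operating on an MFCC input. All hyperparameters here are assumed.
import torch
import torch.nn as nn

class CNNSQA(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers act as feature extractors over the
        # time-frequency (MFCC) representation of the speech sample.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully-connected layers map the extracted features to a
        # single quality score (e.g., a MOS-like index).
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        # mfcc: (batch, 1, n_mfcc_coefficients, time_frames)
        return self.regressor(self.features(mfcc))

# Example with hypothetical shapes: two utterances, 13 MFCCs, 300 frames.
model = CNNSQA()
dummy = torch.randn(2, 1, 13, 300)
print(model(dummy).shape)  # torch.Size([2, 1])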
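The abstract also states that results were analyzed with correlation measures. A minimal sketch of that evaluation step, assuming subjective listener scores and model predictions are available as arrays, is shown below; the values are hypothetical.

# Hypothetical agreement check between subjective scores and model outputs.
import numpy as np
from scipy.stats import pearsonr, spearmanr

subjective = np.array([4.2, 3.1, 2.5, 4.8, 1.9])  # hypothetical listener scores
predicted = np.array([4.0, 3.3, 2.2, 4.6, 2.1])   # hypothetical model outputs

print("Pearson:", pearsonr(subjective, predicted)[0])
print("Spearman:", spearmanr(subjective, predicted)[0])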