Analyzing natural language inference from a rigorous point of view

Bibliographic details
Year of defense: 2020
Main author: Salvatore, Felipe de Souza
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: eng
Defending institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/
Abstract: Natural language inference (NLI) is the task of determining the entailment relationship between a pair of sentences. We are interested in the problem of verifying whether the deep learning models currently used for NLI satisfy certain logical properties. In this thesis, we focus on two such properties: (i) the capacity to solve deduction problems based on specific logical forms (e.g., Boolean coordination, quantifiers, definite descriptions, and counting operators); and (ii) the property of drawing the same conclusion from equivalent premises. For each of these properties we develop a new evaluation procedure. For (i) we offer a new synthetic dataset that can be used both for inference perception and inference generation; for (ii) we propose a null hypothesis test designed to capture the different ways in which including sentences with the same meaning can affect the training of a machine learning model. Our results show that although deep learning models achieve outstanding performance on the majority of NLI datasets, they still lack important inference skills, such as handling counting operators, predicting which word forms an entailment in a specific context, and producing the same deductions for two different text inputs with the same meaning. This indicates that, despite the high predictive power of these models, they exhibit inference biases that cannot be easily removed. Further investigation is needed to understand the scope of these biases; it is possible that increasing the training sample size during the fine-tuning phase can reduce them.
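
To make the first evaluation concrete, the sketch below generates a synthetic premise/hypothesis pair probing the counting operator, one of the logical forms named in the abstract. The templates, names, and labeling rule are hypothetical illustrations of the general idea, not the dataset construction actually used in the thesis.

    # Hypothetical sketch of a synthetic NLI pair testing counting operators.
    # Templates, names, and the labeling rule are illustrative assumptions,
    # not the thesis's actual dataset-generation code.
    import random

    NAMES = ["Alice", "Bob", "Carol", "Daniel", "Eve"]

    def counting_pair(n_mentioned: int, n_claimed: int):
        """Return a (premise, hypothesis, label) triple.

        The premise lists n_mentioned distinct people who left; the
        hypothesis claims that at least n_claimed people left. The pair
        is an entailment exactly when n_claimed <= n_mentioned, and
        neutral otherwise (the premise does not rule out that additional,
        unmentioned people also left).
        """
        people = random.sample(NAMES, n_mentioned)
        if n_mentioned == 1:
            subject = people[0]
        else:
            subject = ", ".join(people[:-1]) + " and " + people[-1]
        premise = f"{subject} left the party."
        hypothesis = f"At least {n_claimed} people left the party."
        label = "entailment" if n_claimed <= n_mentioned else "neutral"
        return premise, hypothesis, label

    if __name__ == "__main__":
        print(counting_pair(3, 2))  # expected label: entailment
        print(counting_pair(3, 4))  # expected label: neutral

Because the label follows deterministically from the counts, a set of such pairs lets one check whether a model's predictions respect the underlying counting logic rather than surface patterns.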