A method for adapting large language models for communication card prediction in augmentative and alternative communication systems

Bibliographic details
Year of defense: 2023
Main author: PEREIRA, Jayr Alencar
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: English (eng)
Defending institution: Universidade Federal de Pernambuco (UFPE), Brazil
Programa de Pós-Graduação em Ciência da Computação
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://repositorio.ufpe.br/handle/123456789/52149
Abstract: Augmentative and Alternative Communication (AAC) systems assist individuals with complex communication needs to express themselves. Communication cards are a popular method used in AAC, where users select cards and arrange them in sequence to form a sentence. However, the limited number of cards displayed and the need to navigate multiple pages or folders can hinder users’ communication ability. To overcome these barriers, various methods, such as vocabulary organization, color coding systems, motor planning, and predictive models, have been proposed to aid message authoring. Predictive models can suggest the most probable next cards based on prior input. Recent advancements in Artificial Intelligence (AI) and Machine Learning (ML) have shown potential for improving the accessibility and customization of AAC systems. This study proposes adapting large language models to communication card prediction in AAC systems to facilitate message authoring. The proposed method involves three main steps: 1) adapting a text corpus to the AAC domain by either converting it into a corpus of telegraphic sentences or incorporating features that enable the exploration of visual cues; 2) fine-tuning a transformer-based language model on the adapted corpus; and 3) replacing the language model’s decoder weights with an encoded representation of the user’s vocabulary to generate a probability distribution over the user’s vocabulary items during inference. The proposed method leverages the fact that transformer-based language models, such as Bidirectional Encoder Representations from Transformers (BERT), share the weights of the input embeddings layer with the decoder in the language modeling head. Therefore, the plug-and-play method can be used without additional training for zero-shot communication card prediction. The method was evaluated in English and Brazilian Portuguese in both a zero-shot setting and a few-shot setting, where a small text corpus was used for fine-tuning.
Additionally, the impact of incorporating additional features into the training sentences by labeling them with the Colourful Semantics structure was assessed. The results demonstrate that the proposed method’s models outperform models pre-trained for the task. Moreover, the results indicate that incorporating Colourful Semantics improves the accuracy of communication card prediction. Thus, the proposed method utilizes the transfer learning ability of transformer-based language models to facilitate message authoring in AAC systems in a low-effort setting.
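The core of step 3, replacing the decoder weights with an encoded representation of the user’s vocabulary, can be illustrated with a minimal sketch. This is not the thesis implementation: the embedding matrix, card indices, and hidden state below are random placeholders. It only shows the mechanism the abstract describes, namely that with tied input/output embeddings, restricting the decoder to the embedding rows of the user’s cards yields a probability distribution over cards without any extra training.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
hidden_size = 8

# Stand-in for the tied input-embedding / LM-head matrix of a BERT-style model
# (100 hypothetical vocabulary items, embedding dimension 8).
E = rng.normal(size=(100, hidden_size))

# Hypothetical rows of E corresponding to the user's communication cards.
card_ids = [7, 23, 42, 91]

# "Plug-and-play" decoder: keep only the card embeddings as output weights.
V = E[card_ids]

# Stand-in for the encoder's hidden state at the position to be predicted.
h = rng.normal(size=hidden_size)

# Logits are dot products with the card embeddings; softmax gives a
# distribution over the user's cards instead of the full vocabulary.
card_probs = softmax(V @ h)
print(card_probs)
```

In a real model the same restriction can be applied by swapping the language-modeling head’s weight matrix for the stacked card embeddings, which is why the method needs no additional training in the zero-shot setting.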