Location prediction from external sensors trajectories

Bibliographic details
Year of defense: 2022
Main author: Cruz, Lívia Almada
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Thesis
Access type: Open access
Language: eng
Defense institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/68174
Abstract: This thesis proposes a multi-task deep learning-based scheme to predict the next location from trajectories captured by external sensors (e.g., traffic surveillance cameras or speed cameras). The positions reported in these trajectories are sparse, due to the distribution of the sensors, and incomplete, because the sensors can fail to register the passage of objects. The framework includes different pre-processing steps to align the representation of trajectories and to deal with the problem of missing data. We present a multi-task learning approach based on recurrent neural networks that uses time and space information in the training phase to learn more meaningful representations. The multi-task learning model, together with the pre-processing steps, substantially improves prediction performance. This thesis also addresses the problem of representation learning for trajectory data. Representation learning concerns learning low-dimensional representations from complex data and is an essential task in machine learning. We evaluate how natural language processing models capture the representation of sensors and trajectories. The empirical evaluation shows that the feature space identified by such models can capture spatial similarity relationships for sensors and trajectories within a given neighborhood. We also evaluate how these representations improve a location prediction model.
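
Note: the abstract does not give implementation details. As a rough illustration of the kind of model it describes, the sketch below assumes trajectories are encoded as sequences of sensor IDs and pairs a next-sensor classifier with auxiliary regressors for the time and coordinates of the next observation, combined in a weighted multi-task loss. All names, layer sizes, auxiliary targets, and loss weights are hypothetical, not the exact architecture proposed in the thesis.

    # Hypothetical sketch: multi-task RNN for next-location prediction from
    # sensor-ID sequences. Auxiliary targets (time to next record, next sensor
    # coordinates) and loss weights are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiTaskNextLocation(nn.Module):
        def __init__(self, num_sensors: int, emb_dim: int = 64, hidden_dim: int = 128):
            super().__init__()
            self.embedding = nn.Embedding(num_sensors, emb_dim)    # sensor-ID embeddings
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.next_sensor = nn.Linear(hidden_dim, num_sensors)  # main task: next sensor
            self.next_time = nn.Linear(hidden_dim, 1)               # auxiliary: time to next record
            self.next_coords = nn.Linear(hidden_dim, 2)             # auxiliary: lat/lon of next sensor

        def forward(self, sensor_ids):
            # sensor_ids: (batch, seq_len) integer tensor of observed sensor IDs
            hidden, _ = self.rnn(self.embedding(sensor_ids))
            last = hidden[:, -1, :]                                  # state after the last observation
            return self.next_sensor(last), self.next_time(last), self.next_coords(last)

    def multitask_loss(logits, t_pred, xy_pred, sensor_y, t_y, xy_y, alpha=0.3, beta=0.3):
        # Weighted sum of the main classification loss and the auxiliary
        # regression losses; alpha and beta are placeholder weights.
        loss_cls = nn.functional.cross_entropy(logits, sensor_y)
        loss_t = nn.functional.mse_loss(t_pred.squeeze(-1), t_y)
        loss_xy = nn.functional.mse_loss(xy_pred, xy_y)
        return loss_cls + alpha * loss_t + beta * loss_xy

In the same spirit, the sensor and trajectory representations evaluated in the thesis could be learned by treating sensors as tokens and trajectories as sentences in an off-the-shelf word-embedding model; this reading is an assumption based on the abstract's reference to natural language processing models.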