Scalable and interpretable kernel methods based on random Fourier features
Year of defense: | 2023 |
---|---|
Main author: | |
Advisor: | |
Examining committee: | |
Document type: | Master's dissertation |
Access type: | Open access |
Language: | eng |
Defending institution: | Universidade Federal de São Carlos, Câmpus São Carlos |
Graduate program: | Programa Interinstitucional de Pós-Graduação em Estatística - PIPGEs |
Department: | Not informed by the institution |
Country: | Not informed by the institution |
Keywords in Portuguese: | |
Keywords in English: | |
CNPq knowledge area: | |
Access link: | https://repositorio.ufscar.br/handle/20.500.14289/17579 |
Abstract: | Kernel methods are a class of statistical machine learning models based on positive semidefinite kernels, which serve as a measure of similarity between data features. Examples of kernel methods include kernel ridge regression, support vector machines, and smoothing splines. Despite their widespread use, kernel methods face two main challenges. First, because they operate on all pairs of observations, they require large amounts of memory and computation, making them unsuitable for large datasets. This issue can be addressed by approximating the kernel function via random Fourier features or by using preconditioners. Second, most commonly used kernels treat all features as equally relevant, regardless of their actual impact on the prediction. This reduces interpretability, since the influence of irrelevant features is not mitigated. In this work, we extend the random Fourier features framework to Automatic Relevance Determination (ARD) kernels and propose a new kernel method that integrates the optimization of kernel parameters into training. These kernel parameters reduce the effect of irrelevant features and can be used for post-processing variable selection. The proposed method is evaluated on several datasets and compared with conventional machine learning algorithms. |
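To illustrate the kernel approximation the abstract refers to, the sketch below shows random Fourier features for an ARD Gaussian (RBF) kernel, where each feature dimension has its own lengthscale. This is a minimal illustration of the general technique, not the dissertation's method: the function name `rff_features`, the lengthscale values, and the feature count are all assumptions chosen for the example.

```python
import numpy as np

def rff_features(X, lengthscales, n_features=500, seed=0):
    """Random Fourier features approximating an ARD Gaussian (RBF) kernel.

    The kernel k(x, z) = exp(-0.5 * sum_d ((x_d - z_d) / l_d)^2) is
    approximated by the inner product phi(x) @ phi(z), with random
    frequencies drawn from the kernel's spectral density (Bochner's theorem).
    Small lengthscales l_d emphasize a feature; large ones suppress it,
    which is the ARD mechanism the abstract describes.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral density of the ARD RBF kernel: Gaussian with per-dimension
    # standard deviation 1 / l_d, so each column of W is scaled accordingly.
    W = rng.standard_normal((d, n_features)) / np.asarray(lengthscales)[:, None]
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the Monte Carlo approximation against the exact ARD RBF kernel.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
ls = np.array([1.0, 2.0, 0.5])              # hypothetical per-feature lengthscales
Phi = rff_features(X, ls, n_features=5000)
K_approx = Phi @ Phi.T                      # n x n approximate Gram matrix
diff = (X[:, None, :] - X[None, :, :]) / ls
K_exact = np.exp(-0.5 * np.sum(diff**2, axis=-1))
err = np.max(np.abs(K_approx - K_exact))    # shrinks as n_features grows
```

The practical payoff is the one the abstract mentions: downstream models (e.g. ridge regression) can be fit on the explicit feature matrix `Phi` in time linear in the number of observations, instead of forming and inverting the full n-by-n Gram matrix.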