Bibliographic details
Year of defense: 2020
Main author: Vasconcelos, Thiago de Paula
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis (dissertação)
Access type: Open access
Language: English
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/54240
Abstract:
Bayesian Optimization (BO) is a framework for black-box optimization that is especially suitable for expensive cost functions. Among the components of a BO algorithm, the acquisition function is of fundamental importance, since it guides the optimization by translating the uncertainty of the regression model into a utility measure for each candidate point. Consequently, the selection and design of acquisition functions are among the most popular research topics in BO. Since no single acquisition function has been shown to perform best on all tasks, a well-established approach consists of alternating between different acquisition functions along the iterations of a BO run. Within this approach, the GP-Hedge algorithm is a widely used option, given its simplicity and good performance. Despite its success in various applications, GP-Hedge has the undesirable characteristic of relying on all past performance measures of each acquisition function when selecting the next one to be used. Thus, good or bad values obtained in an early iteration may affect the choice of the acquisition function for the remainder of the run. This can induce a dominant behavior of one acquisition function and degrade the final performance of the method. To overcome this limitation, this work proposes a variant of GP-Hedge, named Normalized Portfolio Allocation Strategy BO (No-PASt-BO), that reduces the influence of distant past evaluations. Moreover, the method includes a built-in normalization that prevents the functions in the portfolio from having similar probabilities, thus improving exploration. However, this improvement comes at the cost of two additional hyperparameters. To improve on that method, a second one is proposed that samples from the posterior of these portfolio hyperparameters during the optimization via Thompson sampling. By carefully choosing the corresponding priors, the posteriors can be updated analytically at each iteration. The latter approach, named Self-Tuning Portfolio-based Bayesian Optimization (SeTuP-BO), maintains the advantages of the original No-PASt-BO method without requiring manual hyperparameter tuning. We evaluated both methods and their competitors across several tasks, achieving promising results that indicate the proposed methods are competitive with the available alternatives.
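The portfolio-selection idea described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis' implementation: `hedge_probs` shows the GP-Hedge-style softmax over the *full* cumulative gains of each acquisition function (so early rewards count forever), while `discounted_probs` shows a No-PASt-BO-style variant that exponentially down-weights old rewards and normalizes the gains before the softmax. The `discount` factor, the min-max normalization, and the reward layout are illustrative assumptions; the exact formulas are in the thesis.

```python
import math

def hedge_probs(gains, eta=1.0):
    """GP-Hedge-style selection probabilities: softmax over the full
    cumulative gain of each acquisition function (all past rewards
    count equally, so an early lucky reward can dominate forever)."""
    cumulative = [sum(g) for g in gains]
    m = max(cumulative)  # subtract the max for numerical stability
    weights = [math.exp(eta * (c - m)) for c in cumulative]
    total = sum(weights)
    return [w / total for w in weights]

def discounted_probs(gains, eta=1.0, discount=0.7):
    """No-PASt-BO-style variant (sketch): exponentially discount old
    rewards, then min-max normalize the gains to a common range before
    the softmax, so no function's probability collapses or explodes."""
    cumulative = []
    for g in gains:
        acc = 0.0
        for r in g:                  # rewards ordered oldest first
            acc = discount * acc + r  # old rewards fade geometrically
        cumulative.append(acc)
    lo, hi = min(cumulative), max(cumulative)
    span = (hi - lo) or 1.0          # guard against identical gains
    norm = [(c - lo) / span for c in cumulative]
    weights = [math.exp(eta * n) for n in norm]
    total = sum(weights)
    return [w / total for w in weights]
```

With two acquisition functions where the first received one large reward at the very first iteration, the hedge probabilities stay heavily skewed toward it, while the discounted, normalized variant keeps both functions plausibly selectable.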
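The Thompson-sampling idea behind SeTuP-BO can also be sketched. The code below is a generic illustration of Thompson sampling with analytic conjugate updates, not the thesis' exact model: each arm (here standing in for a portfolio choice) keeps a Normal posterior over its mean reward, updated in closed form via Normal-Normal conjugacy under an assumed known observation noise. The prior and noise variances are illustrative assumptions.

```python
import random

class ThompsonPortfolio:
    """Sketch of Thompson sampling over a portfolio of arms with
    analytic posterior updates (Normal prior, known Gaussian noise)."""

    def __init__(self, n_arms, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
        self.mean = [prior_mean] * n_arms   # posterior means
        self.var = [prior_var] * n_arms     # posterior variances
        self.noise_var = noise_var          # assumed observation noise

    def select(self, rng=random):
        # Draw one sample of the mean reward per arm, pick the argmax.
        draws = [rng.gauss(m, v ** 0.5) for m, v in zip(self.mean, self.var)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, arm, reward):
        # Closed-form Normal-Normal conjugate update: precisions add,
        # and the new mean is the precision-weighted average.
        precision = 1.0 / self.var[arm] + 1.0 / self.noise_var
        new_var = 1.0 / precision
        new_mean = new_var * (self.mean[arm] / self.var[arm]
                              + reward / self.noise_var)
        self.mean[arm], self.var[arm] = new_mean, new_var
```

Because the updates are analytic, each iteration costs a constant amount of extra work, which matches the abstract's point that carefully chosen priors let the posteriors be updated at every iteration without manual hyperparameter tuning.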