Application of reinforcement learning techniques to the solution of the stochastic multiperiod cutting stock problem

Bibliographic details
Year of defense: 2021
Main author: Murta, Arthur Hermont Fonseca
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Dissertation
Access type: Open access
Language: Portuguese (por)
Defense institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: http://www.repositorio.ufc.br/handle/riufc/70010
Abstract: The cutting stock problem is a combinatorial optimization problem that consists of cutting larger objects into smaller pieces to meet a given demand while minimizing material losses. This dissertation addresses a stochastic multiperiod variant in which the problem is solved over multiple time periods and future demand is not known exactly, being modeled instead as a random variable. This variant corresponds more closely to the reality of companies, which usually do not know in advance the demand of each period. First, the stochastic multiperiod cutting stock problem is modeled as a Markov decision process. A solution to the problem corresponds to an optimal decision policy, which specifies the action to be taken in each state and period so as to minimize the expected total cost. Since exact algorithms for computing an optimal policy require a large computational effort as the problem size grows, reinforcement learning techniques are employed through an approximate policy iteration algorithm that uses a Bayesian filter. Computational experiments were performed to illustrate the application of the approach to real data on cutting steel bars in the construction industry. The results indicate that the policy obtained by the proposed approach performed up to fifty times better than a myopic policy, which does not take into account the future impact of decisions made in the present.
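To illustrate the general setup described in the abstract, the sketch below is a deliberately simplified toy example, not the dissertation's model, data, or algorithm: a single-piece-type multiperiod cutting problem is cast as a Markov decision process and solved with a basic simulation-based approximate policy iteration loop (truncated Monte Carlo policy evaluation followed by greedy one-step lookahead improvement), compared against a myopic policy. The bar length, piece length, costs, demand distribution, and all names are hypothetical assumptions; in particular, the Bayesian filter used in the dissertation is not reproduced here.

# Toy sketch (hypothetical parameters): multiperiod cutting as an MDP plus a
# simple approximate policy iteration loop. Not the dissertation's method.
import random

BAR_LEN, PIECE_LEN = 5, 2              # each stock bar is cut into 2 pieces (waste 1)
PIECES_PER_BAR = BAR_LEN // PIECE_LEN
MAX_INV, MAX_BARS = 6, 4               # cap on stored leftover pieces / bars cut per period
BAR_COST, HOLD_COST, SHORT_COST = 3.0, 0.2, 10.0
DEMANDS = [0, 1, 2, 3, 4]              # equally likely piece demand per period (assumed)
GAMMA = 0.95

def step(inv, bars, demand):
    """One period: cut `bars` stock bars, serve `demand`, return (cost, next inventory)."""
    available = inv + bars * PIECES_PER_BAR
    shortage = max(0, demand - available)
    next_inv = min(MAX_INV, max(0, available - demand))
    cost = bars * BAR_COST + HOLD_COST * next_inv + SHORT_COST * shortage
    return cost, next_inv

def expected_q(inv, bars, value):
    """Expected one-period cost plus discounted value of the next state."""
    total = 0.0
    for d in DEMANDS:
        cost, nxt = step(inv, bars, d)
        total += cost + GAMMA * value[nxt]
    return total / len(DEMANDS)

def greedy_policy(value):
    """One-step lookahead policy with respect to an (approximate) value function."""
    return {inv: min(range(MAX_BARS + 1), key=lambda b: expected_q(inv, b, value))
            for inv in range(MAX_INV + 1)}

def evaluate(policy, episodes=200, horizon=40):
    """Truncated Monte Carlo policy evaluation: average discounted cost per start state."""
    value = {}
    for start in range(MAX_INV + 1):
        total = 0.0
        for _ in range(episodes):
            inv, ret, disc = start, 0.0, 1.0
            for _ in range(horizon):
                cost, inv = step(inv, policy[inv], random.choice(DEMANDS))
                ret += disc * cost
                disc *= GAMMA
            total += ret
        value[start] = total / episodes
    return value

# Myopic baseline: greedy with a zero value function, i.e. it ignores the future.
myopic = greedy_policy({inv: 0.0 for inv in range(MAX_INV + 1)})

# Approximate policy iteration: alternate simulation-based evaluation and improvement.
policy = dict(myopic)
for _ in range(3):
    policy = greedy_policy(evaluate(policy))

print("myopic policy  :", myopic)
print("improved policy:", policy)

In this sketch the myopic baseline is simply a greedy step with respect to a zero value function, while each iteration of the loop re-estimates the current policy's value by simulation and improves it by one-step lookahead over the sampled demand distribution, mirroring at a very small scale the myopic-versus-lookahead comparison discussed in the abstract.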