Método iterativo para constância de cor em vídeos utilizando cores identificadas nas cenas [Iterative method for color constancy in videos using colors identified in the scenes]

Bibliographic details
Year of defense: 2016
Main author: Simão, Josemar
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Doctoral thesis
Access type: Open access
Language: Portuguese (por)
Defense institution: Universidade Federal do Espírito Santo (UFES), Centro Tecnológico, Brazil
Program: Programa de Pós-Graduação em Engenharia Elétrica (Doctorate in Electrical Engineering)
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Access link: http://repositorio.ufes.br/handle/10/9718
Abstract: Color constancy algorithms try to correct the colors of images captured under unknown lighting and present them as if they had been captured under a known lighting. When working with image sequences, there are lighting variations between the images, which hinders the application of various computer vision algorithms. Correcting the images and presenting them as if they had been taken under the same lighting enables the application of many of these algorithms. In this work, the colors of the images in a sequence are corrected individually so that they share the lighting of a reference frame, in general the first frame. To do that, a general linear transformation with nine parameters is used, called here the color mapping matrix. An optimization process, for example the least squares method, is necessary to obtain the color mapping matrix when the number of reference colors is higher than the number of color channels; this is achieved through the calculation of the pseudo-inverse matrix. An iterative process can use the colors of one image to correct the colors of the next, using regions common to both images. A set of colors from the former image composes the reference sample, and the colors of the next image, taken from the regions corresponding to those of the reference image, compose the captured sample. These samples are used to obtain the color mapping matrix. As the visual field varies throughout the sequence, the regions common to the images must be adjusted at each iteration. Three color correction methods based on this approach are presented in this thesis. The first, reference method is called the Method using Reference Samples from the Previous Image (MSPI); it can only be applied to image sequences with lighting variation and no relative movement between the camera and the scene, i.e., where there is no need to adjust the regions. A temporal filter that uses a set of previous images to produce more stable reference samples is applied to attenuate the effects of noise on the images, giving rise to the Method of the Temporal Filter for Reference Samples (MTFS). This method, coupled with a scheme for tracking the regions with identified colors, allows color correction in sequences of images with relative movement between the camera and the scene. However, the method still presents a high sensitivity to image noise, leading to degradation of the corrected images. Using the reference samples and the captured samples of a set of previous iterations together with a temporal filter, the third method, called the Method of the Temporal Filter for Transformations (MTFT), presents results that allow its use in various areas, including mobile robotics.
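
The abstract outlines the core numeric step: estimating a nine-parameter (3 x 3) color mapping matrix by least squares via the pseudo-inverse, and applying it iteratively along the sequence. The sketch below illustrates that idea under assumptions not stated in the abstract: frames are floating-point RGB NumPy arrays in [0, 1], the tracked common regions are supplied as pixel index arrays, and the temporal filter is approximated by simple exponential smoothing of successive mapping matrices. The function names (estimate_color_mapping, iterative_correction) are illustrative and not taken from the thesis.

    import numpy as np

    def estimate_color_mapping(reference_sample, captured_sample):
        # Least-squares fit of a 3x3 color mapping matrix M so that M applied
        # to the captured colors approximates the reference colors. With more
        # color samples than channels, the solution uses the pseudo-inverse,
        # as described in the abstract.
        R = np.asarray(reference_sample, dtype=float)   # (n, 3) reference colors
        C = np.asarray(captured_sample, dtype=float)    # (n, 3) captured colors
        return (np.linalg.pinv(C) @ R).T                # C @ M.T ~= R

    def correct_image(image, M):
        # Apply the color mapping matrix to every pixel of an (h, w, 3) image.
        h, w, _ = image.shape
        mapped = image.reshape(-1, 3) @ M.T
        return np.clip(mapped, 0.0, 1.0).reshape(h, w, 3)

    def iterative_correction(frames, common_regions, alpha=0.8):
        # Correct each frame toward the lighting of the first (reference) frame.
        # common_regions[t] holds pixel indices shared by frames t-1 and t
        # (the output of a tracking step, assumed to be available).
        # Exponential smoothing of the mapping matrices stands in for the
        # temporal filter; the exact filter used in the thesis is not given here.
        corrected = [frames[0]]
        M_filtered = np.eye(3)
        for t in range(1, len(frames)):
            idx = common_regions[t]
            reference_sample = corrected[t - 1][idx]   # colors already corrected
            captured_sample = frames[t][idx]           # same regions, new frame
            M = estimate_color_mapping(reference_sample, captured_sample)
            M_filtered = alpha * M_filtered + (1.0 - alpha) * M
            corrected.append(correct_image(frames[t], M_filtered))
        return corrected

Smoothing the transformation matrices, rather than the color samples themselves, loosely mirrors the shift from MTFS to MTFT described in the abstract: filtering the transformations reduces sensitivity to image noise without requiring larger reference samples.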