Sistema de visión computacional estereoscópico aplicado a un robot cilíndrico accionado neumáticamente

Bibliographic details
Year of defense: 2017
Main author: Ramirez Montecinos, Daniela Elisa
Advisor: Lorini, Flavio Jose
Defense committee: Not informed by the institution
Document type: Master's dissertation
Access type: Open access
Language: Spanish (spa)
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Keywords in English:
Keywords in Spanish:
Access link: http://hdl.handle.net/10183/165141
Abstract: In industry, robots are an important part of the technological resources available for manipulation tasks in manufacturing, assembly, the transportation of hazardous waste, and a variety of other applications. Specialized computer vision systems have entered the market to solve problems that other technologies have been unable to address. This document analyzes a stereo vision system used to provide the center of mass of an object in three dimensions. This kind of application uses two or more cameras aligned along the same axis, which makes it possible to measure the depth of a point in space. The stereoscopic system described here measures the position of an object by combining 2D recognition, in which the coordinates of the center of mass are calculated using image moments, with the disparity found by comparing two images, one from the right camera and one from the left. This turns the system into a 3D viewer that emulates the human eyes, which can distinguish depth with good precision.

The proposed stereo vision system is integrated into a 5-degree-of-freedom pneumatic robot, which can be programmed with the GRAFCET method using commercial software. The cameras are mounted in the lateral plane of the robot so that every piece in the robot's work area can be observed.

For the implementation, an algorithm for recognition and position measurement is developed using open-source libraries in C++, ensuring that the system remains as open as possible once it is integrated with the robot. The work is validated by taking samples of the objects to be manipulated and generating robot trajectories to determine whether the end effector can reach and manipulate each object. The results show that it is possible to manipulate pieces in a visually crowded space with acceptable precision. However, the precision reached does not allow the robot to perform tasks that require higher accuracy, such as the assembly of small parts or welding applications.
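The dissertation's own C++ code is not reproduced in this record, but a minimal sketch of the measurement chain the abstract describes (moments-based centroid in the image, disparity at that point, depth from the pinhole relation Z = f·B/d) could look as follows. It assumes OpenCV, a rectified stereo pair named left.png and right.png, a simple threshold segmentation, and placeholder calibration values f and B; the segmentation step, the stereo matcher, and all numeric parameters are illustrative assumptions, not values taken from the thesis.

// Sketch only: 3D center-of-mass estimation from a rectified stereo pair.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Assumed file names for the rectified left/right images.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // 1) 2D recognition: segment the object (placeholder threshold) and use
    //    image moments to obtain the center of mass in pixel coordinates.
    cv::Mat mask;
    cv::threshold(left, mask, 128, 255, cv::THRESH_BINARY);
    cv::Moments m = cv::moments(mask, true);
    if (m.m00 == 0) return 1;              // nothing segmented
    double u = m.m10 / m.m00;              // centroid column
    double v = m.m01 / m.m00;              // centroid row

    // 2) Disparity: block matching between the left and right images.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disp16;
    bm->compute(left, right, disp16);      // CV_16S, 4 fractional bits
    double d = disp16.at<short>(cv::Point(cvRound(u), cvRound(v))) / 16.0;
    if (d <= 0) return 1;                  // no valid match at the centroid

    // 3) Depth from disparity (pinhole model): Z = f * B / d.
    //    f (focal length in pixels) and B (baseline in mm) are assumed
    //    calibration values, not the thesis camera parameters.
    const double f = 700.0, B = 60.0;
    const double cx = left.cols / 2.0, cy = left.rows / 2.0;
    double Z = f * B / d;
    double X = (u - cx) * Z / f;
    double Y = (v - cy) * Z / f;

    std::cout << "Center of mass (mm): " << X << ", " << Y << ", " << Z << std::endl;
    return 0;
}

The resulting (X, Y, Z) point, expressed in the camera frame, would still have to be transformed into the robot's base frame before trajectories can be generated for the end effector.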