A deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation

Bibliographic details
Defense year: 2020
Main author: Ribeiro, Eduardo Godinho
Advisor: Not informed by the institution
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: English
Defense institution: Biblioteca Digital de Teses e Dissertações da USP
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese:
Access link: https://www.teses.usp.br/teses/disponiveis/18/18153/tde-25092020-134758/
Abstract: The development of the robotics and artificial intelligence fields has not yet allowed robots to execute, with dexterity, simple actions performed by humans. One of these is the grasping of objects by robotic manipulators. Aiming to explore the use of deep learning algorithms, specifically convolutional neural networks, for the robotic grasping problem, this work addresses the visual perception phase involved in the task: the processing of visual data to obtain the location of the object to be grasped, its pose, and the points at which the robot's grippers must make contact to ensure a stable grasp. For this, the Cornell Grasping dataset is used to train a convolutional neural network capable of handling these three stages simultaneously. Given an image of the robot's workspace containing an object, the network predicts a grasp rectangle that represents the position, orientation, and opening of the robot's parallel grippers at the instant before closing. In addition to this network, which processes images in real time, a second network is designed to deal with situations in which the object moves in the environment. This second convolutional network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view. It predicts the proportional values of the linear and angular velocities that the camera must have so that the object always appears in the image processed by the grasp network. The dataset used for training was generated, with reduced human supervision, by a Kinova Gen3 robotic manipulator with seven degrees of freedom. The robot is also used to evaluate real-time applicability and to obtain practical results with the designed algorithms. In addition, the offline results obtained on validation sets are analyzed and discussed with respect to efficiency and processing speed. The grasping results exceed 90% accuracy with state-of-the-art prediction speed. Regarding visual servoing, one of the designed models achieves millimeter positioning accuracy for a first-seen object. In a small evaluation, the complete system performed successful tracking and grasping of first-seen dynamic objects in 85% of attempts. This work thus presents a new system for autonomous robotic manipulation that generalizes to different objects and runs at high processing speed, enabling its application in real-time, real-world robotic systems.
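
To make the grasp-rectangle output concrete, the following is a minimal Python sketch assuming the five-parameter encoding (x, y, theta, w, h) commonly used with the Cornell Grasping dataset: (x, y) is the gripper center in image coordinates, theta the in-plane orientation, w the gripper opening, and h the plate size. The function name and the sample values are illustrative, not taken from the thesis.

    import numpy as np

    def grasp_rectangle_corners(x, y, theta, w, h):
        """Return the four corners of a grasp rectangle as a (4, 2) array.

        Illustrative sketch of the standard Cornell-style encoding, not the
        thesis implementation.
        """
        c, s = np.cos(theta), np.sin(theta)
        # Half-extents along the gripper-opening and gripper-plate axes.
        dx, dy = w / 2.0, h / 2.0
        local = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
        rot = np.array([[c, -s], [s, c]])
        # Rotate the local corners and translate them to the image-frame center.
        return local @ rot.T + np.array([x, y])

    # Example with hypothetical values (pixels and radians):
    corners = grasp_rectangle_corners(x=120.0, y=85.0, theta=np.pi / 6, w=60.0, h=20.0)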
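Similarly, a hedged sketch of the visual-servoing loop described in the abstract: a network predicts proportional linear and angular camera velocities from the current frame, which are scaled by a gain and sent to the robot. The names model, capture_image, and send_velocity_command are hypothetical placeholders for the trained CNN and the camera/robot interfaces, which the abstract does not specify.

    import numpy as np

    GAIN = 0.5  # proportional gain applied to the predicted velocities (assumed value)

    def servo_step(model, capture_image, send_velocity_command):
        """One control-loop iteration that keeps the object in the camera's view."""
        image = capture_image()                        # current camera frame
        v = GAIN * np.asarray(model.predict(image))    # (vx, vy, vz, wx, wy, wz)
        send_velocity_command(linear=v[:3], angular=v[3:])  # hypothetical robot API

Run repeatedly, such a loop keeps the moving object inside the frames consumed by the grasp network, which is how the two components described in the abstract can operate together in real time.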