Red neuronal convolucional para discriminar herramientas en robótica asistencial

Convolutional neural network for discrimination of tools for assistance robotics

Keywords: machine learning, deep learning, pattern recognition, CNN, assistive robotics, machine vision

Abstract

This article presents the training of a Convolutional Neural Network (CNN) for the discrimination of tools commonly used in mechanical, electrical, carpentry and similar tasks. The training targets are nippers, screwdrivers, scissors and pliers; once the network can identify these classes, a robotic arm gains the ability to recognise a desired tool among them for possible delivery to a user. The CNN architecture employed achieves a 96% success rate in identifying the trained tools.
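The abstract describes a CNN that classifies four tool classes. As a rough illustration of the stages such a network chains together (convolution, ReLU, max pooling, a dense layer, and a softmax over the classes), the following is a toy NumPy forward pass. The layer sizes, class order, and random weights are illustrative assumptions only, not the architecture used in the article.

```python
import numpy as np

# The four tool classes named in the abstract; the order here is an assumption.
CLASSES = ["nippers", "screwdrivers", "scissors", "pliers"]

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that downsamples each spatial dimension."""
    H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def softmax(z):
    """Numerically stable softmax over the class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 28x28 grayscale "tool image" and randomly initialised parameters.
image = rng.random((28, 28))
kernel = rng.standard_normal((3, 3)) * 0.1

# conv -> ReLU -> pool: 28x28 -> 26x26 -> 13x13 feature map.
features = max_pool(np.maximum(conv2d(image, kernel), 0.0))

# Dense layer mapping the flattened features to the four class scores.
W_fc = rng.standard_normal((len(CLASSES), features.size)) * 0.01
probs = softmax(W_fc @ features.ravel())

print("predicted:", CLASSES[int(np.argmax(probs))])
```

With random weights the prediction is meaningless; in practice the kernel and dense weights would be learned from labelled tool images, which is the training the article reports.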





How to cite
Jiménez Moreno, R., Avilés, O., & Ovalle, D. M. (2018). Red neuronal convolucional para discriminar herramientas en robótica asistencial. Visión electrónica, 12(2), 208-214.
Published: 2018-10-27
Visión Investigadora
