TY - GEN
T1 - Human identification for human-robot interactions
AU - Burns, Brian
AU - Samanta, Biswanath
N1 - Publisher Copyright:
Copyright © 2014 by ASME.
PY - 2014
Y1 - 2014
AB - In co-robotics applications, robots must identify human partners and recognize their status in dynamic interactions for enhanced acceptance and effectiveness as socially interactive agents. Using data from depth cameras, people can be identified from their skeletal information. This paper presents the implementation of a human identification algorithm using a depth camera (Carmine from PrimeSense) and an open-source middleware (NITE from OpenNI) with the Java-based Processing language and an Arduino microcontroller. This implementation and communication set a framework for future human-robot interaction applications. Based on the movements of the individual in the depth sensor's field of view, the program can be set to track a human skeleton or the closest pixel in the image. Joint locations in the tracked human can be isolated for specific use by the program; joints include the head, torso, shoulders, elbows, hands, knees, and feet. Logic and calibration techniques were used to create systems such as a face-tracking pan-tilt servomotor mechanism. The control system presented here lays the groundwork for future implementation in student-built animatronic figures and mobile robot platforms such as the TurtleBot.
UR - http://www.scopus.com/inward/record.url?scp=84926442178&partnerID=8YFLogxK
U2 - 10.1115/IMECE2014-38496
DO - 10.1115/IMECE2014-38496
M3 - Conference article
AN - SCOPUS:84926442178
T3 - ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)
BT - Dynamics, Vibration, and Control
PB - American Society of Mechanical Engineers (ASME)
T2 - ASME 2014 International Mechanical Engineering Congress and Exposition, IMECE 2014
Y2 - 14 November 2014 through 20 November 2014
ER -