Learning Robotic Control from Humans

In cooperation with the NASA Robonaut team, we have begun investigating how to learn motion behaviors from human demonstration. The human motion is translated into the robot's own terms by recording Robonaut's sensory-motor responses during a teleoperated trial. Teleoperation is particularly well suited to this because Robonaut is designed to be controlled through what is essentially a virtual-reality system: the teleoperator wears a headset that shows him the view from Robonaut's cameras, and wears special gloves that make the robot's arms and hands mimic his own motions.

Thus, the data recorded from a session is the robot's interpretation of a human behavior.
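As a minimal sketch of what such a recording loop might look like, the Python below assumes a hypothetical robot interface with read_joint_angles, read_joint_torques, and grab_camera_frame methods; Robonaut's actual logging system is not described here.

    import time

    def record_trial(robot, duration_s=10.0, rate_hz=50.0):
        """Log the robot's own sensor readings while a teleoperator
        drives it, so the demonstration is stored in the robot's
        sensory-motor terms rather than the human's."""
        samples = []
        period = 1.0 / rate_hz
        t_end = time.time() + duration_s
        while time.time() < t_end:
            samples.append({
                "t": time.time(),
                "joint_angles": robot.read_joint_angles(),    # arm/hand pose
                "joint_torques": robot.read_joint_torques(),  # effort sensing
                "camera_frame": robot.grab_camera_frame(),    # head-camera view
            })
            time.sleep(period)
        return samples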

It has proven possible, with quite a bit of human intervention in the data-segmentation process, to make Robonaut repeat the teleoperator's motion even when parameters of the motion are changed. For example, after being shown how to reach for an object, Robonaut can reach for that object even when it is placed somewhere the robot was never specifically shown.
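The project does not specify how this generalization is carried out; as one simple illustration of the idea, the sketch below warps a recorded end-effector path so that it ends at a new object position that was never demonstrated. The function and variable names are hypothetical.

    import numpy as np

    def retarget_reach(demo_path, new_goal):
        """Adapt a demonstrated reach (an N x 3 array of end-effector
        positions) to a new goal: the offset between the demonstrated
        and new goals is blended in gradually, so the motion starts
        where the demo started and ends at the new goal."""
        demo_path = np.asarray(demo_path, dtype=float)
        offset = np.asarray(new_goal, dtype=float) - demo_path[-1]
        blend = np.linspace(0.0, 1.0, len(demo_path))[:, None]  # 0 at start, 1 at goal
        return demo_path + blend * offset

    # Example: a straight-line demo reach, replayed toward a shifted target.
    demo = np.linspace([0.0, 0.0, 0.0], [0.5, 0.2, 0.3], 20)
    adapted = retarget_reach(demo, new_goal=[0.4, 0.4, 0.25])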

Current research in this area includes attempting to implement some of these abilities on ISAC, increasing the number of parameters that can be changed (such as allowing the object to be grasped from above instead of from one side), and decreasing the amount of human input needed in the data-segmentation process. The current goal is for the robot to be shown a task and then immediately repeat it even when several parameters are changed.
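As one illustration of what automating the segmentation step might involve, the sketch below cuts a recorded trajectory wherever the hand's speed drops below a threshold, a common cue that one sub-movement has ended. This heuristic is an assumption for illustration, not the method used in the project.

    import numpy as np

    def segment_demo(positions, dt=0.02, speed_thresh=0.05):
        """Split a recorded motion (N x 3 end-effector positions) into
        segments at moments when the hand nearly stops, separating the
        demonstration into candidate sub-movements."""
        positions = np.asarray(positions, dtype=float)
        speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
        moving = speed > speed_thresh
        # Cut wherever the motion switches between moving and paused.
        boundaries = np.flatnonzero(np.diff(moving.astype(int))) + 1
        return np.split(positions, boundaries)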


Robonaut*


Teleoperation*

*images courtesy of NASA Johnson Space Center


Link to NASA Johnson Space Center Robonaut Page