Sensory Motor Coordination (SMC) Application
One primary focus of our SMC application is extracting salient features with a vision system mounted on a mobile ATRV-Jr robot. We plan to implement this capability this summer in three steps:
Tele-operation. The robot is guided along a path in an environment several times. All sensory data, along with the motor commands, are saved.
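The tele-operation step amounts to timestamped logging of sensor readings paired with motor commands, one log per trial. A minimal sketch of such a logger (the class name, record layout, and JSON storage format are assumptions, not the project's actual format):

```python
import json
import time


class TrialLogger:
    """Collect timestamped sensor readings and motor commands for one trial.

    Hypothetical helper: the real system's sensor and command
    representations may differ.
    """

    def __init__(self):
        self.records = []

    def log(self, sensors, motor_cmd, t=None):
        # Pair each sensor snapshot with the motor command issued at that time.
        self.records.append({
            "t": time.time() if t is None else t,
            "sensors": sensors,
            "motor": motor_cmd,
        })

    def save(self, path):
        # Persist the trial for off-line association.
        with open(path, "w") as f:
            json.dump(self.records, f)
```

For example, `logger.log({"cam": frame_features}, {"v": 0.5, "w": 0.0})` would record one synchronized sample; one saved file per guided traversal yields the per-trial data the next step operates on.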
Off-line association. First, the motor command sequences are partitioned into episodes according to variations in the commands. The episodes from different trials are split and merged so that each trial has the same number of episodes and so that corresponding episodes across trials represent the same phase of the task. Each episode is then time-normalized (i.e., resampled to a common signal length) and averaged across trials to form an exemplar sequence. Two classes of visual features serve as candidates for salient features: attention focuses and color histograms. Attention focuses are clustered, and the consistent clusters are kept as salient features. A candidate feature is considered salient if the difference between its original sequences and the exemplar sequence is below a pre-defined threshold.
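The episode processing described above can be sketched in a few functions. This is an illustrative reconstruction, not the project's code: the change-detection criterion, the linear-interpolation resampling, and all thresholds are assumptions.

```python
def partition_episodes(commands, change_thresh=0.5):
    """Split a 1-D motor command sequence into episodes wherever the
    command jumps by more than change_thresh (hypothetical criterion)."""
    episodes, current = [], [commands[0]]
    for prev, cur in zip(commands, commands[1:]):
        if abs(cur - prev) > change_thresh:
            episodes.append(current)
            current = []
        current.append(cur)
    episodes.append(current)
    return episodes


def time_normalize(seq, length):
    """Resample seq to a fixed length by linear interpolation."""
    if len(seq) == 1:
        return [float(seq[0])] * length
    out = []
    for i in range(length):
        x = i * (len(seq) - 1) / (length - 1)  # fractional source index
        lo = int(x)
        hi = min(lo + 1, len(seq) - 1)
        frac = x - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out


def exemplar(trials, length=10):
    """Average time-normalized episode signals across trials."""
    normed = [time_normalize(t, length) for t in trials]
    return [sum(vals) / len(vals) for vals in zip(*normed)]


def is_salient(feature_seq, exemplar_seq, thresh=0.2):
    """A feature is salient if its mean deviation from the exemplar
    sequence falls below a pre-defined threshold (value assumed)."""
    length = len(exemplar_seq)
    normed = time_normalize(feature_seq, length)
    err = sum(abs(a - b) for a, b in zip(normed, exemplar_seq)) / length
    return err < thresh
```

With these pieces, each trial's command sequence is partitioned, the aligned episodes are normalized to a common length and averaged into an exemplar, and each candidate feature sequence is tested against that exemplar for salience.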
Evaluation. A finite state machine will be constructed from the salient features and motor episodes. The robot is then asked to navigate the environment it has learned, and the performance of salient-feature-based navigation is evaluated.
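One way such a finite state machine could pair salient features with motor episodes is sketched below; the state layout and matching rule are assumptions for illustration, not the planned design.

```python
class NavFSM:
    """Minimal navigation FSM: each state pairs an expected salient
    feature with the motor episode to execute until the next salient
    feature is observed. Hypothetical structure for illustration."""

    def __init__(self, salient_features, motor_episodes):
        assert len(salient_features) == len(motor_episodes)
        self.features = salient_features
        self.episodes = motor_episodes
        self.state = 0

    def step(self, observed_feature):
        """Advance to the next state when its expected salient feature is
        observed; return the motor episode for the current state."""
        nxt = self.state + 1
        if nxt < len(self.features) and observed_feature == self.features[nxt]:
            self.state = nxt
        return self.episodes[self.state]

    def done(self):
        # Final state reached: the last salient feature has been seen.
        return self.state == len(self.features) - 1
```

For example, `NavFSM(["door", "corner", "goal"], ["forward", "turn_left", "stop"])` keeps emitting "forward" until the "corner" feature is matched, then switches to the turn episode, and so on to the goal.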
Experiments are planned from this summer through next fall. During navigation, the robot will compare the current features with the salient feature sequence and decide what to do next until it finally reaches the target position.
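For the color-histogram class of features, the runtime comparison could be as simple as a histogram distance against a threshold. A sketch under assumed names and an assumed threshold (the actual distance measure is not specified in the plan):

```python
def hist_distance(h1, h2):
    """L1 distance between two normalized color histograms
    (distance measure assumed for illustration)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))


def matches_salient(current_hist, salient_hist, thresh=0.3):
    """Decide whether the current view matches a stored salient
    feature; threshold value is a placeholder."""
    return hist_distance(current_hist, salient_hist) < thresh
```

A match on the next salient feature in the sequence would then trigger the transition to the next motor episode.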