Acquisition of Autonomous Behaviors by Robotic Assistants

Statement of Work

This research is being conducted by a team of researchers at Vanderbilt University under the direction of Prof. Kazuhiko Kawamura. Co-PIs are Professors Alan Peters, Nilanjan Sarkar and Bobby Bodenheimer, and Research Associate Dr. Juyi Park.

Tasks will be conducted at the Cognitive Robotics Lab at Vanderbilt and the Robonaut Lab at NASA/JSC. Other member universities in this team consortium include the University of Massachusetts and the University of Southern California.

Background

The Sensory Ego-Sphere (SES) is a computational data structure with four essential capabilities: declarative short-term memory (STM), spatio-temporal coincidence detection of sensory-motor events, egocentric navigation, and homeomorphic data sharing. It receives data from a number of sensory processing modules that operate independently and in parallel. It also receives task-dependent control data from long-term memory (LTM). The SES has been implemented previously on both Robonaut and ISAC as an STM. The implementation on ISAC also includes data sharing.
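
The fragment below sketches one way such a structure could be organized; the node spacing, field names, and decay rule are illustrative assumptions, not the actual Robonaut or ISAC implementation. It shows independent sensory modules posting events to egocentrically indexed nodes and the structure behaving as a short-term memory.

    import time

    class SESNode:
        """One vertex of the ego-sphere; holds recent sensory events (STM)."""
        def __init__(self, azimuth, elevation):
            self.azimuth = azimuth        # degrees, robot-centered
            self.elevation = elevation    # degrees, robot-centered
            self.events = []              # (timestamp, modality, data) tuples

    class SensoryEgoSphere:
        """Illustrative ego-sphere: a coarse grid of nodes around the robot."""
        def __init__(self, az_step=20, el_step=20, memory_span=30.0):
            self.memory_span = memory_span    # seconds an event stays in STM
            self.nodes = [SESNode(az, el)
                          for az in range(-180, 180, az_step)
                          for el in range(-90, 90 + el_step, el_step)]

        def nearest_node(self, azimuth, elevation):
            """Find the node closest to a given egocentric direction."""
            return min(self.nodes,
                       key=lambda n: (n.azimuth - azimuth) ** 2 +
                                     (n.elevation - elevation) ** 2)

        def post(self, azimuth, elevation, modality, data):
            """Independent sensory modules post their results here, in parallel."""
            node = self.nearest_node(azimuth, elevation)
            node.events.append((time.time(), modality, data))

        def expire(self):
            """Short-term memory: drop events older than the memory span."""
            now = time.time()
            for node in self.nodes:
                node.events = [e for e in node.events
                               if now - e[0] < self.memory_span]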

The mathematical equivalence of Sensory-Motor Coordination (SMC) events, the Basic Behaviors of a behavior-based robot, and the competency modules (CM) of a spreading activation network (SAN) is the principle behind the proposed control architecture. In essence, SMC forms the basis for intelligent behavior by the robot. The detection of spatio-temporally coincident SMC events is essential to the control architecture.
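
As a concrete illustration of this equivalence, the sketch below implements a simplified Maes-style competency module and one step of spreading activation; the spreading rule, weights, and threshold are simplifying assumptions rather than the network actually used in the proposed architecture.

    class CompetencyModule:
        """One competency module (equivalently, a basic behavior / SMC event)."""
        def __init__(self, name, preconditions, add_list):
            self.name = name
            self.preconditions = set(preconditions)  # sensory conditions required
            self.add_list = set(add_list)            # conditions the action achieves
            self.activation = 0.0

        def executable(self, world_state):
            return self.preconditions <= world_state

    def spread_activation(modules, world_state, goals, energy=1.0):
        """One step of forward/backward spreading (simplified Maes network)."""
        for m in modules:
            # Forward: conditions observed in the world feed matching preconditions.
            m.activation += energy * len(m.preconditions & world_state)
            # Backward: goals feed modules whose effects would satisfy them.
            m.activation += energy * len(m.add_list & goals)
        # Successor links: a module passes activation to modules it enables.
        for m in modules:
            for n in modules:
                if m is not n and m.add_list & n.preconditions:
                    n.activation += 0.5 * energy

    def select_action(modules, world_state, threshold=1.0):
        """Pick the most activated module that is currently executable."""
        candidates = [m for m in modules
                      if m.executable(world_state) and m.activation >= threshold]
        return max(candidates, key=lambda m: m.activation) if candidates else None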

The sheer volume of sensor data resulting from an agent's interactions with the world precludes processing all of it. Yet a subset of those data is necessary for the successful completion of any task. A task-dependent mechanism for filtering sensory information is therefore a prerequisite for intelligent behavior. The proposed architecture uses attentional weighting on the SES to judge the relative importance of sensory information. (Attention plays a further role in action selection.)
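
A minimal sketch of such task-dependent attentional weighting follows; the event format, weight source, and capacity limit are hypothetical, intended only to show how attention weights could filter SES data down to a tractable subset.

    def attention_filter(events, task_weights, capacity=10):
        """Keep only the most task-relevant sensory events.

        events       : list of (modality, salience, data) tuples from the SES
        task_weights : dict mapping modality -> task-dependent weight from LTM
        capacity     : how many events downstream processing can afford
        """
        scored = [(task_weights.get(modality, 0.0) * salience, modality, data)
                  for modality, salience, data in events]
        scored.sort(reverse=True, key=lambda s: s[0])
        return scored[:capacity]

    # Example: a grasping task weights vision and touch far above sound.
    grasp_weights = {"vision": 1.0, "tactile": 0.8, "audio": 0.1}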

Learning Action Maps through Teleoperation

Action Maps

An Action Map (AM) is a set of actions linked in sequence by sensory events. Through spreading activation, an AM can plan multiple action sequences simultaneously without high-level reasoning. The multiple drafts are intertwined through shared sensory events, which produces an exponential increase in possible solution strategies relative to the number of actions in the draft sequences. Each action sequence consists of basic behaviors (simple actions tightly coupled to sensory input). This embeds the robot in its environment and gives it short-term reactivity while it remains focused on longer-term goals.
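
The fragment below sketches an action map whose links are sensory events; for clarity it enumerates the interleaved drafts by explicit graph search, whereas in the proposed architecture the interleaving would emerge from spreading activation. The actions and events are hypothetical.

    from collections import defaultdict

    class ActionMap:
        """Actions linked in sequence by the sensory events they consume/produce."""
        def __init__(self):
            # trigger sensory event -> list of (action, resulting sensory event)
            self.links = defaultdict(list)

        def link(self, trigger_event, action, result_event):
            self.links[trigger_event].append((action, result_event))

        def drafts(self, current_event, goal_event, path=()):
            """Enumerate candidate action sequences ('drafts') from the current
            sensory event to the goal event.  Because drafts share events, the
            number of interleaved strategies grows rapidly with the map size."""
            if current_event == goal_event:
                yield list(path)
                return
            for action, result in self.links[current_event]:
                if action not in path:                 # avoid trivial loops
                    yield from self.drafts(result, goal_event, path + (action,))

    # Example: two interleaved ways to get a wrench into the hand.
    am = ActionMap()
    am.link("wrench-seen", "reach", "hand-near-wrench")
    am.link("wrench-seen", "ask-human", "wrench-offered")
    am.link("hand-near-wrench", "grasp", "wrench-in-hand")
    am.link("wrench-offered", "take", "wrench-in-hand")
    print(list(am.drafts("wrench-seen", "wrench-in-hand")))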

Human-Robot Interaction

Human-robot interface (HRI) issues are becoming an increasingly important part of robotics research. However, the focus of the robotics community is still 'robot-centric', emphasizing the technical challenges of achieving intelligent control and mobility. Only recently has the state of the art in human-robot interfaces improved to the point that humans can interact with robots through user-friendly multimedia interfaces.

Before intelligent robots are fully developed and integrated as team members, we need to look more carefully at the nature of human-robot relationships. One way to do this is to consider the work that has been going on in the human-computer interaction (HCI) community. The HCI community has recently adopted a strong emphasis on 'human-centered' computing, which uses a variety of cognitive modeling techniques.

Partnership between a human and a robot could be enhanced

  • if the robot were intelligent enough to recognize human intentions and adapt its behaviors,
  • if the human could see and understand the robot's intentions and make necessary modifications in task assignment and execution, and
  • if the human could more easily assimilate and quickly comprehend the 'big picture' of what all the robot's sensors are providing.

Interface between ISAC and remote humans

We propose to design and implement a variety of graphical displays of SES content on a remote screen, so that a human can interact more effectively with a robot at a remote location by issuing high-level commands such as 'pick up wrench'. Issues such as a 'robot cockpit of the future' will be investigated as well. This task is being conducted exclusively at Vanderbilt using ISAC.
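
A hypothetical sketch of how such a high-level command might be resolved against SES content is shown below; it assumes the SES representation from the earlier sketch, and the command grammar and behavior hook are placeholders.

    def handle_remote_command(command, ses, pick_up):
        """Resolve a high-level command such as 'pick up wrench' against the SES.

        command : text string issued from the remote graphical display
        ses     : SensoryEgoSphere instance (see the earlier sketch)
        pick_up : callable implementing the pick-up behavior on the robot
        """
        if not command.startswith("pick up "):
            return "unrecognized command"
        target = command[len("pick up "):]
        # Search short-term memory for a visual detection of the named object.
        for node in ses.nodes:
            for _, modality, data in node.events:
                if modality == "vision" and data.get("label") == target:
                    return pick_up(target, node.azimuth, node.elevation)
        return f"'{target}' is not currently in short-term memory"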

Interface between Robonaut and remote humans

This task will investigate how the Robonaut could be used to generate the SES graphics remotely. A communication and control system will be designed to connect the Robonaut site with the Cognitive Robotics Lab at Vanderbilt, so that the Robonaut can be operated remotely from Vanderbilt to perform a variety of tasks using the SES.

Mix Autonomy into the Teleoperation Approach

Pure teleoperation is tedious, slow, and stressful; it requires such a high level of attention that one operator is generally limited to operating a single robot. As a result, human productivity is limited. To break away from the pure teleoperation paradigm, we propose a shared control architecture that lies between teleoperation and pure autonomy. Operator states such as stress, boredom, and lack of attention can serve as indicators for when to shift control. We have therefore designed the following tasks for this project.

  • Design intelligent autonomous behaviors (e.g., reactive behaviors) that can be used by the robots in autonomous mode.
  • Use affective computing to detect stress, boredom, lack of attention, etc.
    • We have developed new tools to detect stress using the wavelet transform and fuzzy logic.
    • These states will be used to decide when and how to switch control (a sketch of such a mode-switching rule follows this list).
  • Develop fault detection tools as well as self-agents that can monitor any developing fault in the robots.
  • Integrate all tasks with the adaptive user interface and human-robot interaction modeling.
    • Integrate stress detection algorithms with Robonaut so that it can detect an operator's stress and switch the level of autonomy.
    • Develop a self-agent for Robonaut along with fault detection and isolation tools.
    • Self-diagnose sensor and actuator faults and alert the operator (using Robonaut as the platform).
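
The following sketch illustrates one possible mode-switching rule of the kind referenced above; the thresholds, score normalization, and mode names are assumptions, and the actual decision would come from the wavelet/fuzzy-logic tools and self-agents described in these tasks.

    # Control modes available in the shared control architecture.
    TELEOPERATION, SHARED, AUTONOMOUS = "teleoperation", "shared", "autonomous"

    def choose_control_mode(stress, attention, fault_detected):
        """Map operator state and robot health to a control mode.

        stress, attention : normalized scores in [0, 1] from the affect-sensing
                            tools (e.g., the wavelet/fuzzy stress detector)
        fault_detected    : flag raised by the self-agent's fault monitoring
        """
        if fault_detected:
            # A developing robot fault is reported; keep the human in the loop.
            return TELEOPERATION
        if stress > 0.7 or attention < 0.3:
            # Overloaded or inattentive operator: hand more of the task to
            # the robot's autonomous behaviors.
            return AUTONOMOUS
        if stress > 0.4:
            return SHARED
        return TELEOPERATION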

Experimentation and Integration Plan

Results will be integrated, and experiments performed, at several levels demonstrating the general applicability of the techniques and technologies we propose.

First, synergistic interaction of our technologies is a cornerstone of this proposal. We expect the integration of these technologies to enable new levels of dynamic autonomy interaction between the human and the humanoid team. Several of the researchers will work at JSC during the summer months and the final demonstration will be conducted in Houston as well.

Second, we expect to integrate this work with other teams within the DARPA MARS program. This will help demonstrate that these techniques are applicable to other robotic efforts. Our integrated system will be developed as a set of modules with an interface designed to ease integration by other contractors. Our experimentation plans will support these integration plans: the Vanderbilt team will support full integration and help demonstrate integration across different platforms in realistic situations.

This work is being sponsored by a grant from DARPA administered through NASA-JSC.