AMCOM

Recently, robots have begun to make their way into the field for military missions. Additionally, the DoD is supporting advanced research in robots as evidenced by the DARPA programs in Mobile Autonomous Robot Software (MARS) and Tactical Mobile Robots (TMR), as well as the DoD effort in Future Combat Systems (FCS).
One thing that has become clear in such robotic applications is the importance of creating an effective team out of the humans and robots involved.

Some of the capabilities that are needed for effective teaming between man and robot are:
  • Cooperative integration (i.e., via mixed-initiative systems) of the human's superior perceptual abilities into the robot's activity, without resorting to teleoperation
  • A novel Perception-Based Navigation capability that can incorporate such human perception along with machine perception to achieve robust performance
  • Sharing of knowledge of the local tactical situation between the robots on the team
  • A practical, intuitive user interface facilitating effective two-way cooperation between man and machine

In the Intelligent Robotics Laboratory at Vanderbilt University we are developing enabling technologies for providing these capabilities. We are developing the Sensory EgoSphere (SES), the Landmark EgoSphere (LES), Perception-Based Navigation (PBNav), and portable Graphical User Interfaces (GUIs) based on hand-held computers or PDAs.

Illustrative Scenario

Consider the following scenario as a practical example of human-robot teaming in the field. The mission involves secure perimeter surveillance and reporting. There is a command post where a human commander can observe the progress of the mission as well as communicate with his teams. There are soldiers in the field, each supervising the activity of a mobile robot assigned to him. PDAs are used by the soldiers as the interfaces to the robots. In this scenario we have two soldier-robot teams in the field. The first team has observed something unusual, i.e., a possible intruder, which must be checked out further. This team will need the assistance of the second team to encircle the location of the possible intruder prior to moving in to evaluate the situation. The robot in the first team has gathered valuable local tactical data that the robot in the second team will need to assist in this mission. The first robot shares this knowledge with the second robot and then guides the second robot into position using a perception-based navigation approach. Once the robots are in position, the second robot moves in to evaluate the situation in greater detail.

Clearly, such a scenario will require the integration of:

  • cooperative decision making,
  • perception (both by man and by machine),
  • world modeling and theatre map representation,
  • cooperative robot behavior, and
  • human-robot interfaces.

We propose to accomplish such integration using our SES, LES, PBNav, and GUI technologies described below. In the scenario described above, we will use one of our robots, a Pioneer 2-AT, as the team 1 robot, and a track-driven, all-terrain Matilda robot as the team 2 robot.

Enabling Technologies

Sensory Egosphere

Our robot navigation system comprises local and global paradigms. Local navigation is reactive and based solely on information within the immediate sensing region of the robot. Global navigation uses information from beyond the limits of local sensors (the sensory horizon) and is deliberative. This division implicitly organizes the robot's memory into long-term and short-term components. Long-term memory is, in part, spatially organized to support global navigation. It includes a map of landmarks that should be expected by the robot, and objects that it has detected during its current and previous forays.

Short-term memory (STM) for reactive navigation is provided by the SES. Sensory processing modules write information directly to the SES. Short-term memory is robo-centric, sparse, and of limited duration.

At any specific location in the environment, the sensory horizon defines the region in which the robot can sense. Only those objects within this region have the potential to be sensed and stored on the SES. During navigation, the robot updates its STM periodically, in effect creating instances of SES structures at discrete locations. Each instance of the SES is a snapshot of the environment (as the robot sensed it) at a specific space-time location. These instances form a chain of SES representations that defines a topological map: a collection of nodes and connecting arcs. At the end of navigation, a series of SES instances is stored in short-term memory.
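
As a rough illustration of this organization, the sketch below shows one possible way to structure SES snapshots and the short-term memory chain. The class names, the bearing-only detections, and the fixed snapshot budget are illustrative assumptions, not the implementation running on our robots.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One object written to the SES by a sensory processing module."""
    label: str                  # e.g. "barrel", "doorway" (illustrative labels)
    azimuth_deg: float          # robo-centric bearing at which it was sensed
    timestamp: float = field(default_factory=time.time)

@dataclass
class SESSnapshot:
    """A snapshot of the environment as sensed at one space-time location."""
    pose: tuple                 # (x, y, heading_deg) where the snapshot was taken
    detections: list            # Detection objects on the egosphere at that moment

class ShortTermMemory:
    """Sparse, limited-duration STM holding a chain of SES snapshots.

    The chain doubles as a topological map: each snapshot is a node and
    consecutive snapshots are joined by an implicit arc.
    """
    def __init__(self, max_snapshots=50):        # assumed, fixed duration window
        self.chain = deque(maxlen=max_snapshots)  # oldest snapshots fall off

    def record(self, pose, detections):
        self.chain.append(SESSnapshot(pose, list(detections)))

    def topological_map(self):
        nodes = list(self.chain)
        arcs = list(zip(nodes, nodes[1:]))        # connect consecutive nodes
        return nodes, arcs
```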

Landmark EgoSphere

The robot's long-term memory contains global layout information. The structural layout and major landmarks are indicated on the rough a priori map that is loaded into the robot. This map may be subsequently modified by the topological data stored in the STM. At any given position, landmarks presumed to be surrounding the robot are represented by a long-term memory structure called the Landmark EgoSphere (LES).

The LES is a robo-centric representation of environmental features expected at the current position. Within our navigational scheme, the directional information provided by the LES is of prime importance to the robot control system. In addition to long-term and short-term memory, the robot uses a working memory. This holds the descriptors of key locations (via-regions) that indicate transition points for navigation.
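
The following minimal sketch suggests how an LES might be derived from the a priori landmark map at a given position, and how via-region descriptors could be held in working memory. The function name, the made-up landmark coordinates, and the assumed robot poses are illustrative only.

```python
import math

def build_les(robot_pose, landmark_map):
    """Expected robo-centric bearing (deg) to each landmark, from the a priori map.

    robot_pose: (x, y, heading_deg); landmark_map: label -> (x, y) world position.
    Only directions are kept; range is deliberately not represented.
    """
    x, y, heading = robot_pose
    les = {}
    for label, (lx, ly) in landmark_map.items():
        world_bearing = math.degrees(math.atan2(ly - y, lx - x))
        les[label] = (world_bearing - heading) % 360.0   # rotate into robot frame
    return les

# Working memory: descriptors of key via-regions along the route, each an LES
# expected at that transition point. Coordinates are invented for illustration;
# headings are set to 0 because the pairwise-separation matching sketched later
# is rotation-invariant.
landmarks = {"tower": (10.0, 2.0), "gate": (4.0, 8.0), "shed": (-3.0, 5.0)}
working_memory = [
    build_les((1.0, 1.0, 0.0), landmarks),   # via-region 1
    build_les((5.0, 4.0, 0.0), landmarks),   # via-region 2
    build_les((8.0, 7.0, 0.0), landmarks),   # target region (point B)
]
```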

Perception-Based Navigation

Perception-based navigation is our goal, and toward this end we have developed a simple algorithm. Without requiring any range information, the algorithm moves the robot toward a target location based on its current perception. Assume that the robot is at point A and needs to move to point B, as shown in Figure 4. The icons shown in the figure represent the objects provided to the robot a priori.

Since range information is not explicitly sought, the robot represents its current perception by using the angular separation between perceived objects. From point A the world appears to the robot as shown in Figure 2.


Figure 2: Robot's perception of the world from point A (SES).

This angular representation is the basic element of the whole architecture, and depending on the source of the information, it will be referred to either as an SES or as an LES. If it is created via perception, it is an SES. If it is extracted from the a priori map provided to the robot, or if it is given to the robot as a target location by a user or another robot, it is an LES. Therefore, the distribution presented in Figure 2 is an SES representation. Since point B is the target location that the robot is expected to reach, the expected distribution at the target will be an LES representation, as shown in Figure 3.


Figure 3: Worldview from the target position that the robot is trying to reach (LES).
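
This range-free angular representation can be sketched as follows. Reading "angular separation between perceived objects" as a set of pairwise separations is our own simplification, and the helper below is illustrative rather than the system's actual data structure.

```python
from itertools import combinations

def angular_separations(egosphere):
    """Pairwise angular separations (deg) between objects on an egosphere.

    egosphere: dict mapping object label -> robo-centric bearing in degrees.
    The same structure serves as an SES (when built from perception) or an
    LES (when built from the a priori map or received from a user or robot).
    """
    seps = {}
    for (a, ba), (b, bb) in combinations(sorted(egosphere.items()), 2):
        diff = abs(ba - bb) % 360.0
        seps[(a, b)] = min(diff, 360.0 - diff)   # fold into [0, 180]
    return seps

# Example with made-up bearings: the SES at point A and the LES at point B can
# now be compared purely by their angular structure, with no range involved.
ses_at_A = angular_separations({"tower": 15.0, "gate": 80.0, "shed": 200.0})
```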

This algorithm enables the robot to move to a target region as soon as it begins to see at least one of the landmarks in the description of that region. It is actually a simple behavior in which the robot takes a step that changes the current distribution of objects around it to bring it closer to the target distribution. When this is done iteratively, the robot reaches the target location. This will be referred to as the "perception-based navigation" behavior.
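
One iteration of this behavior might be sketched as follows, reusing angular_separations from above. The mismatch measure, the candidate-heading search, and the hypothetical robot interface (sense, predict_ses, move) are simplifying assumptions for illustration, not the algorithm as implemented on our robots.

```python
def distribution_mismatch(current_seps, target_seps):
    """Total disagreement over landmark pairs present in both distributions."""
    shared = current_seps.keys() & target_seps.keys()
    if not shared:
        return float("inf")      # no common landmark yet: behavior cannot engage
    return sum(abs(current_seps[p] - target_seps[p]) for p in shared)

def pbnav_step(robot, target_les, step=0.5, headings=range(0, 360, 30)):
    """One iteration of the perception-based navigation behavior.

    `robot` is a hypothetical interface assumed to expose sense() -> {label: bearing},
    predict_ses(heading_deg, step) -> {label: bearing}, and move(heading_deg, step).
    Each candidate heading is evaluated by predicting how the sensed angular
    distribution would change; the robot commits to the heading that brings the
    distribution closest to the target LES, and the remaining mismatch is returned.
    """
    target = angular_separations(target_les)
    best_cost = distribution_mismatch(angular_separations(robot.sense()), target)
    best_heading = None
    for h in headings:
        predicted = angular_separations(robot.predict_ses(h, step))
        cost = distribution_mismatch(predicted, target)
        if cost < best_cost:
            best_heading, best_cost = h, cost
    if best_heading is not None:
        robot.move(best_heading, step)
    return best_cost
```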

In the example above (and typically in realistic situations), reaching the target location requires passing through several key locations marked by distinguishing landmarks. With its limited depth of vision, the robot cannot move directly from point A to point B. It therefore needs a series of LESs describing via locations until it can reach the target, which it does by moving through these regions using perception-based navigation. Figure 4 illustrates the intermediate regions that the robot passes through while navigating to point B.


Figure 4: Via regions that the robot has to pass to reach point B.
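
Finally, a brief usage sketch of how such a chain of via-region LESs (e.g., the working_memory list above) could be followed with the step function just described; the arrival tolerance and step budget are arbitrary illustrative values.

```python
def follow_via_regions(robot, via_regions, arrival_tol_deg=10.0, max_steps=200):
    """Follow a chain of via-region LESs in turn until the final target is matched."""
    for les in via_regions:                       # e.g. the working_memory list above
        for _ in range(max_steps):                # guard against never converging
            if pbnav_step(robot, les) <= arrival_tol_deg:
                break                             # close enough: go to the next region
```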
