Perception of the Environment for Human-Robot Teaming and Robot-Robot Teaming


Teams of humans and robots working together can provide effective solutions to many problems. Effective teaming in such applications relies on the ability to communicate each member's current perception or understanding of the environment, so representation plays a central role in perceptual knowledge sharing within robot-robot and human-robot teams. In the Center for Intelligent Systems (CIS) at Vanderbilt University, two labs explore these issues: the Cognitive Robotics Lab (link) and the Intelligent Robotics Lab (link), both of which use egocentric representations of the robot's perceptual knowledge. Our research focuses on the representations of the state of the world used by the humanoid robot ISAC, the Segway robot, and the human user as special team members. Two knowledge-sharing representations have been developed in the CIS: the Sensory EgoSphere (SES), our main egocentric representation, and the Landmark EgoSphere (LES). The LES describes the angular distribution of the landmarks that the robot expects to see at a goal position, and is similar in structure to the SES [1]. For navigational tasks we often use a simplified EgoSphere in which the original 3-D structure is projected down to a 2-D structure, as shown in Figure 1.

Figure 1: Geodesic dome around the robot and mapping of objects to egosphere [1]

Given that the sensors on a robot are discrete with regard to angular positioning, there is nothing to be gained by defining the SES as a continuous structure. Moreover, the computational complexity of using the SES increases with its size, which is in turn dependent on its density (the number of points on its surface). We use a (virtual) geodesic dome structure for the SES since it provides a uniform tessellation of vertices such that each vertex is equidistant (along geodesics) from its six nearest neighbors.
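The nearest-vertex lookup implied above can be sketched as follows. This is a minimal illustration, not the CIS implementation: it uses the 12 vertices of an icosahedron as a coarse stand-in for the dome's tessellation (a real SES would subdivide the faces for higher density), and finds the vertex closest to a stimulus direction by maximizing the dot product with the unit direction.

```python
import math

# Assumed geometry: icosahedron vertices as a coarse geodesic tessellation.
PHI = (1 + math.sqrt(5)) / 2
_raw = [(0, 1, PHI), (0, -1, PHI), (0, 1, -PHI), (0, -1, -PHI),
        (1, PHI, 0), (-1, PHI, 0), (1, -PHI, 0), (-1, -PHI, 0),
        (PHI, 0, 1), (PHI, 0, -1), (-PHI, 0, 1), (-PHI, 0, -1)]

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

VERTICES = [_normalize(v) for v in _raw]

def nearest_vertex(direction):
    """Index of the dome vertex closest to a stimulus direction."""
    d = _normalize(direction)
    # Largest dot product with a unit vertex = smallest angular distance.
    return max(range(len(VERTICES)),
               key=lambda i: sum(a * b for a, b in zip(VERTICES[i], d)))
```

Because the lookup only compares angles, the magnitude of the stimulus direction vector is irrelevant, matching the SES's purely directional indexing.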

Sensory EgoSphere (SES)

In our current research on ISAC-Segway teaming, the SES is used to represent the robot's perception of its environment in its current state. The SES was derived from Albus's earlier research on the egosphere [2]. In the visual perception correction method, the robot shares its understanding of the environment, via its current SES, with the human so that the human may correct possible misperceptions by the robot. As the robot operates within its environment, events, both external and internal, stimulate the robot's sensors. Upon receiving a stimulus, the associated sensory processing module writes its output data (including the time of detection) to the SES at the node closest to the direction from which the stimulus arrived. Because the robot's sensory processing modules are independent and concurrent, multiple sensors stimulated by the same event will register the event on the SES at about the same time. If the event is directional, the different modules will write their data at the same location on the SES. Hence, sensory data of different modalities coming from similar directions at similar times will register close to each other on the SES.
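The registration process described above can be sketched as follows. The class and field names here are illustrative assumptions, not the CIS software: each sensory processing module independently appends a (tag, timestamp, data) record to the node nearest the stimulus direction, so detections of the same directional event by different modalities accumulate at the same vertex.

```python
import time
from collections import defaultdict

class SES:
    """Minimal sketch of SES event registration (structure assumed)."""
    def __init__(self):
        self.nodes = defaultdict(list)   # vertex index -> list of records

    def register(self, vertex_index, tag, data, t=None):
        # Each record carries its modality tag and time of detection.
        stamp = t if t is not None else time.time()
        self.nodes[vertex_index].append((tag, stamp, data))

ses = SES()
# A sound and a visual detection arriving from the same direction register
# at the same vertex at nearly the same time.
ses.register(3, "audio", {"level_db": 62})
ses.register(3, "vision", {"object": "red ball"})
```

Co-location on the same node is what later lets the robot (or a human teammate inspecting the SES) associate multimodal data with a single event.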

Multiple-linked List of Pointers to Data Structures

In the SES, there is one pointer for each vertex on the dome. Each pointer record has seven links: one to each of its six nearest neighbors and one to a tagged-format data structure. The latter comprises a terminated list of alphanumeric tags, each followed by a time stamp and another pointer. A tag indicates that a specific type of sensory data is stored at the vertex. The corresponding time stamp indicates when the data was stored. The pointer associated with the tag points to the location of a data object that contains the sensory data and any function specifications (such as links to other agents) associated with it. The type and number of tags on any vertex of the dome are completely variable. In practice, the SES is often not a complete geodesic dome; instead, it is restricted to only those vertices that fall within the directional sensory field of the robot. Imagery or image features can be stored at the vertex closest to the direction of the object identified in the image, as shown previously in Figure 1.
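The vertex record described above can be sketched in code. This is a hedged illustration (field names are assumptions, and Python references stand in for the pointers): each vertex holds six neighbor links plus a variable-length tagged list, where each entry pairs a tag and timestamp with the stored data object.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TagEntry:
    tag: str          # type of sensory data, e.g. "vision"
    timestamp: float  # when the data was stored
    data: Any         # sensory data object plus any function specifications

@dataclass
class VertexRecord:
    # Six links to nearest-neighbor vertices (None until wired into the dome).
    neighbors: list = field(default_factory=lambda: [None] * 6)
    # Variable-length tagged list: type and number of tags are unconstrained.
    tags: list = field(default_factory=list)

v = VertexRecord()
v.tags.append(TagEntry("vision", 12.5, {"image": "frame_042"}))
```

Keeping the tag list open-ended mirrors the text's point that any mix of modalities, in any quantity, can accumulate at a single vertex.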

Landmark EgoSphere (LES)

The LES may be obtained in several ways. For example, it may be the result of the robot's perception on a previous navigational mission, the result of another robot's perception at the target point, or it may be derived from a rough map of the area. We have developed a knowledge-sharing method for a team of two heterogeneous robots, with emphasis on sharing LES information between the robots. Our approach takes inspiration from the qualitative navigation methods of Levitt and Lawton [3] and Dai and Lawton [4], among others.
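One way the angular distribution in an LES can support navigation is by comparing the expected landmark bearings at the goal with the bearings currently perceived. The following sketch is an assumption for illustration, not the method of [1] or [3]: it brute-force searches for the heading offset that minimizes the total angular mismatch between the two bearing sets.

```python
import math

def angle_diff(a, b):
    """Smallest absolute difference between two angles, in radians."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def alignment_error(les, observed, offset):
    """Total mismatch when observed bearings are rotated by offset."""
    return sum(angle_diff(les[name], bearing + offset)
               for name, bearing in observed.items() if name in les)

def best_heading_offset(les, observed, steps=360):
    """Brute-force search over candidate heading offsets."""
    return min((2 * math.pi * k / steps for k in range(steps)),
               key=lambda off: alignment_error(les, observed, off))
```

A small residual error after alignment suggests the robot is near the goal pose encoded by the LES; a large one indicates it is elsewhere or the landmark matches are wrong.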

Robot-Robot Team Knowledge Sharing

Knowledge sharing among mobile robots is a significant aspect of mobile robotics research. The content of the shared knowledge depends on the task that the robots must perform. In a heterogeneous robot team, sharing one robot's raw sensory data would not make sense to the other robots. For this reason, the representation plays an important role in knowledge sharing within a robot team [5], and it should be at a higher level of abstraction than raw sensory data. The LES is well suited to being shared among robots because it is a natural representation of the objects in the robot's environment [6].
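The abstraction argument above can be made concrete with a small sketch (the field names and the use of JSON are assumptions for illustration, not the CIS protocol): an LES reduces to landmark identities and bearings, a platform-neutral payload that any teammate can interpret, unlike a raw sensor stream tied to one robot's hardware.

```python
import json

# Hypothetical LES payload: landmark names and expected bearings only.
les = {"landmarks": [{"name": "door", "bearing_rad": 0.0},
                     {"name": "window", "bearing_rad": 1.57}]}

message = json.dumps(les)        # serialize for transmission to a teammate
received = json.loads(message)   # the receiving robot reconstructs the LES
```

Because the message contains no sensor-specific values, a camera-based robot and a sonar-based robot can exchange it without either needing to understand the other's sensing hardware.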


[1] K. Kawamura, A.B. Koku, D.M. Wilkes, R.A. Peters II and A. Sekmen, "Toward Egocentric Navigation", Int. J. of Robotics and Automation, vol. 17, no. 4, pp. 135-145, 2002.
[2] J.S. Albus, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, pp. 473-509, 1991.
[3] T.S. Levitt and D.T. Lawton, "Qualitative navigation for mobile robots", Artificial Intelligence, vol. 44, pp. 305-360, 1990.
[4] D. Dai and D.T. Lawton, "Range-free qualitative navigation", Proc. of the IEEE Int. Conf. on Robotics and Automation, Atlanta, GA, pp. 783-790, 1993.
[5] T. Keskinpala, D.M. Wilkes, K. Kawamura, and A.B. Koku, "Knowledge Sharing Techniques for Egocentric Navigation", Proc. of the 2003 IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), Washington, DC, October 2003.
[6] A.B. Koku, "Egocentric Navigation and Its Applications", Ph.D. Dissertation, May 2003.