Egocentric Navigation (ENav) is a basic navigational behavior designed to operate using only egocentric representations, namely the Sensory EgoSphere (SES) and the Landmark EgoSphere (LES). The ENav algorithm moves the robot to a target location based on the robot's current perception, without explicitly requiring any range information. In the absence of range information, the robot uses the SES to represent its current perception through the angular separations between perceived objects. Figure 1 shows sample egocentric representations.
Figure 1. Sample egocentric representations. The current representation of sensory data is labeled the SES; the target representation is labeled the LES.
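As a concrete sketch of such a representation (the names and data layout below are my own, not fixed by the paper), an EgoSphere can be modeled as a map from landmark labels to unit direction vectors on the robot-centered sphere; the angular separation between two landmarks is then just the angle between their direction vectors:

```python
import math

def unit_from_angles(azimuth_deg, elevation_deg):
    """Unit vector on the egosphere for a landmark perceived at the
    given azimuth/elevation (degrees) in the robot's frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def angular_separation(u, v):
    """Angle (radians) between two landmark directions."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety

# A Sensory EgoSphere (SES): label -> unit direction (hypothetical data)
ses = {
    "blue_cone": unit_from_angles(30.0, 0.0),
    "pink_cone": unit_from_angles(-45.0, 5.0),
}
```

An LES has the same shape, except that its directions describe how the landmarks should appear from the target location rather than from the current one.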
Navigation depends heavily on perception, since the goal is to move the robot to a location where its perception closely matches the target representation defined by the LES. The navigation algorithm takes an SES and an LES as input and compares the SES produced by the perception process with the LES. This comparison yields an error term that quantifies the difference between the current SES and the target LES. While the error remains above a threshold, a heading is computed from the SES and the LES; by iteratively moving along this heading, the robot is driven toward the target point, where the error drops below the threshold.
Error and heading computations are based on a pair-wise analysis of landmarks. First, landmarks not common to both the SES and the LES are removed. Next, pairs are formed from the remaining common landmarks, and each pair is compared between the LES and the SES. In this comparison, the smaller-magnitude angle between the two landmarks defines their separation. Based on this comparison, each pair contributes a unit vector along the bisector of the two landmark directions, pointing either toward or away from the landmarks. Finally, the heading vector for the situation defined by the SES-LES pair is obtained by summing all the resulting unit vectors.
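The first two steps, filtering to common landmarks and forming the pairs, can be sketched directly (the label-to-direction dictionary representation is an assumption carried over from above):

```python
from itertools import combinations

def common_landmark_pairs(ses, les):
    """Keep only landmarks present in both the SES and the LES, then
    form all unordered pairs (i, j), i != j, for the pair-wise
    comparison. `ses`/`les` map landmark labels to direction vectors."""
    common = sorted(set(ses) & set(les))    # sorted for deterministic order
    return list(combinations(common, 2))

# Example: only landmarks seen in both representations are paired.
ses = {"blue_cone": None, "pink_cone": None, "yellow_cone": None}
les = {"green_cone": None, "blue_cone": None, "pink_cone": None}
# common_landmark_pairs(ses, les) -> [("blue_cone", "pink_cone")]
```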
The ENav algorithm can also be described with vector algebra. First, a unit vector is created pointing to every landmark on the SES and the LES. The landmarks on the SES are represented by unit vectors $u^c_i$ (the superscript $c$ indicates current perception) and the landmarks on the LES by unit vectors $u^t_i$ (the superscript $t$ indicates the target location), as shown in Figure 2. The indices $i$ and $j$ range over the individual landmarks in the distinct pairs formed from the SES and the LES. The pair-wise vector analysis is carried out as follows:
$$d^c_{ij} = u^c_i \cdot u^c_j, \qquad C_{ij} = u^c_i \times u^c_j, \qquad i \neq j$$
$$d^t_{ij} = u^t_i \cdot u^t_j, \qquad T_{ij} = u^t_i \times u^t_j$$
$$A_{ij} = \operatorname{sgn}\bigl(d^c_{ij} - d^t_{ij}\bigr) \tag{1}$$
$$B_{ij} = \frac{\operatorname{sgn}\bigl(C_{ij} \cdot T_{ij}\bigr) + 1}{2} \tag{2}$$
$$u_{ij} = \bigl(1 + B_{ij}(A_{ij} - 1)\bigr)\,\frac{u^c_i + u^c_j}{\lVert u^c_i + u^c_j \rVert} \tag{3}$$
$$h = \sum_{i,j} u_{ij} \tag{4}$$
Figure 2. Computation of heading based on SES and LES
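Equations (1)-(4) can be sketched in a few lines of pure Python. The label-to-3-D-unit-vector dictionaries for the SES and LES are my own representation, not fixed by the paper:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sgn(x):
    return (x > 0) - (x < 0)

def heading(ses, les):
    """Sum the per-pair bisector vectors u_ij over all landmark pairs
    common to the SES and the LES, per eqs. (1)-(4)."""
    common = sorted(set(ses) & set(les))
    h = [0.0, 0.0, 0.0]
    for a in range(len(common)):
        for b in range(a + 1, len(common)):
            i, j = common[a], common[b]
            uci, ucj = ses[i], ses[j]
            uti, utj = les[i], les[j]
            A = sgn(dot(uci, ucj) - dot(uti, utj))                     # eq. (1)
            B = (sgn(dot(cross(uci, ucj), cross(uti, utj))) + 1) / 2  # eq. (2)
            s = [x + y for x, y in zip(uci, ucj)]
            n = math.sqrt(dot(s, s)) or 1.0       # guard: antipodal pair
            u_ij = [(1 + B * (A - 1)) * c / n for c in s]             # eq. (3)
            h = [x + y for x, y in zip(h, u_ij)]                      # eq. (4)
    return tuple(h)
```

Note how the factor $1 + B_{ij}(A_{ij} - 1)$ flips the bisector to $-1$ times itself exactly when $B_{ij} = 1$ and $A_{ij} = -1$, i.e., when the pair should push the robot away from the landmarks rather than toward them.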
There is a close relationship between ENav, EgoSpheres, and memory models. The navigation system comprises local and global paradigms. Local navigation is reactive and uses only the information within the robot's immediate sensing region, whereas global navigation is deliberative and uses information beyond the robot's sensory horizon. Dividing the system this way implicitly organizes the robot's memory into short-term and long-term structures. The SES provides short-term memory (STM) for reactive navigation, while long-term memory (LTM) contains global layout information and supports global navigation. In ENav, landmarks assumed to be around the robot are represented on the LES, which is an LTM structure. Within the ENav scheme there is also a task-specific memory module, which is task-dependent and holds descriptors of via regions that mark transition points for navigation.
This knowledge-sharing method can be further analyzed in terms of the three memory structures used in ENav. Short-term memory holds robot-centric topological regions (the SES), long-term memory holds global layout information, and task-specific memory holds robot-centric topological regions as LES representations that indicate transition points, i.e., a sequence of waypoints, for navigation. In this sense, what is actually shared between the robots, for one robot to navigate using the other's knowledge, is the task-specific memory.
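Under the assumptions above, sharing reduces to transferring an ordered list of LES waypoints from one robot to another; a minimal sketch, assuming the label-to-direction dictionary representation used earlier:

```python
def share_route(sender_task_memory):
    """Task-specific memory modeled as an ordered list of LES waypoints
    (landmark label -> direction vector). Sharing the route is simply
    transferring this list; the receiver then navigates to each LES in
    turn. Deep-copied so the receiver owns an independent route."""
    return [dict(les) for les in sender_task_memory]

# Hypothetical route for robot A, echoing the landmark sets of Figure 3.
route_a = [
    {"blue_cone": (1, 0, 0), "pink_cone": (0, 1, 0), "yellow_cone": (0, 0, 1)},
    {"green_cone": (1, 0, 0), "blue_cone": (0, 1, 0), "pink_cone": (0, 0, 1)},
]
route_b = share_route(route_a)   # robot B now holds A's waypoint LESs
```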
Figure 3. Illustration of the LES sharing method. The two LESs shown contain the landmark sets {Blue Cone, Pink Cone, Yellow Cone} and {Green Cone, Blue Cone, Pink Cone}, respectively.