Generating 3-D Cognitive Maps Using a 2-D Laser Scanner
An important part of robotics research is data collection and sensor processing. To interact with its environment, a robot must first be able to perceive information from that environment. Many institutions have investigated new and inventive ways to acquire information about an environment. Because 3-D laser scanners are extremely expensive, many institutions have adapted much cheaper 2-D laser scanners for this task, and the Center for Intelligent Systems is no different. Recent work has focused on panning a single 2-D laser scanner through an environment, collecting a dense set of data points, and extracting useful perceptual information from them.
Using a Segway Robotic Mobility Platform, provided by a grant from DARPA, together with a SICK LMS200 laser scanner, data was collected in both indoor and outdoor environments. Noting that even in outdoor environments a large percentage of objects can be associated with a smooth plane, the collected data sets were processed to determine, for each point, how well that point and its nearest neighbors fit a smooth, flat plane. Points that fit vertical planes were colored green, points that fit horizontal planes were colored white, points that fit no plane were colored blue, and points that were out of range were colored red.
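The plane-fit classification above can be sketched as follows. This is a minimal sketch, not the original implementation: it assumes a least-squares plane fit via SVD of the centered neighborhood, and the `fit_tol` and `axis_tol` thresholds are illustrative values, not parameters from the source.

```python
import numpy as np

def classify_point(point, neighbors, fit_tol=0.02, axis_tol=0.9):
    """Fit a plane to a point plus its nearest neighbors; return a color label.

    Colors follow the scheme in the text: green = vertical plane,
    white = horizontal plane, blue = no good plane fit.
    (fit_tol and axis_tol are illustrative assumptions.)
    """
    pts = np.vstack([point, neighbors])
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered point set.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    # RMS distance of the points from the fitted plane.
    rms = np.sqrt(np.mean(((pts - centroid) @ normal) ** 2))
    if rms > fit_tol:
        return "blue"         # neighborhood does not lie on a smooth plane
    if abs(normal[2]) > axis_tol:
        return "white"        # normal near the z-axis -> horizontal plane
    if abs(normal[2]) < 1 - axis_tol:
        return "green"        # normal near horizontal -> vertical plane
    return "blue"             # slanted surface: neither class
```

An out-of-range test (red) would happen upstream, on the raw range readings, before any neighborhood is formed.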
Using this simple classification in combination with a recursive technique that groups similar nearest-neighbor points, separate objects were identified in each environment, and a simple block world was constructed within the robot for use in localization and navigation. This internal representation of the environment is termed a cognitive map, or ''map in the head''.
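The recursive grouping of similar nearest-neighbor points can be sketched as a flood fill over the labeled point cloud. This is an assumed reading, not the original code: points are merged into one object when they share a plane label and lie within an illustrative `radius`, and an explicit stack stands in for literal recursion to avoid deep call chains.

```python
import numpy as np

def segment_objects(points, labels, radius=0.1):
    """Group labeled 3-D points into objects by recursively absorbing
    nearest neighbors that share the same plane label and lie within
    `radius`. Returns an object id per point.
    (The radius and same-label criterion are illustrative assumptions.)
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    assigned = np.full(n, -1)          # object id per point, -1 = unvisited
    next_id = 0
    for seed in range(n):
        if assigned[seed] != -1:
            continue
        assigned[seed] = next_id
        stack = [seed]
        while stack:
            i = stack.pop()
            # Absorb every unvisited point of the same label within radius.
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((dists < radius) & (assigned == -1))[0]:
                if labels[j] == labels[i]:
                    assigned[j] = next_id
                    stack.append(j)
        next_id += 1
    return assigned
```

Each connected group of points would then be summarized as one block (e.g. by its bounding box) to form the block world.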
Once the robot had constructed this cognitive map, it attempted to use the map for localization. Pairs of blocks from a random 3-D scan were compared to pairs of blocks from the cognitive map. If a pair from the random scan closely resembled a pair from the cognitive map, the robot calculated a possible position and orientation within the map and then computed a goodness factor for that pose. The goodness factor was simply a measure of what the robot actually detected in the random 3-D scan versus what it should have detected if the pose were indeed correct.
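One plausible reading of this goodness factor can be sketched as a ratio: of the map blocks the robot should have seen from the candidate pose, the fraction that a block in the scan actually accounts for. The sketch below is an assumption-laden illustration: blocks are reduced to 2-D centroids, the pose is (x, y, heading), and `match_radius` and `max_range` are hypothetical parameters not taken from the source.

```python
import numpy as np

def goodness(scan_blocks, map_blocks, pose, match_radius=0.5, max_range=8.0):
    """Score a candidate pose as (map blocks matched by the scan) over
    (map blocks the robot should have detected from that pose).
    scan_blocks: block centroids in the robot frame; map_blocks: in the
    map frame; pose: (x, y, heading in radians).
    (match_radius and max_range are illustrative assumptions.)
    """
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Transform the scan's block centroids into the map frame.
    scan_in_map = np.asarray(scan_blocks, dtype=float) @ R.T + np.array([x, y])
    # "Should have detected": map blocks within sensor range of the pose.
    expected = [m for m in np.asarray(map_blocks, dtype=float)
                if np.linalg.norm(m - np.array([x, y])) < max_range]
    if not expected:
        return 0.0
    matched = sum(
        1 for m in expected
        if np.any(np.linalg.norm(scan_in_map - m, axis=1) < match_radius)
    )
    return matched / len(expected)
```

The pose hypothesis with the highest goodness factor would then be taken as the robot's most likely location in the map.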