Intentional Vision in Humans and Robots


In conjunction with the Vanderbilt Psychology Department, research studies are being conducted to understand how people perceive the intentionality of robots and computers. The approach combines CIS robots, deployed in a variety of situations, with human-subject testing and analysis by members of the Psychology Department. From an engineering standpoint, this work has allowed robotics researchers to collaborate closely with researchers in human psychology when developing the building blocks of artificial cognitive systems, and it has provided a staging ground to test different aspects of these cognitive building blocks with actual human subjects. From a psychological standpoint, the collaboration offers insight into methods for creating artificial cognition and access to hardware systems with which to perform human-robot experiments.

Previous experiments have investigated how much intentionality humans are willing to assign to artificial systems such as computers and robots. Experiments have also tested the effectiveness of an interactive emotion meter (Figure 1) displayed on a robot’s “chest”. Current research aims to discover how people expect robotic systems to associate preference and similarity across situations.



Figure 1. a) ISAC and b) Two-Dimensional Graphic Display of “Excitement”


Related publications include “Thinking About Thinking in Computers, Robots, and People”.

The work done by the Vanderbilt CIS mobile robots lab for the Intentional Vision in Humans and Robots project has focused on analyzing human-subject data for use in robotic vision applications. One goal is to use the significant moments (breakpoints) that viewers specify in task videos to train the robotic system to detect similar breakpoints in new task videos. The lab uses a high-dimensional approximate nearest-neighbor system to segment the video stream; multiple characteristics are then extracted from the segmented stream and analyzed to detect correlations with the viewer-specified breakpoints. Using the most highly correlated features, the robotic system then specifies significant breakpoints within a new video stream, and its choices are tested against those of the viewers. This process is being applied to two sets of videos, each containing 10 tasks performed by 10 different subjects, with breakpoints recorded from 10 different viewers for each task. The robotics researchers are currently analyzing the data and developing results from the robotic system.
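
To make the pipeline concrete, the sketch below flags a candidate breakpoint whenever a frame’s feature vector is far from every frame seen earlier, then scores the candidates against viewer-specified breakpoints. This is a minimal sketch under stated assumptions, not the project’s implementation: the feature vectors are synthetic, the brute-force nearest-neighbor scan stands in for the lab’s high-dimensional approximate nearest-neighbor system, and the threshold and tolerance values are illustrative.

```python
# Minimal sketch: nearest-neighbor novelty detection over per-frame
# feature vectors, with agreement scoring against viewer breakpoints.
# The brute-force scan below stands in for an approximate NN index;
# feature values, threshold, and tolerance are illustrative assumptions.
import numpy as np

def segment_boundaries(features, threshold):
    """Mark frame i as a boundary when it is far from every earlier frame."""
    boundaries = []
    for i in range(1, len(features)):
        # Distance to the nearest previously seen frame (linear scan here;
        # an approximate nearest-neighbor index would replace this).
        d = np.linalg.norm(features[:i] - features[i], axis=1).min()
        if d > threshold:
            boundaries.append(i)
    return boundaries

def breakpoint_agreement(candidates, viewer_breaks, tolerance=15):
    """Fraction of viewer breakpoints matched within `tolerance` frames."""
    hits = sum(any(abs(c - v) <= tolerance for c in candidates)
               for v in viewer_breaks)
    return hits / len(viewer_breaks) if viewer_breaks else 0.0

# Toy run: 300 frames of 64-D features with an abrupt change at frame 150.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (150, 64)),
                   rng.normal(5, 1, (150, 64))])
cands = segment_boundaries(feats, threshold=20.0)
print(cands)                               # expected: [150]
print(breakpoint_agreement(cands, [150]))  # expected: 1.0
```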


Another goal is to use the viewer-specified breakpoints to determine which moments of a video to emphasize. This is done by taking the complete video of a task and either blurring it or blacking it out everywhere except around the breakpoint moments. These modified videos are then shown to other viewers to determine how well they can identify the task taking place. The algorithms to create these videos are complete and example videos have been produced; the current work is to generate the remaining videos and test them with viewers to see whether the tasks can still be identified.
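
A minimal sketch of this masking step is shown below, using OpenCV. It assumes breakpoints are given as frame indices; the window size, blur kernel, output codec, and file names are illustrative assumptions rather than the project’s actual parameters.

```python
# Minimal sketch: degrade every frame that is not near a viewer-specified
# breakpoint, by blurring or blacking it out. Window size, blur kernel,
# codec, and file names are illustrative assumptions.
import cv2

def mask_video(src, dst, breakpoints, window=15, mode="blur"):
    """Copy `src` to `dst`, degrading frames farther than `window`
    frames from every breakpoint."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        near_break = any(abs(i - b) <= window for b in breakpoints)
        if not near_break:
            if mode == "blur":
                frame = cv2.GaussianBlur(frame, (51, 51), 0)  # heavy blur
            else:
                frame[:] = 0                                  # black out
        out.write(frame)
        i += 1
    cap.release()
    out.release()

# Example (hypothetical file names and breakpoint frames):
# mask_video("task.mp4", "task_masked.mp4", [120, 340], mode="blur")
```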


Other research performed under this project covers Object Recognition using Fuzzy Membership Rules.