Verbs and Adverbs: Multidimensional Motion Interpolation
Verbs and Adverbs is a technique developed by Charles Rose, Michael F. Cohen, and Bobby Bodenheimer in 1998 to create believable animation in humanoid characters. It addresses the difficulty of authoring believable motion by interpolating between a few example motions, or verbs, which are parameterized by a continuum of adverbs. One of the examples given in their paper, published in the September/October 1998 issue of "IEEE Computer Graphics and Applications" (pp. 32-40), is that of walking (shown on the right). In this example, walking is described over two axes: how happy the character is and how knowledgeable the character is, with happiness increasing along the vertical axis and knowledge increasing from left to right. Clearly, a character's happiness and knowledge cannot be described on a discrete scale, and it is not practical to cover the range of human emotion by capturing examples along these scales, as a reasonable representation would require thousands of example motions.
The "learning" part of the Verbs and Adverbs algorithm consists of four major parts, which are as follows:
- Motion Segmentation - Before it can be normalized, each example motion must be divided into segments delimited by KeyTimes, which mark important points in the motion data such as its beginning and end as well as major changes in the velocity of the motion.
- Time Normalization - Once the KeyTimes have been determined, the data must be normalized so that every motion spans the interval from zero to one (canonical time).
- Least Squares Approximation - Using the Adverb and KeyTime definitions of the example motions, the coefficients of a simple linear polynomial are calculated. This captures the general linear trend of the data, which the Radial Basis Function then refines.
- Radial Basis Function - A matrix of exponential basis functions is evaluated at the example adverb points. The residual error from the Least Squares Approximation is "divided" by this matrix (that is, the corresponding linear system is solved) to obtain the Radial Basis Function coefficients.
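The training steps above can be sketched in a few lines of NumPy. Everything here is an illustrative assumption, not the project's actual code: the adverb values, the single KeyTime per example, and the use of a Gaussian kernel as the exponential basis.

```python
import numpy as np

# Hypothetical data: 4 example motions, 2 adverbs (happiness, knowledge),
# each example contributing one KeyTime value in normalized time.
adverbs = np.array([[0.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])
key_times = np.array([0.40, 0.55, 0.45, 0.70])

# Least Squares Approximation: fit a0 + a1*x1 + a2*x2 over the adverb space.
A = np.hstack([np.ones((len(adverbs), 1)), adverbs])
linear_coeffs, *_ = np.linalg.lstsq(A, key_times, rcond=None)

# Radial Basis Function: one exponential basis centered on each example.
def rbf_matrix(points, centers, width=1.0):
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

residual = key_times - A @ linear_coeffs
Phi = rbf_matrix(adverbs, adverbs)
# "Dividing" the residual by the basis matrix means solving Phi @ w = residual.
rbf_coeffs = np.linalg.solve(Phi, residual)

def predict_key_time(adverb):
    """Linear trend plus RBF correction at a new adverb setting."""
    adverb = np.atleast_2d(adverb)
    linear = np.hstack([np.ones((1, 1)), adverb]) @ linear_coeffs
    return (linear + rbf_matrix(adverb, adverbs) @ rbf_coeffs)[0]
```

Because the RBF term absorbs the least-squares residual exactly, evaluating at any example's adverb point reproduces that example's KeyTime, while intermediate adverb settings blend the examples smoothly.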
After the software has been "trained" on the example motions, the real-time KeyTimes of the new motion can be calculated from the coefficients produced in the steps above. While these new KeyTimes are not needed for the next few steps of the technique, they are required for the inverse time-warping which expands the new motion from canonical, or normalized, time back to real time.
By multiplying the desired real-time length of the new motion by the known hardware sampling rate (equivalently, dividing by the sampling period), we can calculate the number of data points required for the new motion. Once this is known, it is possible to calculate the normalized time index of the new motion. The next major problem is that the motion interpolation technique requires all of the example motions to have the same number of data points. This cannot be guaranteed, because the data is recorded from a teleoperated robot at a fixed sampling rate, so motions of different durations yield different numbers of samples. To resolve this, we make the example motions shorter or longer (in terms of data points): assuming that at a fast sampling rate the data remains relatively linear between samples, we "re-sample" the data using a simple constant-slope linear interpolation. Once the example motions are all the same length, a new motion can be interpolated using a method similar to that of the KeyTime interpolation. This algorithm consists of:
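The point-count calculation and the constant-slope re-sampling might look like the following sketch; the sampling rate, duration, and trajectory values are illustrative assumptions.

```python
import numpy as np

# Hypothetical numbers: a 1.8 s motion recorded at 100 Hz.
sampling_rate_hz = 100.0
duration_s = 1.8
n_points = int(round(duration_s * sampling_rate_hz))  # data points needed

# Re-sample an example motion to a target number of data points, assuming the
# signal stays roughly linear between consecutive samples.
def resample_motion(samples, target_length):
    old_t = np.linspace(0.0, 1.0, len(samples))
    new_t = np.linspace(0.0, 1.0, target_length)
    return np.interp(new_t, old_t, samples)  # constant-slope interpolation

motion = np.array([0.0, 0.5, 1.0, 0.5, 0.0])  # a 5-sample joint trajectory
resampled = resample_motion(motion, 9)        # stretched to 9 samples
```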
- Least Squares Approximation - The Adverb and DOF information for the example motions is used to calculate the coefficients of a simple linear polynomial, which approximates the general trend of the data.
- Radial Basis Function - The error from the least squares approximation is divided by a matrix of exponential functions in order to establish the coefficients of the radial basis function.
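Applied to whole trajectories rather than single KeyTimes, the same two-step fit can be sketched as below. The adverb values, the single degree of freedom, and the Gaussian kernel are all illustrative assumptions.

```python
import numpy as np

# Hypothetical data: 3 example motions over 1 adverb, each a trajectory of
# 5 samples for one degree of freedom, already equalized in normalized time.
adverbs = np.array([[0.0], [0.5], [1.0]])
motions = np.array([[0.0, 0.1, 0.2, 0.1, 0.0],
                    [0.0, 0.3, 0.5, 0.3, 0.0],
                    [0.0, 0.5, 1.0, 0.5, 0.0]])

# Least Squares Approximation: linear trend of every time sample vs. adverb.
A = np.hstack([np.ones((len(adverbs), 1)), adverbs])
linear_coeffs, *_ = np.linalg.lstsq(A, motions, rcond=None)   # shape (2, 5)

def gaussian_rbf(points, centers, width=1.0):
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

# Radial Basis Function: solve for coefficients that absorb the residual.
Phi = gaussian_rbf(adverbs, adverbs)
rbf_coeffs = np.linalg.solve(Phi, motions - A @ linear_coeffs)  # shape (3, 5)

def interpolate_motion(adverb):
    """Return a full normalized-time trajectory for a new adverb setting."""
    adverb = np.atleast_2d(adverb)
    linear = np.hstack([np.ones((1, 1)), adverb]) @ linear_coeffs
    return (linear + gaussian_rbf(adverb, adverbs) @ rbf_coeffs)[0]
```

At an example's own adverb value this reproduces that example's trajectory exactly; in between, it blends the examples per time sample.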
This process, similar to the KeyTime interpolation, "trains" the algorithm. The Adverb definition for the new motion is then plugged into the algorithm and the new motion data, in normalized time, is returned. An inverted time-warping algorithm is then applied which "stretches" the new motion to the desired real-time length. To date (7/10/03), most of our development has taken place in a humanoid simulator created in-house called PISAC. This has allowed for rapid development while avoiding many of the problems that can arise while working with hardware. We plan to begin porting the technique to the actual robot. Below is an animation of the simulator playing back both the recorded and interpolated motions.
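The inverse time-warping step described above might be sketched as a piecewise-linear mapping from normalized time to real time, pinned at the KeyTimes. The KeyTime values, sampling rate, and trajectory below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def inverse_time_warp(norm_motion, norm_key_times, real_key_times, sampling_rate_hz):
    """Stretch a normalized-time trajectory to real time via its KeyTimes."""
    n_points = int(round(real_key_times[-1] * sampling_rate_hz))
    real_t = np.linspace(real_key_times[0], real_key_times[-1], n_points)
    # Map each real-time sample back to its normalized time...
    norm_t = np.interp(real_t, real_key_times, norm_key_times)
    # ...then look up the motion value at that normalized time.
    sample_t = np.linspace(0.0, 1.0, len(norm_motion))
    return np.interp(norm_t, sample_t, norm_motion)

motion = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # normalized-time trajectory
stretched = inverse_time_warp(motion,
                              norm_key_times=np.array([0.0, 0.4, 1.0]),
                              real_key_times=np.array([0.0, 1.2, 2.0]),
                              sampling_rate_hz=50.0)
```

Because the warp is linear between KeyTimes, segments of the motion can be stretched by different amounts (here the first segment of the 2 s result is slowed more than the second).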