Image analysis and graphics synthesis can be achieved with learning techniques that use direct image examples, without physically based 3D models. We have developed novel techniques based on computer vision and on neural network algorithms. The mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique called an analysis network. The inverse mapping, from input "pose" and "expression" parameters to output color images, can be synthesized from a small set of example images and used to produce new images under real-time control by a similar learning network, called in this case a synthesis network. The two networks rely on (a) a computer vision optical flow algorithm that matches corresponding pixels across pairs of grey-level images and thereby "vectorizes" them, and (b) a class of learning techniques that approximate the nonlinear mapping between the vector input and the vector output. We will design a real-time architecture implementing the two networks in software on a standard digital platform, enhanced by ASIC VLSI chips for the real-time computation of the optical flow and of the neural network mapping.

Anticipated Benefits: The analysis and synthesis networks described here have several applications in computer graphics, special effects, interactive multimedia, and object recognition systems. The analysis network can be regarded as a passive and trainable universal interface: a control device that may serve as a generalized computer mouse in place of "gloves", "body suits", and joysticks. The synthesis network is an unconventional and novel approach to computer graphics. Together, the two techniques can support human-computer interfaces and interactive simulations that are model-based in a very unconventional way.
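The abstract does not specify which function approximation technique the synthesis network uses; one common choice for this kind of example-based mapping is a radial basis function (RBF) network, which interpolates among the vectorized example images as a function of the pose/expression parameters. The following is a minimal sketch under that assumption, with synthetic toy data standing in for real vectorized images (all function names and parameter values are illustrative, not from the original):

```python
import numpy as np

def rbf_synthesis_fit(params, images, sigma=1.0):
    """Fit an RBF 'synthesis network': learn a mapping from pose/expression
    parameter vectors to vectorized example images.
    params: (n_examples, n_params); images: (n_examples, n_pixels).
    Returns the RBF coefficient matrix used for later synthesis."""
    # Gaussian kernel matrix between all pairs of example parameter vectors
    d2 = ((params[:, None, :] - params[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))
    # Solve G @ C = images for the coefficients (one column per pixel)
    return np.linalg.solve(G, images)

def rbf_synthesize(p, params, C, sigma=1.0):
    """Synthesize a vectorized image for a (possibly novel) parameter vector p."""
    d2 = ((params - p) ** 2).sum(-1)
    g = np.exp(-d2 / (2 * sigma ** 2))
    return g @ C

# Toy example: four 'images' (8 pixels each) indexed by a 2-D pose parameter
rng = np.random.default_rng(0)
params = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
images = rng.random((4, 8))
C = rbf_synthesis_fit(params, images)

# At an example parameter vector the interpolating fit reproduces that example
recon = rbf_synthesize(params[1], params, C)
print(np.allclose(recon, images[1]))  # True

# At a novel parameter vector it produces a smoothly blended new image
novel = rbf_synthesize(np.array([0.5, 0.5]), params, C)
```

In the actual system the rows of `images` would be the optical-flow "vectorized" representations of the grey-level examples rather than raw pixels, and the analysis network would be the analogous regression in the opposite direction, from vectorized image to parameter vector.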