Model-based Synthetic View Generation for Video Conferencing


This research concerns a model-based multi-view image generation system for video conferencing. The system assumes that a 3-D model of the person in front of the camera is available. During a video conference session, it extracts texture from the image sequence of the speaking person and maps it onto the static 3-D model. Since only incrementally updated texture information is transmitted during the session, the bandwidth requirement is very small. Experimental results indicate that the proposed system is promising for practical applications.
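The bandwidth saving comes from transmitting only the texture regions that change from frame to frame. The sketch below illustrates that idea in Python/NumPy on a flat texture map; the block size, threshold, and function names are illustrative assumptions, not the encoding used in the paper, which maps the extracted texture onto the 3-D head model rather than a flat image.

    import numpy as np

    BLOCK = 16       # block size in pixels (illustrative choice)
    THRESH = 10.0    # mean-absolute-difference threshold for resending a block

    def incremental_texture_update(prev_tex, new_tex, block=BLOCK, thresh=THRESH):
        """Compare the new texture map against the last transmitted one and
        return (updates, reconstruction), where updates holds only the
        blocks whose content changed enough to warrant retransmission."""
        updates = []
        recon = prev_tex.copy()
        h, w = new_tex.shape[:2]
        for r in range(0, h, block):
            for c in range(0, w, block):
                old = prev_tex[r:r+block, c:c+block].astype(float)
                new = new_tex[r:r+block, c:c+block]
                if np.mean(np.abs(new.astype(float) - old)) > thresh:
                    updates.append((r, c, new.copy()))   # "transmit" this block
                    recon[r:r+block, c:c+block] = new    # receiver-side patch
        return updates, recon

    # Toy example: two texture maps that differ only around one region,
    # e.g., the mouth area of a speaking person.
    rng = np.random.default_rng(0)
    tex0 = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    tex1 = tex0.copy()
    tex1[32:48, 64:80] = 255                 # one 16x16 region changed
    updates, recon = incremental_texture_update(tex0, tex1)
    print(f"{len(updates)} of {(128 // BLOCK) ** 2} blocks transmitted")
    assert np.array_equal(recon, tex1)       # receiver reconstructs the texture

In this toy run only 1 of 64 blocks is transmitted, which is the source of the small bandwidth requirement described above.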

This work was done in cooperation with the Telecommunications Institute, University of Erlangen-Nuremberg, Erlangen, Germany.

Reference:
    Chun-Jen Tsai, Peter Eisert, Bernd Girod, and Aggelos K. Katsaggelos, "Model-based Synthetic View Generation from a Monocular Video Sequence," Proceedings of the IEEE International Conference on Image Processing, Santa Barbara, CA, Oct. 1997, p. I-444.

tsai@ece.nwu.edu (Jan. 8, 1998)