Animating Humans from Static Images via an Entirely Image-Based Approach

Lead Research Organisation: University of Bedfordshire
Department Name: Computing and Information

Abstract

Images and videos hold great promise for figure animation: an entirely image/video-based approach would allow us to achieve high realism by directly utilising real images and videos. Unfortunately, making effective use of real-world images and videos is not a simple task, and it is very difficult to reconstruct arbitrary 3D views of human motion. Current Image/Video-Based Rendering (IVBR) achieves this by using human geometry that is either captured with 3D scanners or multi-camera systems or adapted from generic models, which involves costly resources. In fact, the human brain has a strong in-built capacity to imagine motion from static objects. Given a few images of a human motion, we can easily interpret them by envisaging the movement in our mind, without the need for any geometric information. However, existing computing technology still largely falls short of such a capability. Motivated by this observation, the proposed research will take a highly speculative adventure, exploring novel techniques to equip computers with such an ability. Generally speaking, we shall look into the feasibility of bringing human characters to life from their static images, allowing arbitrary views of their movement to be reconstructed directly from a few key images without requiring geometric models. While we target humans here, the methodology examined is applicable to a broad range of articulated and non-articulated subjects. It will go beyond the current form of IVBR, which was mainly designed for objects with fixed shapes, and will aim to achieve what is traditionally feasible only with the assistance of geometric models. It could lead to an alternative that is fundamentally different from all current techniques.

This feasibility study will concentrate on the most fundamental issue of the entirely image-based approach: View Reconstruction for Humans (VRH), i.e. whether we can create images of a human movement under arbitrary viewpoints from just a few static images. To test the idea without losing generality, many of the datasets used in our experiments will be generated by computer. Once a solution to VRH is found, it will open the door to further investigation using real captured images for training, and to work on other important issues concerning control, data organisation and compression, image composition, and hair and cloth motion in follow-on projects. To allow this feasibility study to be completed in a short period, we have designed a detailed research route. A learning-based approach will be taken to build statistical models of image sequences of human motion through training on existing examples. Such models will then be used to construct new sequences of human motion. While there are many potential ways to provide effective user control, in order to stay focused on VRH we shall take the most straightforward control strategy, using a selected number of images to indicate the key postures of the actor over time. This is analogous to the key-frame control strategy widely adopted in animation.

This research also has strong commercial potential in a broad range of entertainment-related businesses, in areas such as image/video editing, computer games and the film industry. These industries have a major presence in the UK and generate significant global income. Our industrial contacts at Antics Technologies and Cinesite will be actively involved. Cinesite is one of the world's largest companies in computer visual effects and post-production, while Antics Technologies provides revolutionary software for full computer animation with customers worldwide. They have recognised the potential market value of this research and will provide strong support through consultancy, evaluation and exploitation. Antics will provide its latest animation software release for this research at no cost.
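To illustrate the general shape of such a learn-then-synthesise, geometry-free pipeline, the sketch below builds a simple statistical (PCA) model from example motion images and generates in-between frames by blending two key images in the learned subspace. This is only a hedged illustration, not the project's proposed method; the `frames` array, the use of scikit-learn's PCA, and the linear blending are assumptions introduced here for demonstration.

```python
# Minimal sketch (illustrative only): a PCA "statistical model" over example
# motion images, with new frames synthesised by interpolating between two
# user-chosen key images in the learned subspace.
# Assumes a hypothetical NumPy array `frames` of shape (n_images, height, width)
# holding greyscale renders of a human motion from varying viewpoints.

import numpy as np
from sklearn.decomposition import PCA

def build_model(frames, n_components=20):
    """Learn a low-dimensional linear model of the example images."""
    n, h, w = frames.shape
    X = frames.reshape(n, h * w).astype(np.float64)
    model = PCA(n_components=n_components)
    model.fit(X)
    return model, (h, w)

def interpolate_views(model, shape, key_a, key_b, n_steps=10):
    """Blend two key images in PCA space to synthesise in-between frames,
    loosely analogous to key-frame control with images as the keys."""
    h, w = shape
    za = model.transform(key_a.reshape(1, -1).astype(np.float64))
    zb = model.transform(key_b.reshape(1, -1).astype(np.float64))
    out = []
    for t in np.linspace(0.0, 1.0, n_steps):
        z = (1.0 - t) * za + t * zb
        out.append(model.inverse_transform(z).reshape(h, w))
    return out

# Usage (with the hypothetical `frames` array):
# model, shape = build_model(frames)
# sequence = interpolate_views(model, shape, frames[0].ravel(), frames[-1].ravel())
```

A real solution to VRH would require a far richer model of viewpoint and posture variation than linear blending; the sketch only conveys the idea of training a statistical model on example images and driving synthesis with a few key images.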
