Automatic Dynamic Texture Transformation Based on New Motion Coherence Metric

Kanoksak Wattanachote and Timothy K. Shih, Member, IEEE

Department of Computer Science and Information Engineering
National Central University, Taoyuan, Taiwan


ABSTRACT

Changing dynamic texture appearances can create a new look in both the motion and the color appearance of a video. Dynamic textures with sophisticated shape and motion are difficult to represent with physics-based models and hard to predict, especially when transforming them into a new motion texture. We propose a dynamic texture transformation algorithm for video sequences based on the motion coherence of patches, and we have successfully applied the technology to many special-effect videos using the interactive tool we developed. In this paper, we address the issues of 3D patch creation, motion coherence analysis, and patch matching for dynamic texture transformation. The main contribution is twofold. First, we propose a new metric for evaluating motion coherence, with solid tests to justify its usefulness (it is close to human visual perception). Second, the proposed algorithm for automatic dynamic texture transformation only requires users to segment textures on the first frame, using an optional threshold to identify the texture area; the rest of the process is completed automatically. The experimental results show that the motion coherence index effectively finds coherent motion regions for patch matching and transformation.
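The abstract does not define the metric itself, but one common way to score how coherently a patch moves is the mean resultant length of its motion directions: normalize each motion vector, average the unit vectors, and take the norm. The sketch below is illustrative only (it is not the paper's metric); the function name and the assumption that per-pixel motion vectors are already available are ours.

```python
import math

def coherence_index(vectors):
    """Illustrative motion coherence score for a patch of 2-D motion vectors.

    Returns the mean resultant length of the vector directions:
    1.0 when all vectors point the same way, near 0.0 when the
    directions are scattered or cancel out.
    """
    units = []
    for vx, vy in vectors:
        mag = math.hypot(vx, vy)
        if mag > 1e-9:  # skip (near-)zero vectors; their direction is undefined
            units.append((vx / mag, vy / mag))
    if not units:
        return 0.0
    mean_x = sum(u[0] for u in units) / len(units)
    mean_y = sum(u[1] for u in units) / len(units)
    return math.hypot(mean_x, mean_y)

# A patch whose vectors share one direction scores high ...
coherent = [(1.0, 0.0), (2.0, 0.1), (1.5, -0.1)]
# ... while a patch whose directions cancel scores low.
incoherent = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
```

On these inputs the coherent patch scores above 0.99 and the incoherent one scores 0.0, which matches the intuition behind the high/low coherence examples in Part 2.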

EXPERIMENTAL RESULTS

We separate the experimental results into five parts:
1. Automatic segmentation results (.mov video files).
2. Motion coherence evaluation results (.mov video files).
3. Dynamic texture transformation results.
4. Demonstration of elapsed CPU time monitoring (.mov video files).
5. Dynamic texture transformation and evaluation.

Part 1: Automatic Segmentation


(a). Automatic fire segmentation.
 
(b). Automatic smoke segmentation.

(c). Automatic waterfall segmentation.
I. Demonstration of automatic segmentation by the system (screen capture 1, tested on 2 videos). (Click)
II. Demonstration of automatic segmentation by the system (screen capture 2, tested on 2 videos). (Click)
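The paper describes segmentation as thresholding the texture area on the first frame, and item V in Part 3 mentions connected-component filtering. A minimal sketch combining the two on a grayscale frame is shown below; the function name, grid representation, and the choice of keeping only the largest 4-connected component are our assumptions, not the authors' implementation.

```python
from collections import deque

def segment_first_frame(frame, threshold):
    """Threshold a grayscale frame (list of rows), then keep only the
    largest 4-connected component as a simple connected-component filter."""
    h, w = len(frame), len(frame[0])
    mask = [[frame[y][x] >= threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # Flood-fill one connected component with BFS.
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    keep = set(best)
    return [[(y, x) in keep for x in range(w)] for y in range(h)]

# Toy frame: a bright 2x2 blob plus one isolated bright pixel.
frame = [
    [200, 210,  10,  10],
    [205, 220,  10, 180],
    [ 10,  10,  10,  10],
]
result = segment_first_frame(frame, 128)
```

Here the 2x2 blob survives as the texture mask while the single bright pixel at (1, 3) is filtered out as a small spurious component.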

Part 2: Motion Coherence Evaluation

High Coherence Index


(a). Swans.

(b). Driving test.

(c). Jet plane.

(d). Highway.

(e). Rally cars.

Low Coherence Index


(a). Two-lane road.

(b). Driving test.

(c). Dancing fire.

(d). Music video.

(e). Flags.

Part 3: Dynamic Texture Transformation

For a better view, we recommend downloading the results and watching them on a personal computer rather than viewing them online.
Figures (a)-(y) and (A)-(B) demonstrate the experimental results of dynamic texture transformation. The first two columns show the original video clips; the last column shows the new look of the dynamic texture in the video.
Figures (z-1)-(z-3) demonstrate the transformation at the original resolution of the target videos; the results are shown in (z-1)3-(z-3)3.

III. Demonstration of dynamic texture transformation 1 (with automatic dynamic texture segmentation). (Click)
IV. Demonstration of dynamic texture transformation 2 (with both automatic and user guidance for dynamic texture segmentation). (Click)
V. Demonstration of dynamic texture transformation 3 (with user guidance (connected-component filtering) for dynamic texture segmentation). (Click)
For more information and access to any part of the source code, please contact kaskonak@hotmail.com for authorization.
Available functions

Part 4: Elapsed CPU Time Monitoring


(a). Fire transformation.

(b). Smoke transformation.

(c). Waterfall transformation.
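The videos above monitor elapsed CPU time during the fire, smoke, and waterfall transformations. As a minimal sketch of such instrumentation (not the authors' tooling), per-frame CPU time can be recorded in Python with `time.process_time`, which counts processor time rather than wall-clock time; the helper name and the per-frame `transform` callback are illustrative.

```python
import time

def transform_with_timing(frames, transform):
    """Apply `transform` to each frame, recording the CPU time spent on it.

    time.process_time() excludes time spent sleeping or waiting on I/O,
    so the recorded values reflect actual processing cost per frame.
    """
    results, cpu_times = [], []
    for frame in frames:
        start = time.process_time()
        results.append(transform(frame))
        cpu_times.append(time.process_time() - start)
    return results, cpu_times

# Toy usage: "transform" a frame by doubling its pixel values.
frames = [[1, 2], [3, 4]]
results, cpu_times = transform_with_timing(frames, lambda f: [v * 2 for v in f])
```

Summing `cpu_times` gives the total CPU cost of a clip, which is the quantity the monitoring videos in this part visualize over time.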

Part 5: Dynamic Texture Transformation and Evaluation