🌈 MoG: Motion-Aware Generative Frame Interpolation

¹State Key Laboratory for Novel Software Technology, Nanjing University · ²Platform and Content Group (PCG), Tencent · ³Shanghai AI Lab

🌱 Introduction to MoG

MoG is a generative video frame interpolation (VFI) model that synthesizes intermediate frames between two input frames.

MoG is the first generative VFI method to explicitly incorporate motion guidance between the input frames, enhancing the motion awareness of the generative model. We show that the intermediate flow produced by flow-based VFI methods serves as effective motion guidance, and we propose a simple yet efficient way to integrate this prior into the network. As a result, MoG significantly outperforms existing open-source generative VFI methods in both real-world and animated scenarios.
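MoG's exact guidance-injection mechanism is not reproduced here; as a rough numpy sketch of the underlying idea, the snippet below builds a linear-motion intermediate flow from a frame-0-to-frame-1 flow field and backward-warps frame 0 toward time t to obtain a motion-guided estimate of the middle frame. The helper names (`backward_warp`, `intermediate_flow`) and the nearest-neighbour sampling are illustrative assumptions, not the paper's implementation (which would use a learned flow estimator and bilinear sampling).

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at positions offset by flow (nearest-neighbour, border-clamped).
    img: (H, W) array; flow: (H, W, 2) array of per-pixel (dx, dy) offsets."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[src_y, src_x]

def intermediate_flow(flow_0to1, t):
    """Linear-motion approximation of the flow from time t back to frame 0."""
    return -t * flow_0to1

# Toy example: a single bright pixel moving 4 px to the right between frames.
frame0 = np.zeros((8, 8)); frame0[4, 2] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 0] = 4.0   # uniform rightward motion
ft = intermediate_flow(flow, 0.5)                # flow from t = 0.5 back to frame 0
guidance = backward_warp(frame0, ft)             # warped guess of the middle frame
# The pixel lands at x = 4, halfway along its motion path.
```

In a generative VFI pipeline, such a warped estimate (or the intermediate flow itself) would be fed to the diffusion backbone as an extra conditioning signal rather than used as the final output.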



[Pipeline figure]


🎬 Demos produced by our MoG

[Demo grid: input frame pairs alongside MoG interpolation results]



💥 Comparisons with existing generative VFI methods

Real-world scenes

[Comparison grid: Input frames · GI · DynamiCrafter · MoG (Ours)]

Animated scenes

[Comparison grid: Input frames · GI · ToonCrafter · MoG (Ours)]