In this section, we provide additional qualitative results of our method on video generation.
In this section, we provide video comparisons between our method and the baselines mentioned in the paper.
| SVD[2] 25 steps | SVD[2] 16 steps | SVD[2] 8 steps | AnimateLCM[1] 4 steps | Ours |
|---|---|---|---|---|
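As a point of reference for these step counts, the sketch below shows one plausible way to sample the SVD baseline at a given number of inference steps using the Hugging Face diffusers library. This is a minimal illustration, not our training or distillation code; the model ID, conditioning image path, and output filenames are assumptions made for the example.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the public SVD image-to-video checkpoint (assumed model ID).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("condition.png")  # hypothetical conditioning frame

# Sample the baseline at the step counts compared above.
for steps in (25, 16, 8):
    frames = pipe(image, num_inference_steps=steps, decode_chunk_size=8).frames[0]
    export_to_video(frames, f"svd_{steps}_steps.mp4", fps=7)
```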
In this section, we provide video results for the ablation study on P_{mean} and P_{std} mentioned in the paper.
| P_{mean} = -2 | P_{mean} = -1 | P_{mean} = 0 | P_{mean} = 1 |
|---|---|---|---|
| P_{std} = -1 | P_{std} = -1 | P_{std} = 1 | P_{std} = 1 |
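For context on what these parameters control, the sketch below draws training noise levels under the EDM convention, where the log of the noise level is Gaussian, ln(σ) ~ N(P_{mean}, P_{std}²). That our training follows this exact convention is an assumption here; the snippet only illustrates how shifting P_{mean} and P_{std} moves the distribution of sampled noise levels.

```python
import torch

def sample_noise_levels(p_mean: float, p_std: float, n: int = 10000) -> torch.Tensor:
    # EDM-style log-normal sampling (assumed convention): ln(sigma) ~ N(P_mean, P_std^2).
    return torch.exp(torch.randn(n) * p_std + p_mean)

# Larger P_mean shifts mass toward higher noise levels; |P_std| controls the spread.
for p_mean, p_std in [(-2, -1), (-1, -1), (0, 1), (1, 1)]:
    sigmas = sample_noise_levels(p_mean, p_std)
    print(f"P_mean={p_mean}, P_std={p_std}: median sigma ~ {sigmas.median().item():.3f}")
```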
In this section, we provide a screen recording demo showing the efficiency of our method.
[1] Wang, Fu-Yun, Zhaoyang Huang, Xiaoyu Shi, Weikang Bian, Guanglu Song, Yu Liu, and Hongsheng Li. "AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning." arXiv preprint arXiv:2402.00769 (2024).
[2] Blattmann, Andreas, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, et al. "Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets." arXiv preprint arXiv:2311.15127 (2023).