In parallel, generative AI research in computer vision (Ho et al., 2022; Brooks et al., 2024) has demonstrated text-to-video synthesis with increasing temporal coherence. While early models (Runway Gen-1, Pika Labs) produced short, surreal clips, newer systems (OpenAI’s Sora, Google’s Lumiere) achieve minute-long sequences with causal continuity. For Filma25, these models are not auxiliary tools but central production engines.

We conclude that Filma25 is not a prediction but a provocation. It maps a plausible trajectory from today’s AI-assisted editing (e.g., Adobe Firefly video) and crypto-funded indie films (e.g., “The Milk of Dreams,” 2024) toward a fully generative, decentralized, and dynamic cinema. Scholars and practitioners would do well to engage with its implications before the paradigm arrives unbidden.

This paper is a conceptual synthesis; no empirical data were collected. The term “Filma25” is used here as a theoretical construct; any resemblance to an existing trademark or product is coincidental.

References

Brooks, T., Holynski, A., & Efros, A. A. (2024). Video generation models as world simulators. OpenAI Technical Report.

Casetti, F. (2021). The relocation of cinema. In S. Denson & J. Leyda (Eds.), Post-cinema: Theorizing 21st-century film (pp. 42–59). Reframe Books.

Ho, J., Chan, W., Saharia, C., et al. (2022). Imagen Video: High definition video generation with diffusion models. arXiv:2210.02303.

Manovich, L. (2013). Software takes command. Bloomsbury.

Shaviro, S. (2016). Post-cinematic affect. In S. Denson & J. Leyda (Eds.), Post-cinema: Theorizing 21st-century film (pp. 289–308). Reframe Books.