At Adobe MAX, Adobe announced several updates to its products. The most groundbreaking is undoubtedly the introduction of a video model for Firefly. In other words: just as you could already use generative AI to create still images in Photoshop, it is now possible to create videos in Premiere Pro. And it is available right now.
This “Firefly Video Model” is available today in the Premiere Pro beta and via the Firefly web application, and Adobe has shared examples of what it can achieve. In practice, it can extend existing clips, but also work from prompts: the AI generates a video from text or from an image. The potential use cases are enormous, raise ethical questions, and will undoubtedly shake up the market.
An ethical AI?
Adobe explains that its AI has been trained on “hundreds of millions of assets” and, as always, assures that it used royalty-free, licensed, or Adobe Stock content, offering an AI whose output can be used commercially without risk of copyright issues. Adobe stresses its commitment to responsibility, accountability, and transparency.
We remain cautious here: Adobe made similar promises in the past for still image generation, only for it to later emerge that the Adobe Stock content used for training included images generated by third-party AIs that did not respect copyright. Adobe continues to assert that this is not an issue.
We will seek further clarification from Adobe regarding the data used for this new video model.
It’s also worth noting that Adobe aims to avoid biases and problematic content: just like with generative AI in Photoshop, we can expect filters to prevent the creation of certain types of content.
What are the applications?
Regardless, this announcement brings Adobe’s vision to life: enabling its customers, particularly within Premiere Pro, to extend clips, refine transitions, and even create entire video sequences from scratch, whether realistic or not.
Adobe highlights that the clip extension feature also works on audio. Meanwhile, clip creation from prompts seems to offer quite precise control: users can specify the camera angle, focal length, camera movement, as well as provide details about tone and color grading. The precision of these controls will, of course, need to be evaluated in practice.
Adobe presents several use cases for clip generation from a prompt or an image: creating B-Roll from an image, generating graphic elements like a stylized title, and even producing 2D or 3D-style animations. In other words, Adobe aims to eliminate the need for traditional animation pipelines in specific use cases, allowing users to directly generate animated elements within Premiere. The company emphasizes the versatility of its generative AI, capable of handling both realistic and stylized elements.
This is certain to reignite debates around the use of generative AI. The quality of this new tool will obviously need to be tested in real-world scenarios, but as it stands, it seems clear that it could replace the use of stock footage, as well as the creation of some 2D/3D elements that were once outsourced to studios.
Since this AI will be available to all Premiere users, it is likely we will soon see the first commercials made with these features. Just as generative AI is already present in static ad campaigns and posters, it is clear that it will also be adopted for video content.