Adobe has officially entered the generative AI video scene with its Firefly Video Model, now integrated directly into Premiere Pro. This innovative toolset, teased by Adobe earlier this year, is finally here, giving video creators new ways to extend footage, generate video from still images, and even create unique clips from text prompts.
New AI Video Features Inside Premiere Pro
Leading the charge is the “Generative Extend” tool, now available in beta within Premiere Pro. It lets users add extra footage to the beginning or end of a clip and make subtle in-frame adjustments, like fixing eye-line shifts or compensating for unintended movements. With these AI capabilities, small tweaks are easier than ever, eliminating the need for reshoots to correct minor issues. Currently, the feature supports extensions of up to two seconds at either 720p or 1080p, both at 24 frames per second, making it ideal for fine-tuning edits.
Generative Extend also offers a limited audio extension capability, smoothing edits by adding up to ten seconds of ambient sounds or room tone—though it doesn’t extend spoken dialogue or music just yet.
Here’s a demo of the Generative Extend tool in action:
Adobe Firefly’s Text-to-Video and Image-to-Video: Now on the Web
In addition to the Premiere Pro integration, Adobe is releasing two other AI-powered video creation tools, Text-to-Video and Image-to-Video, as part of a public beta on the Firefly web app. These tools are designed to take creativity to the next level, allowing users to generate short video clips from text or images.
The Text-to-Video tool functions similarly to popular video generators like Runway and OpenAI’s Sora, allowing users to create unique video content by simply typing a description. It can produce a variety of visual styles, including realistic film, 3D animation, and stop-motion. Users can even tweak camera angles, motion, and distance for added creative control.
Meanwhile, Image-to-Video lets users upload an image alongside a text prompt for even more customization, opening up exciting possibilities for generating b-roll or visualizing potential reshoots. However, clips are capped at five seconds and limited to 720p at 24 frames per second, so the tools aren’t yet ready to replace full-scale video production.
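For readers curious how those inputs fit together, here is a minimal, purely illustrative sketch of how a request to a video-generation service of this kind might be structured. Adobe has not published an API for the Firefly video beta, so the endpoint URL, field names, and authentication below are hypothetical placeholders; only the parameter values (prompt, visual style, camera controls, the five-second cap, 720p, 24 fps) reflect what the announcement describes.

```python
import requests  # standard HTTP client; used here purely for illustration

# NOTE: Adobe has not published a public API schema for the Firefly video beta.
# The endpoint, field names, and auth header are hypothetical placeholders meant
# only to illustrate the kinds of inputs the web app exposes.
FIREFLY_VIDEO_ENDPOINT = "https://firefly.example.adobe.com/v1/video/generate"  # placeholder

payload = {
    "prompt": "A misty pine forest at dawn, slow dolly shot through the trees",
    "style": "realistic film",     # other styles in the beta: 3D animation, stop-motion
    "camera": {                    # creative controls mentioned in the announcement
        "angle": "low",
        "motion": "dolly-in",
        "distance": "wide",
    },
    "reference_image_url": None,   # supply an image here to mimic Image-to-Video
    "duration_seconds": 5,         # current cap for generated clips
    "resolution": "720p",
    "frame_rate": 24,
}

response = requests.post(
    FIREFLY_VIDEO_ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder credential
    timeout=120,
)
response.raise_for_status()
print(response.json())  # e.g., a job ID or a link to the generated clip
```

In practice, you’d use the Firefly web app’s interface rather than code; the sketch simply makes the announced constraints (five seconds, 720p, 24 fps) and creative controls concrete.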
Adobe vs. OpenAI: How Do They Stack Up?
While Adobe’s tools are already in public beta, OpenAI’s Sora promises longer clips (up to a minute) with a focus on quality and accuracy, but it hasn’t been released to the public yet. Adobe’s Firefly Video Model may have its limitations, but it represents a big leap forward in AI-powered video editing for everyday creatives.
With these new tools, Adobe is proving that AI video editing is no longer a thing of the future—it’s here, and it’s ready to transform how we create and edit videos. Whether you’re looking to extend footage or generate entirely new clips from scratch, Adobe’s Firefly Video Model is pushing the boundaries of what’s possible in Premiere Pro and beyond.