Adobe just launched its new genAI Firefly Video Model in public beta

Nearly a year after Adobe first teased video AI features, the company is finally bringing its new video AI model to market.

Today, the company is launching its Firefly Video Model in public beta. The model arrives alongside a new Firefly web application, which essentially gathers all of Adobe’s AI tools, including existing features like Text to Image and Generative Fill, under one roof. Users can access the Firefly web app through two subscription tiers, Firefly Standard and Firefly Pro, which cost $9.99 and $29.99 per month, respectively.

What is Adobe’s new Firefly Video Model?

Firefly Video Model is Adobe’s answer to existing video models like OpenAI’s Sora and Meta’s new Movie Gen. Using the company’s new suite of tools, creators can turn a written prompt into a video clip, convert an existing image into a video, and even translate audio and video into multiple languages.

Adobe’s video AI capabilities are late to market, but that’s par for the course for a company that got into generative AI nearly a year after its main competitors did in 2022. Adobe has set itself apart in the AI space with stringent IP protections (it trains Firefly only on licensed content and bills its new video model as the industry’s “first commercially safe” video AI) and by making significant improvements to its new features over time. It remains to be seen whether Firefly Video Model will follow a similar upward trajectory.

What can it do?

Text to Video

Firefly’s Text to Video feature is most comparable to OpenAI’s Sora. Users enter a prompt in a text box, and the model converts it into a five-second video clip. The feature incorporates Adobe’s signature easy-to-follow UI, with drop-down menus that let creatives tweak aspects of the output like shot size, camera angle, and motion.
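To make those controls concrete, here is a minimal sketch of what a text-to-video request could look like programmatically. The endpoint URL, parameter names, and response handling below are all hypothetical illustrations modeled on the UI controls described above (prompt, shot size, camera angle, motion, five-second output); they are not a documented Adobe Firefly API, which the public beta exposes only through its web app.

```python
import requests  # assumes the `requests` package is installed

# Hypothetical endpoint for illustration only; Adobe has not published
# a text-to-video API endpoint in this announcement.
FIREFLY_VIDEO_ENDPOINT = "https://example.adobe.io/v1/text-to-video"


def generate_clip(prompt: str, api_key: str) -> bytes:
    """Request a five-second clip from a hypothetical text-to-video endpoint."""
    payload = {
        "prompt": prompt,
        # The web UI's drop-down controls, expressed as request parameters
        # (names are invented for this sketch):
        "shot_size": "wide",        # e.g. close-up, medium, wide
        "camera_angle": "low",      # e.g. eye-level, low, aerial
        "motion": "slow_pan",       # e.g. static, slow_pan, handheld
        "duration_seconds": 5,      # the beta generates five-second clips
    }
    response = requests.post(
        FIREFLY_VIDEO_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=120,
    )
    response.raise_for_status()
    return response.content  # raw video bytes in this sketch


if __name__ == "__main__":
    clip = generate_clip("an astronaut flipping a switch", api_key="YOUR_KEY")
    with open("clip.mp4", "wb") as f:
        f.write(clip)
```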

Image to Video

Adobe is positioning its Image to Video feature as a kind of brainstorming tool for video editors. It’s similar to Text to Video, except the user can input an image alongside a written prompt to bring a specific frame to life.

In a demo video shared by the company, an editor takes a still frame of an astronaut flipping a switch and asks Image to Video to create a shot of the astronaut unplugging a cord instead. It’s an example of a quick edit that could help an editor convey a project’s intended mood to supervisors, the editor in the demo says.

Translate Video

Translate Video, available in 20 languages, is Adobe’s offering to help creators cut down on outside translation and dubbing services. Per a press release, “With voice, tone, cadence and acoustic match when translating video content into different languages, creators can [spend] less time on dubbing performance and audio mixing.”

Right now, Firefly Video Model isn’t especially groundbreaking, but it will help plenty of Adobe creators streamline their production processes without turning to an outside video AI application. That matters at a moment when just about every design platform wants to be the only platform creatives need.

In an interview with Fast Company back in September, Adobe CTO Ely Greenfield noted that for a generative AI tool that produces common stock images to make it into an Adobe product, the result should be acceptable 10 out of 10 times. However, he added, for results with more specificity, “getting it right 1/10 times is still a huge savings. It can be a little frustrating in the moment, but if we can give people good content 1/10 times that saves them from going back to reshoot something on deadline; that’s incredibly valuable.”

As Adobe continues to iterate on its Firefly Video Model, that success rate is bound to go up.
