HomeAI News
Adobe's Firefly AI has another trick up its sleeve: video creation made easy, from footage to scripts

Hayo News
April 18th, 2023
With only simple text prompts, it is now even possible to create animations.

A month ago, Adobe announced Firefly, its foray into the field of generative artificial intelligence. Initially, Firefly's focus was on generating commercially safe images, but now the company is pushing its technology beyond still images. As the company announced today, it will soon be bringing Firefly to its Creative Cloud video and audio apps.

To clarify, you cannot currently create custom videos with Firefly. Instead, the focus is on making it easier for anyone to edit video, grade color, add music and sound effects, and create titles with animated fonts, graphics, and logos in just a few words. Firefly also promises to automatically convert scripts into storyboards and preview images, and to recommend B-roll for more dynamic videos.

Perhaps the most notable of these new features is that you can describe, in a few simple words, how a video's color grading should look, such as "golden hour" or "brighten faces."

As video production moves towards shorter, digestible forms, generative AI offers a lot of potential for workflow and creative development.

This is where Adobe Firefly promises to help content creators streamline the process and work faster, smarter and ultimately more creatively.

In its NAB presentation, Adobe highlighted several applications of Firefly for video editing:

  • Text-to-color enhancement: Change the mood of a scene by altering its color scheme, time of day, or season. With a simple prompt such as "make this scene feel warm and welcoming," the gap between the idea and the final product all but disappears.
  • Advanced Music and Sound Effects: Royalty-free custom sounds and music can be easily generated to reflect a specific emotion or scene, for both temp and final tracks.
  • Eye-catching fonts, text effects, graphics, and logos: With just a few simple words and minutes, creators can generate subtitles, logos, titles, and custom contextual animations for videos.
  • Powerful scripting and B-roll capabilities: With AI analysis of scripts, storyboards and preview images can be created automatically and B-roll clips suggested for drafts or final cuts, greatly accelerating pre-production, production, and post-production workflows.
  • Creative assistant and co-pilot: With a personalized, AI-powered "how-to" guide, users can develop new skills and accelerate the process from initial idea to creation and editing.

We all know that color grading is an art, and most people are not good at it. Now, anyone can describe the desired mood and tone of a scene, and Adobe's video tools will follow their description to set it up. In many ways, this democratization of skills is at the heart of Adobe's use of Firefly in its creative tools.
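Adobe has not published how Firefly maps a text prompt to a grade, but the basic idea, translating a mood keyword into per-channel color adjustments, can be sketched in a few lines. The keyword-to-gain table below is purely a hypothetical illustration, not Firefly's actual method:

```python
import numpy as np

# Hypothetical mapping from a mood keyword to simple per-channel RGB gains.
# This only illustrates what a "text-to-color" prompt might do numerically;
# it is not Adobe's implementation.
MOOD_GAINS = {
    "warm": (1.15, 1.00, 0.85),  # boost red, cut blue: a "golden hour" feel
    "cool": (0.85, 1.00, 1.15),  # the opposite shift
}

def grade(frame: np.ndarray, mood: str) -> np.ndarray:
    """Apply a crude mood-based grade to an RGB frame (H x W x 3, uint8)."""
    gains = np.array(MOOD_GAINS[mood], dtype=np.float32)
    graded = frame.astype(np.float32) * gains
    return np.clip(graded, 0, 255).astype(np.uint8)

# A flat mid-gray frame graded "warm": the red channel rises, blue falls.
frame = np.full((2, 2, 3), 128, dtype=np.uint8)
warm = grade(frame, "warm")
print(warm[0, 0])  # [147 128 108]
```

A real system would of course interpret free-form language and apply far more sophisticated, scene-aware transforms; the point is only that a mood description ultimately becomes a concrete color operation.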

Other new AI-based features include generating custom sounds and music. Firefly will also help editors create subtitles, logos, and credits by letting them describe the desired appearance. These are specialized skills that normally require some familiarity with software such as After Effects and Premiere Pro.

The real game-changer, however, is that Adobe also plans to use Firefly to read scripts and automatically generate storyboards and preview images. This could be a huge time saver, and don't be surprised if you see these videos popping up on TikTok.

It's worth noting that, so far, we've only seen demos of these features from Adobe. It remains to be seen how they perform in practice. Adobe's goal is to ensure that all of its generative AI tools are safe to use in commercial settings. For its image generator, that means training only on images that are in the public domain or licensed through its Adobe Stock service. This also means it is more constrained than products like Midjourney or Stable Diffusion; whether that trade-off pays off is something users will have to judge for themselves.
