The developer of Stable Diffusion launched an animation version SDK, opening the era of AI animation generation


Hayo News
May 12th, 2023

Stable Diffusion ushered people into the era of "AI painting". Today, SD developer Stability AI released the Stable Animation SDK, a tool designed for artists and developers to use state-of-the-art Stable Diffusion models to create stunning animations.

That's right, another "AI creation" revolution is here, this time moving from static images to dynamic video: AI animation is no longer out of reach.

According to the official website, Stable Animation (SA for short) users can currently create animations through the SDK in several ways: from text prompts alone (without any images), from a source image, or from a source video.

Through Stability AI's animation endpoint, artists can use all Stable Diffusion models, including Stable Diffusion 2.0 and Stable Diffusion XL, to generate animations.

According to the official introduction, the Stable Animation SDK currently provides three ways to create animations:

  • Text to animation: the user enters a text prompt (just as in Stable Diffusion) and adjusts various parameters to generate an animation.

  • Text input + initial image input: the user provides an initial image as the starting point of the animation; the text prompt and image together produce the final output animation.

  • Input video + text input: the user provides an initial video as the basis of the animation; by tweaking various parameters, they obtain a final output animation guided by the text prompt.
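Conceptually, the three modes differ only in which optional inputs accompany the text prompt. The sketch below illustrates that idea with a hypothetical request builder; the function and field names are illustrative assumptions, not the official SDK's API.

```python
# Hypothetical sketch: the three creation modes as one request shape.
# (Names like "init_image"/"init_video" are assumptions for illustration.)

def build_animation_request(prompt, init_image=None, init_video=None):
    """Assemble a request dict for a hypothetical animation endpoint."""
    if init_image is not None and init_video is not None:
        raise ValueError("provide either an initial image or a video, not both")
    request = {"prompt": prompt}            # mode 1: text prompt only
    if init_image is not None:
        request["init_image"] = init_image  # mode 2: text + initial image
    if init_video is not None:
        request["init_video"] = init_video  # mode 3: text + input video
    return request

# Mode 1: text prompt alone
print(build_animation_request("a watercolor fox running through snow"))
# Mode 2: text prompt + initial image
print(build_animation_request("a watercolor fox", init_image="fox.png"))
```

The point of the sketch is simply that image and video inputs layer on top of the same prompt-driven workflow, rather than being separate products.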

Using the SA SDK

Stable Animation is currently available through the SDK, but for the time being it cannot be deployed and run purely locally. Instead, animations are rendered on Stability AI's infrastructure, and usage is billed.

Judging from the official documentation, plenty of parameters can already be customized. Beyond the familiar SD prompts, common options such as the model, style presets, and video size are configurable, giving artists and developers a genuinely usable level of control.

It can be said that as long as you are willing to pay, you can give free rein to your creativity.

It is worth mentioning that SD 1.5 appears among the selectable models, the version many people still prefer today and the base on which many LoRAs and checkpoints have been built, which makes SA's prospects for future expansion brighter.

For video generation, SA exposes parameters across multiple dimensions, such as lighting, depth of field, 3D rendering, frame rate, and masking, which meets the needs of professional use to a certain extent.
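A minimal sketch of how such render settings might be grouped into a single configuration object follows; the field names and defaults are illustrative assumptions, not the SDK's actual schema.

```python
# Hypothetical settings object grouping the parameter dimensions the
# article mentions (model, style preset, size, frame rate, depth, masking).
from dataclasses import dataclass, asdict

@dataclass
class AnimationSettings:
    model: str = "stable-diffusion-v1-5"  # e.g. SD 1.5, 2.0, or XL
    style_preset: str = "anime"           # style preset (assumed name)
    width: int = 512                      # video frame size
    height: int = 512
    fps: int = 24                         # frame rate
    depth_weight: float = 0.3             # depth / 3D influence (assumed)
    use_mask: bool = False                # masking support (assumed)

# Override only what differs from the defaults, serialize the rest.
settings = AnimationSettings(fps=12, style_preset="cinematic")
print(asdict(settings)["fps"])  # → 12
```

Keeping every knob in one dataclass makes it easy to log, diff, and reproduce a render configuration, which matters once animations are billed per job.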

I believe that with the release of the Stable Animation SDK, developers will come up with more interesting uses for this new "video SD" technology, and that is worth looking forward to!
