AI startup Runway launches text-to-video model with impressive results

Hayo News
March 21st, 2023

An artificial intelligence startup called Runway has announced Gen-2, a new text-to-video AI model that lets users type in a description and have the system automatically generate a corresponding video in any style. The technology isn't quite there yet, but Runway's new model is already showing some genuinely impressive results.

Runway offers a web-based video editor focused on artificial intelligence tools such as background removal and pose detection. The company helped develop Stable Diffusion, the open-source text-to-image model, and released Gen-1, its first AI video editing model, in February.

Gen-1 essentially transforms existing video footage: users can feed in a rough 3D animation or shaky cell phone clip and apply an AI-generated overlay. In the example below, footage of cardboard packaging is combined with an image of an industrial plant to produce a clip that could be used in a storyboard or to pitch a more polished production.

Gen-2, by contrast, focuses on generating video from scratch, though there are plenty of caveats. First, the demo clips Runway has shared are short, unstable, and not very realistic; second, access is limited, and users have to register and join a waitlist through Runway's Discord to use the Gen-2 model. According to company spokesperson Kelsey Rondenet, "We will be making broad access available in the coming weeks." In other words, for now we have only a demo reel and a handful of clips (most of which were already shared to promote Gen-1) by which to judge Gen-2, but what there is looks like it works really well.

Text-to-video technology is exciting, opening up new creative opportunities, but it also brings new threats, such as disinformation.

Reprinted from 远洋
