Runway’s new “Motion Brush” feature once again amazes the AI community: just paint, and the picture starts to move.

Hayo News
November 13th, 2023
Netizen: “I can’t even imagine how advanced video technology will be in a year’s time.”

A fifty-second preview video has once again set the AI community abuzz.

Yesterday, Runway announced that it will soon launch “Motion Brush” in its video generation tool Gen-2: a new way to control the motion of generated content.

This time you don’t even need to type a prompt; your hands alone are enough.

Select any picture, and wherever you apply the brush, that part of the image immediately starts to move.

Whether it’s flowing water, clouds, flames, smoke, or human figures, their dynamics are reproduced convincingly. Is this the legendary touch of Midas that turns stone into gold?
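
For the technically curious, here is a minimal sketch of the underlying idea: the painted strokes act as a mask that tells the model which pixels are allowed to move. Everything below is an illustrative assumption (the overlay-to-mask conversion and the api.example.com endpoint are hypothetical); Motion Brush itself lives inside Runway’s web UI, and Runway has not published an API for it.

```python
# Conceptual sketch of a "motion brush": the painted region becomes a
# binary mask telling a hypothetical animation service which pixels to
# move. This is NOT Runway's actual API, only an illustration.
import base64

import numpy as np
import requests
from PIL import Image


def brush_overlay_to_mask(overlay_path: str, mask_path: str) -> None:
    """Turn a painted overlay (transparent PNG with brush strokes) into a
    black-and-white mask: white = animate, black = keep still."""
    overlay = Image.open(overlay_path).convert("RGBA")
    alpha = np.array(overlay)[:, :, 3]  # brush strokes have alpha > 0
    mask = np.where(alpha > 0, 255, 0).astype(np.uint8)
    Image.fromarray(mask, mode="L").save(mask_path)


def animate_region(image_path: str, mask_path: str) -> dict:
    """Send the still image plus the motion mask to a hypothetical
    image-to-video endpoint."""
    def b64(path: str) -> str:
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    return requests.post(
        "https://api.example.com/v1/animate",  # hypothetical endpoint
        json={"image": b64(image_path), "motion_mask": b64(mask_path)},
        timeout=60,
    ).json()


brush_overlay_to_mask("strokes.png", "mask.png")
print(animate_region("photo.png", "mask.png"))
```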

After watching it, netizens said: “I can’t even imagine how advanced video technology will be in a year’s time…”

After all, in early 2023, generating video from text was still quite difficult.

Runway launched Gen-1 in February this year with a rich feature set, including stylization, storyboarding, masking, rendering, and customization. At its core, though, it was a tool focused on “editing” existing videos.

But the arrival of Gen-2 in March changed everything: it added the ability to generate video from text and images. Users simply enter text, an image, or text plus an image, and Gen-2 generates a matching video in a fraction of the time a traditional production would take.

It was the first publicly available text-to-video model on the market. For example, feed it the plain text “The afternoon sun shines through the window of a New York loft”, and Gen-2 conjures the video out of thin air.
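
As a thought experiment, driving such a model programmatically might look something like the sketch below. At the time of writing, Gen-2 is only accessible through Runway’s web interface, so the endpoint, field names, and job-polling flow here are hypothetical stand-ins for a generic text-to-video service, not Runway’s actual API.

```python
# Hypothetical sketch of a text-to-video request. Generation is slow, so a
# realistic client would submit a job and poll for the result; every URL
# and field name below is an illustrative assumption.
import time

import requests

API_URL = "https://api.example.com/v1/text-to-video"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # hypothetical credential


def generate_video(prompt: str, seconds: int = 4) -> str:
    """Submit a text prompt and poll until the rendered clip is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    job = requests.post(
        API_URL,
        headers=headers,
        json={"prompt": prompt, "duration_seconds": seconds},
        timeout=30,
    ).json()

    # Poll the asynchronous job until it succeeds or fails.
    while True:
        status = requests.get(
            f"{API_URL}/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)


print(generate_video("The afternoon sun shines through the window of a New York loft"))
```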

Now, with just a few prompts and gestures, we can generate decent videos and edit them further. Gone are the days of complex video editing software and lengthy production processes.

Combine the text-to-image powerhouse Midjourney with the text-to-video powerhouse Gen-2, and users can produce blockbuster-style footage without even lifting a pen.

Of course, Gen-2 also has competitors, most notably Pika Labs, especially since the latter is free.

The clip above was generated by Pika Labs.

Some users are already looking forward to the rivalry: “In 2024, the tug-of-war between Pika Labs and Runway is definitely going to be interesting.”

Rumor has it that OpenAI is also working on video generation technology. One netizen said: “This makes me wonder how good OpenAI’s any-to-any model will be at generating video, because this company is usually ahead of everyone else.”

Will this disrupt the video and film production industry in the future?

Reprinted from 机器之心 (Jiqizhixin); author: 蛋酱.
