From video synthesis to soundtrack and editing, everything is done by AI: The trailer for the first AI sci-fi movie "Genesis" makes a stunning debut

Hayo News
July 31st, 2023
Let's see what the sci-fi movie "Genesis" looks like through the eyes of AI.

Over the past two days, a sci-fi movie trailer less than a minute long, "Trailer: Genesis" ("Genesis"), has taken off on social media.

Looks pretty sci-fi, doesn't it? What is even more "sci-fi" is that everything, from image and video synthesis to the music and the editing, was done by AI.

Producer Nicolas Neubert listed the AI tools involved: Midjourney for the images, Runway for the video, Pixabay for the music, and CapCut for the editing.

Midjourney is the familiar AI image-generation powerhouse, now updated to version 5.2. Runway is an AI-based video generation tool whose Gen-2 model is currently available for free trial. CapCut is free for everyone, though you could also choose to edit in Adobe Premiere or Final Cut Pro.

Twitter @iamneubert

Reportedly, Neubert spent 7 hours on the project: 316 prompts in Midjourney, 128 upscaled images, 310 videos generated in Runway, plus one video with text. A total of 44 clips made it into the trailer.

Today, Neubert went further and published a long post detailing how "Genesis" was made, including the specific workflow and how each of the AI tools above was used. Let's walk through it step by step.

As for the idea behind the movie, he said the dystopian theme was inspired by several films he had watched, and he wrote a story building on them.

The first step of production proper is building the world and the story.

For the Genesis storyline in the trailer, Neubert wanted to build tension step by step, so he defined the following three phases:

  1. Set the scene
  2. Introduce the threat
  3. Climax with a call to action (CTA)

Specifically, Neubert worked on the first draft of the trailer’s copy, which included “Share It All, Live the Consequences, and Call Humanity to Action.”

Having defined the overall tone, he went on to generate scenes around these themes. Neubert scrolled through tons of shots of people and sci-fi imagery, spanning environments, military technology, and combat, and pieced a story together from them.

To add some depth, he also included shots of three children with glowing amulets, hinting at a deeper storyline.

The second step is to generate consistent images in Midjourney.

Pay special attention to the prompt here. Neubert refined the prompt elements that had worked reliably in his earlier posts into a template he could reuse for every shot in the trailer. The template is as follows:

___________, star wars, styled as detailed crowd scenes, earthy naturalism, teal and yellow, frostpunk, interior scenes, cinestill 50d --ar 21:9 --style raw

For each scene, he would fill in the blank with the desired subject, while the remaining tokens kept the theme, color, and lighting as consistent as possible.
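
To make the reuse concrete, here is a minimal sketch of filling such a template programmatically. The scene descriptions are made-up stand-ins, not Neubert's actual shot list, and the trailing parameters simply mirror the template above.

```python
# Minimal sketch: reuse one Midjourney prompt template across many scenes.
# The scene descriptions below are hypothetical examples, not Neubert's shot list.
TEMPLATE = (
    "{scene}, star wars, styled as detailed crowd scenes, earthy naturalism, "
    "teal and yellow, frostpunk, interior scenes, cinestill 50d --ar 21:9 --style raw"
)

scenes = [
    "a female warrior overlooking a ruined city",
    "citizens queuing at a checkpoint in the rain",
    "a cyber hacker in a dim control room",
]

# Only the blank changes; every other token stays fixed, which is what keeps
# theme, color, and lighting consistent across shots.
for scene in scenes:
    print(TEMPLATE.format(scene=scene))
```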

In addition, the Strong Variations feature makes it easy to create different scenes while keeping the established color palette: a shot of a female warrior can be turned into one of an ordinary citizen, a cyber hacker, or a fight scene without writing new prompts.

The third step is to animate the images in Runway.

Neubert found this step the easiest. In the settings, he always tried to activate Upscaled mode; however, that mode often has problems with faces, so for portrait shots he usually used standard quality.

It is worth noting that instead of combining text prompts with image prompts, he simply dragged and dropped an image and regenerated it until he was satisfied with the result.

The final step is post-production editing in CapCut.

While Midjourney and Runway were generating output, Neubert began by placing the key scenes he knew would play a big role. For the trailer, he decided the exterior shots would serve as the opening.

Next comes planning the text. The text can be positioned against the music even when there are no clips on the timeline yet. In less than an hour, he laid the copy out along the timeline and locked its position. This works well when generating images, since the fixed text gives you an extra reference point for which scenes are still missing.

From there the steps become very simple: generate clips, pull them into CapCut, place them on the timeline, and slowly piece the story together. He also color-graded two or three of the clips to make them look more like grand movie sets.

The only real skill CapCut demands is syncing the clips to the tempo. Whenever a "BWAAA" hits in the music, he tries to line up the action inside the clip, or the cut to the following clip, with that beat. This makes the whole sequence feel more immersive.
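
The "find the big hits" part of that can also be done programmatically. The sketch below is not part of Neubert's CapCut workflow; it is a rough illustration, assuming librosa and numpy are installed and that "soundtrack.mp3" is a placeholder file name, of how one could locate the loudest musical onsets and use them as candidate cut points.

```python
# Rough sketch: detect the strongest musical hits in a soundtrack so that
# clip cuts can be snapped to them. "soundtrack.mp3" is a hypothetical file.
import librosa
import numpy as np

y, sr = librosa.load("soundtrack.mp3")

# Onset strength envelope, detected onset frames, and their times in seconds.
onset_env = librosa.onset.onset_strength(y=y, sr=sr)
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)

# Keep only the loudest onsets (e.g. the big "BWAAA" accents):
# here, everything above the 80th percentile of onset strength.
strengths = onset_env[onset_frames]
threshold = np.percentile(strengths, 80)
big_hits = [round(float(t), 2) for t, s in zip(onset_times, strengths) if s >= threshold]

print("Candidate cut points (seconds):", big_hits)
```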

In addition, Neubert thought about how to get plenty of motion into the clips. He used two tricks to add movement.

First trick: Runway takes an image and, based on its model, works out which parts should be animated. He reverse-engineered this idea and tried to output images from Midjourney that already suggested motion. That means adding motion blur to a shot, or capturing heads and people in mid-movement in the still image.

Second trick: if you analyze Runway's output, you will notice that the scene often changes drastically within a 4-second clip. So in the trailer he used the full 4-second cut only twice. All the other clips run 0.5-2 seconds and are sped up by a factor of 1.5-3x. The effect is that, as a viewer, you only see a short clip and therefore perceive more motion in the scene; essentially, that part is fast-forwarded.
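
For readers who prefer the command line to CapCut, here is a rough equivalent of that trick, a sketch assuming ffmpeg is installed; the file names and timings are hypothetical, and this is an alternative approach, not how the trailer itself was edited.

```python
# Sketch of the "short clip, sped up" trick using ffmpeg's setpts filter.
import subprocess

def trim_and_speed(src: str, dst: str, start: float, duration: float, speed: float) -> None:
    """Cut `duration` seconds starting at `start`, then speed playback up by `speed`x."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-ss", str(start), "-t", str(duration), "-i", src,
            "-filter:v", f"setpts=PTS/{speed}",
            "-an",  # drop audio (Gen-2 clips are silent anyway)
            dst,
        ],
        check=True,
    )

# Take 1.5 s out of a 4-second Runway clip and play it back at 2x,
# so the shot lasts 0.75 s on screen and feels like it has more motion.
trim_and_speed("runway_clip.mp4", "trailer_shot.mp4", start=1.0, duration=1.5, speed=2.0)
```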

Put all of that together and you get the striking "Genesis" trailer shown at the beginning. The trailer has drawn rave reviews, with some calling it the best Runway-generated video they have seen so far.

In fact, ever since Runway Gen-2 became available for free trial, many netizens have let their imaginations run and combined it with Midjourney to create boldly.

Midjourney+Runway: A Magical Combination of AI Creation Tools

Here are a few more generation examples worth sharing.

Runway's grasp of the details of character movement is also fairly accurate. In a video shared by one netizen, the detail in the characters' eyes makes the animated clip more vivid; you could even say it adds a touch of "acting skill".

Source: https://twitter.com/OrctonAI/status/1682420932818661378

Once the image is set in motion, the movements of the man and the horse in the night look very natural, leaving more room to imagine the characters and what happens next.

Source: https://twitter.com/OrctonAI/status/1682420932818661378

The combination of Midjourney and Runway looks unbeatable; it can convey a real sense of story through the characters' key actions.


Twitter: @ai_insight1

There are also variations that are richer and more creative in their generated results.

Twitter @kkuldar

Twitter: @Akashi30eth

Some netizens have also chained together a series of AI tools to generate video clips, but the results seem less satisfying.

If AI alone is involved, or works are produced purely from whatever the AI generates, it is clearly not possible to get high-quality results. It is how humans apply and adjust these tools that seems to reveal their real value.

Reprinted from 机器之心 (Machine Heart); authors: 杜伟, 泽文
