Midjourney + Gen-2: "Barbenheimer" in 7 steps, drawing praise from Karpathy | With hands-on test notes
Seven Steps to "Barbenheimer": Is Filmmaking 2.0 Really Here?
After "Barbenheimer" went viral two days ago, netizens keep replicating the trick of making movies with Midjourney + Gen-2!
One netizen posted a tutorial showing that "Barbenheimer" takes only seven steps, and Karpathy praised it as "filmmaking 2.0".
A 20-second animated short with a complete plot and six shots, finished in seven steps; even Cao Zhi, famous for composing a poem in seven steps, would call that expert work!
Seven steps to "Barbenheimer", with impressive results
Here is a step-by-step demonstration:
1. ChatGPT writes the storyboard script for you, and the subtitles as well.
2. Following the storyboard, use Midjourney to generate the opening frame for each shot.
This may be the only step of the seven that takes some effort: you have to write the prompt for each image yourself.
But if you click on an image to enlarge it, you can see the prompts are not very long. Anyone with basic English can manage it after a little trial and error.
Here are the opening images for several other scenes in the short, all generated with Midjourney.
3. To keep the shots tonally consistent, use photo-editing software to adjust each image's color. For example, the short aims for a fairly retro palette, and Midjourney's raw output may not match it; a pass through any retouching tool brings all the scenes into a consistent style.
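The idea of the color-grading step is simply to apply one fixed tone transform to every frame so the shots match. As a minimal pure-Python sketch (a real workflow would use an editor or an imaging library such as PIL; the standard sepia matrix below just stands in for "a retro tone curve"):

```python
# Hypothetical helper: nudge every Midjourney still toward one shared
# retro (sepia) tone so the six shots look like the same film.

def sepia(pixel):
    """Map one (r, g, b) pixel through the classic sepia matrix."""
    r, g, b = pixel
    return (
        min(255, int(0.393 * r + 0.769 * g + 0.189 * b)),
        min(255, int(0.349 * r + 0.686 * g + 0.168 * b)),
        min(255, int(0.272 * r + 0.534 * g + 0.131 * b)),
    )

def grade_image(pixels):
    """Apply the same tone curve to every pixel of a frame."""
    return [sepia(p) for p in pixels]
```

The key point is not the particular matrix but that the identical transform runs over every shot, which is what makes the palette consistent.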
4. Use Gen-2 to animate each photo, turning the six stills into six shots.
5. Use ElevenLabs to generate the voiceover for the subtitles.
6. Use Final Cut Pro to combine the animation, sound, and effects; at this point the short is basically done.
7. Finally, add subtitles in CapCut, and you're finished!
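The assembly stage (joining the six Gen-2 clips and muxing in the ElevenLabs voiceover) can also be scripted with ffmpeg instead of Final Cut Pro. A minimal sketch that only builds the commands; the file names (`shot1.mp4` … `shot6.mp4`, `voiceover.mp3`) are hypothetical placeholders:

```python
# Sketch only: scripts the assembly the tutorial does in Final Cut Pro.
# All file names here are made-up placeholders.

def concat_listfile(clips):
    """Text of the list file read by ffmpeg's concat demuxer."""
    return "\n".join(f"file '{c}'" for c in clips)

def concat_command(listfile="shots.txt", output="film_silent.mp4"):
    """ffmpeg command joining the shots in storyboard order."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", listfile, "-c", "copy", output]

def mux_command(video="film_silent.mp4", audio="voiceover.mp3",
                output="film.mp4"):
    """ffmpeg command muxing the voiceover track onto the joined video."""
    return ["ffmpeg", "-i", video, "-i", audio,
            "-c:v", "copy", "-c:a", "aac", "-shortest", output]

clips = [f"shot{i}.mp4" for i in range(1, 7)]
```

In practice you would write `concat_listfile(clips)` to `shots.txt`, then run the two commands with `subprocess.run`; subtitles can still be added in CapCut afterwards.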
Lessons from hands-on testing
Although it looks simple, a few hands-on tests showed that without a clear plan, it is still not easy to make a film you are satisfied with.
The biggest obstacle is that Gen-2 can only animate a given picture at random, and the distortion is worst on faces.
I originally wanted to make a clip of Elon Musk kneading a Starship with his hands, but the facial distortion was so severe that only the first frame of each clip looked like Musk; one second in, it was hard to say who he had become.
So, for now, using Gen-2 to make a celebrity video is all but impossible.
Moreover, these facial distortions and character movements are completely random and entirely beyond the user's control.
One netizen made a video by screenshotting the last frame of each 4-second clip and feeding it back to Gen-2 to generate the next segment.
After about 40 seconds, the characters had drifted from the American-comic-style figures of the opening into something closer to warped sculptures.
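That chaining trick can be half-automated. Only the frame extraction is scripted in this sketch (a real ffmpeg invocation using `-sseof` to seek from the end); feeding the frame back into Gen-2 was a manual upload at the time, and the `segment*/seed*` file names are hypothetical:

```python
# Sketch of the netizen's chaining trick: grab the last frame of each
# 4-second Gen-2 clip and reuse it as the seed image for the next one.

def extract_last_frame_cmd(clip, out_png):
    """ffmpeg command saving (roughly) the final frame of a clip as a PNG."""
    return ["ffmpeg", "-sseof", "-0.1", "-i", clip,
            "-frames:v", "1", out_png]

# Last frame of segment k becomes the seed for segment k+1.
commands = [extract_last_frame_cmd(f"segment{k}.mp4", f"seed{k + 1}.png")
            for k in range(1, 4)]
```

Since each generation adds its own random drift, errors accumulate across the chain, which is exactly why the characters had mutated beyond recognition by the 40-second mark.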
Even when a picture contains obvious motion cues, Gen-2 cannot yet understand them, so the generated result may not match your expectations.
There is no good way to control a character's movements; to get an animation you are satisfied with, you may have to retry the same picture for a long time.
So, judging from what Gen-2 can currently achieve, it is unlikely to handle more complex scripts at will.
When writing scripts, then, shots like facial close-ups should be avoided as much as possible.
But if Gen-2 can one day truly combine prompts with pictures, so that an image moves according to the prompt's description, the results will take a huge leap forward.
Other Midjourney + Gen-2 showcases
One skilled netizen used Gen-2 to recreate several scenes from "Oppenheimer" that look real enough to pass at a glance.
He also compared similar scenes against ones generated by Pika Labs.
The picture below shows the Pika Labs result.
Beyond the "Barbenheimer" at the start, many netizens have used the golden pair of Gen-2 and Midjourney to launch side businesses in animation.
Let's enjoy a few sets of well-made animation demos.
Notice that the faces in these demos barely deform, quite unlike our own tests; most likely the authors tried many times and kept only the takes with the least facial distortion.
One netizen generated a trailer for a famous Marvel character; it looks very realistic, and with some added effects and lighting it genuinely feels like official Marvel work.
This horror-movie trailer, generated by another netizen, makes good use of Gen-2's character distortion to heighten the horror atmosphere.
This highly artistic video was also made by a netizen with MJ + Gen-2, with post-production effects added by a rookie director.
This animation was generated from a still of an oil painting; despite some distortion, the effect is really good.