【SD advanced skills】How to adjust the image action/posture

March 13th, 2023

ControlNet: Drawing according to skeleton actions

ControlNet is a technique that lets the AI reference the pose, edges, or depth of a given image while painting. Compared with the built-in "image-to-image" feature, ControlNet gives the user much finer control over what the AI draws. Combined with ControlNet's additional 3D modeling or skeleton inputs, the common problem of poorly drawn hands can also be alleviated.

In the Extensions page, select "Install From URL" and enter the URL https://github.com/Mikubill/sd-webui-controlnet.git , then press "Install".
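The "Install From URL" button essentially clones the extension repository into the WebUI's extensions folder. As a rough sketch of that manual alternative (paths are illustrative; adjust them to your own install):

```python
# Manual install sketch: clone the ControlNet extension into the WebUI's
# extensions folder. The target path assumes a default WebUI layout.
from pathlib import Path

REPO_URL = "https://github.com/Mikubill/sd-webui-controlnet.git"

def clone_command(webui_dir: str) -> list[str]:
    """Build a git command roughly equivalent to 'Install From URL'."""
    target = Path(webui_dir) / "extensions" / "sd-webui-controlnet"
    return ["git", "clone", REPO_URL, str(target)]

if __name__ == "__main__":
    cmd = clone_command("stable-diffusion-webui")
    print(" ".join(cmd))
    # Run the printed command in a terminal, or pass cmd to subprocess.run.
```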

If you cannot access GitHub due to network issues, you can instead download the ControlNet integration package from https://www.123pan.com/s/sKd9-mzJc.html . After downloading, simply unzip it into the .\stable-diffusion-webui\extensions directory. Note that if you use this integration package, you do not need to download the models described below; it works out of the box. (If you previously installed ControlNet, manually delete the .\stable-diffusion-webui\extensions\sd-webui-controlnet folder first.)

Click on "Installed", then click on "Apply and restart UI".

You can download all the models either from the official repository (https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main) or from the network disk (https://www.123pan.com/s/sKd9-kzJc.html), then place the model files into stable-diffusion-webui\extensions\sd-webui-controlnet\models under the Stable Diffusion WebUI folder.
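Since a misplaced model file is a common cause of ControlNet silently not working, a small sanity check can help. This is a hypothetical helper, not part of the WebUI:

```python
# Hypothetical sanity-check helper: confirm that downloaded model files
# actually landed in the ControlNet models folder.
from pathlib import Path

# Default location under the Stable Diffusion WebUI folder.
MODELS_DIR = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")

def missing_models(models_dir: Path, expected: list[str]) -> list[str]:
    """Return the expected model filenames not yet present in models_dir."""
    return [name for name in expected if not (models_dir / name).exists()]

# Example: missing_models(MODELS_DIR, ["control_v11p_sd15_canny.pth"])
# returns an empty list once that model file is in place.
```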

Then go to the generation page and enter your prompt. Next, tick Enabled in the ControlNet panel to enable ControlNet, and upload the reference image there.

Then select the Preprocessor and Model to use. The two must match; it is worth experimenting to see which combination works best.

  • Canny: detects image edges

  • Scribble: detects sketch lines

  • Openpose: detects body poses

  • Depth: detects depth and normal information
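To make the preprocessor idea concrete: Canny turns the reference image into an edge map, and ControlNet then conditions the drawing on that map. The real preprocessor uses OpenCV's Canny detector; below is only a simplified gradient-magnitude sketch of the same idea:

```python
# Simplified sketch of what an edge-detecting preprocessor does:
# convert a grayscale image into a binary edge map.
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a binary edge map from a 2-D grayscale image in [0, 1]."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)          # gradient strength per pixel
    return (magnitude > threshold).astype(np.uint8)

# A 4x4 image with a sharp vertical boundary in the middle:
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
print(edge_map(img))  # edges light up along the boundary columns
```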

Taking Scribble as an example: after selecting it, click Generate in the upper right corner to draw the result, with the detected lines attached.

PoseX: Adjust your own movements

PoseX is an extension for Stable Diffusion WebUI that lets users drag a character skeleton directly and, combined with ControlNet, draw images in the corresponding pose. Install the ControlNet extension before using PoseX.

On the Extensions page of the WebUI, select "Install From URL", enter the URL address https://github.com/hnmr293/posex.git , and click "Install". Then restart the WebUI.

Open the txt2img page, click PoseX in the lower right corner, and then click "Send this image to ControlNet".

In the ControlNet panel below, tick "Enabled", set the preprocessor to "none" and the model to "openpose"; there is no need to upload an image.

Go back to the PoseX page above and adjust the pose of the character.
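Under the hood, an OpenPose-style skeleton is essentially a list of named 2-D keypoints, and dragging a joint in PoseX just moves one of those points. The structure below is illustrative only, not PoseX's actual internal format:

```python
# Illustrative sketch of a pose skeleton: named 2-D keypoints that a
# drag operation in the UI would move.
from dataclasses import dataclass

@dataclass
class Keypoint:
    name: str
    x: float  # normalized [0, 1] image coordinates
    y: float

def move_joint(pose: list[Keypoint], name: str, dx: float, dy: float) -> None:
    """Drag one joint by (dx, dy), leaving the rest of the pose alone."""
    for kp in pose:
        if kp.name == name:
            kp.x += dx
            kp.y += dy

pose = [Keypoint("right_wrist", 0.30, 0.60), Keypoint("right_elbow", 0.35, 0.45)]
move_joint(pose, "right_wrist", 0.05, -0.20)  # raise the right hand
```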

Fill in the positive and negative prompts, and the image will be drawn according to the pose set in PoseX.
