[Stable Diffusion Repair] Beginner’s guide to inpainting (step-by-step examples)

AI Learning Assistant No 1
July 28th, 2023

No matter how good your prompt and model are, it is rare to get a perfect image in one shot.

Inpainting is an indispensable way to fix small defects. In this post, I will go through a few basic examples of using inpainting to fix defects.

If you are new to AI images, you may want to read the beginner’s guide first.

Image model and GUI

We will use Stable Diffusion AI and AUTOMATIC1111 GUI. See my quick start guide for setting up in Google’s cloud server.

Basic inpainting settings

In this section, I will show you step-by-step how to use inpainting to fix small defects.

I will use an original image from the Lonely Palace prompt:

[emma watson: amber heard: 0.5], (long hair:0.5), headLeaf, wearing stola, vast roman palace, large window, medieval renaissance palace, ((large room)), 4k, arstation, intricate, elegant, highly detailed (Detailed settings can be found here.)
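The bracket syntax in this prompt is AUTOMATIC1111's prompt editing: `[emma watson: amber heard: 0.5]` uses the first sub-prompt for the first half of the sampling steps and then switches to the second. A minimal sketch of that schedule (the step counts here are illustrative, not from the original settings):

```python
# Sketch of AUTOMATIC1111's prompt-editing schedule: [A: B: frac]
# uses sub-prompt A for the first frac of sampling steps, then switches to B.
def prompt_at_step(step, total_steps, a, b, frac):
    """Return which sub-prompt is active at a given sampling step."""
    switch_step = int(frac * total_steps)  # e.g. 0.5 * 20 steps -> step 10
    return a if step < switch_step else b

# With 20 steps and [emma watson: amber heard: 0.5],
# steps 0-9 use "emma watson" and steps 10-19 use "amber heard",
# which is why the result blends the two faces.
active = [prompt_at_step(s, 20, "emma watson", "amber heard", 0.5)
          for s in range(20)]
```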

Original image

It’s a fine image, but I would like to fix the following issues:

  • The face looks unnatural.
  • The right arm is missing.

Use an inpainting model (optional)

Did you know there is a Stable Diffusion model trained specifically for inpainting? You can use it if you want the best results. But usually, it’s OK to inpaint with the same model that generated the image.

To install the v1.5 inpainting model, download the model checkpoint file and put it in the folder

stable-diffusion-webui/models/Stable-diffusion

In AUTOMATIC1111, press the refresh icon next to the checkpoint selection dropdown at the top left, then select sd-v1-5-inpainting.ckpt to enable the model.

Creating an inpaint mask

In the AUTOMATIC1111 GUI, select the img2img tab and then the Inpaint sub-tab. Upload the image to the inpainting canvas.

We will inpaint both the right arm and the face at the same time. Use the paintbrush tool to create a mask. This is the area you want Stable Diffusion to regenerate.

Create a mask using the paintbrush tool.
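Under the hood, the mask you paint is just a black-and-white image the same size as the picture: white pixels are regenerated, black pixels are kept. A toy sketch with numpy (the face and arm coordinates below are made up for illustration, not taken from the actual image):

```python
import numpy as np

# A mask is a grayscale image the same size as the picture:
# white (255) pixels are regenerated, black (0) pixels are kept.
height, width = 512, 704  # the original image is 704 x 512
mask = np.zeros((height, width), dtype=np.uint8)
mask[60:200, 300:420] = 255   # face region (hypothetical coordinates)
mask[250:460, 430:620] = 255  # right-arm region (hypothetical coordinates)

# Fraction of the image that will be regenerated.
masked_fraction = (mask == 255).mean()
```

Keeping the mask to a small fraction of the image is what makes inpainting targeted: everything outside the white area is copied from the original.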

Settings for inpainting


You can reuse the original prompt for fixing defects. This is like generating multiple images but only in a particular area.

Image size

The image size needs to be adjusted to be the same as the original image (704 x 512 in this case).

Face restoration

If you are inpainting faces, you can turn on Restore faces. You will also need to select and apply the face restoration model in the Settings tab. CodeFormer is a good one.

Be aware that this option can produce unnatural-looking faces, or results inconsistent with the style of the model.

Masked content

The next important setting is Masked content.

Select original if you want the result guided by the color and shape of the original content. Original is often used when inpainting faces because the general shape and anatomy are already OK; we just want the face to look a bit different.

In most cases, you will use original and change the denoising strength to achieve different effects.

You can use latent noise or latent nothing if you want to regenerate something completely different from the original, for example to remove a limb or hide a hand. These options initialize the masked area with something other than the original image, so the result will be completely different.

Denoising strength

Denoising strength controls how much the result can deviate from the original image. Nothing changes when you set it to 0; you get an unrelated image when you set it to 1.

0.75 is usually a good starting point. Decrease it if you want to change less.
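One way to see why the 0-to-1 range behaves this way: in common img2img implementations (for example the diffusers library), the original image is noised part-way and only a fraction of the sampling steps, proportional to the strength, are actually run on it. A small sketch of that mapping:

```python
# How denoising strength maps to actual denoising work, as in common
# img2img implementations: only strength * num_steps steps are run.
def effective_steps(strength, num_steps):
    """Number of denoising steps actually applied to the init image."""
    assert 0.0 <= strength <= 1.0
    return min(int(strength * num_steps), num_steps)

# strength 0   -> 0 steps:  the image is untouched.
# strength 1   -> all steps: the original is fully replaced.
# strength 0.75 with 20 steps -> 15 steps: a strong but guided change.
low     = effective_steps(0.2, 20)
default = effective_steps(0.75, 20)
high    = effective_steps(1.0, 20)
```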

Batch size

Make sure to generate a few images at a time so that you can choose the best ones. Set the seed to -1 so that every image is different.

Inpainting results

Below are some of the inpainted images.

Inpainted images

One more round of inpainting

I like the last one, but there’s an extra hand under the newly inpainted arm. Follow similar steps: upload this image and create a mask. Masked content must be set to latent noise to generate something completely different.

The hand under the arm is removed with the second round of inpainting:

Use inpainting to remove the extra hand under the arm.

And this is my final image.

A side-by-side comparison

Left: original. Right: inpainted 2 times.

Inpainting is an iterative process. You can apply it as many times as you want to refine an image.

See this post for another more extreme example of inpainting.

See the tutorial for removing extra limbs with inpainting.

Adding new objects

Sometimes you want to add something new to the image.

Let’s try adding a hand fan to the picture.

First, upload the image to the inpainting canvas and create a mask around the chest and right arm.

Add “holding a hand fan” to the beginning of the original prompt. The prompt for inpainting is:

(holding a hand fan: 1.2), [emma watson: amber heard: 0.5], (long hair:0.5), headLeaf, wearing stola, vast roman palace, large window, medieval renaissance palace, ((large room)), 4k, arstation, intricate, elegant, highly detailed

Adding the new object to the original prompt ensures consistency in style. You can adjust the keyword weight (1.2 above) to make the fan show up.
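The `(text:1.2)` syntax tells AUTOMATIC1111 to emphasize those tokens. Conceptually, the weight scales the contribution of the corresponding text embeddings before denoising; a toy sketch (the renormalization details of the real implementation are omitted):

```python
import numpy as np

# Toy sketch of keyword weighting: "(holding a hand fan:1.2)" scales
# the text-embedding contribution of those tokens by 1.2, so the model
# pays more attention to them. Embedding values here are placeholders.
def apply_weight(token_embeddings, weight):
    """Scale the embeddings of an emphasized prompt fragment."""
    return token_embeddings * weight

emb = np.ones((4, 8))                 # stand-in embeddings for 4 tokens
emphasized = apply_weight(emb, 1.2)   # each value scaled by 1.2
```

Raising the weight (say from 1.2 to 1.5) pushes the model harder toward including the object; too high a weight tends to distort the rest of the image.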

Set masked content as latent noise.

Adjust denoising strength and CFG scale to fine-tune the inpainted images.

After some experimentation, our mission is accomplished:

Adding a hand fan with inpainting.

Explanation of inpainting parameters

Denoising strength

Denoising strength controls how much respect the final image should pay to the original content. Setting it to 0 changes nothing. Setting it to 1 gives you an unrelated image.

Set it to a low value if you want a small change and a high value if you want a big change.

Changing denoising strength.

CFG scale

Similar to usage in text-to-image, the Classifier Free Guidance scale is a parameter to control how much the model should respect your prompt.

  • 1 – Mostly ignore your prompt.
  • 3 – Be more creative.
  • 7 – A good balance between following the prompt and freedom.
  • 15 – Adhere more to the prompt.
  • 30 – Strictly follow the prompt.
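The mechanism behind these values is classifier-free guidance: the model predicts the noise twice, once with the prompt and once without, and the CFG scale extrapolates between the two predictions. A minimal numeric sketch (the two-component vectors stand in for the model's noise predictions):

```python
import numpy as np

# Classifier-free guidance: combine the unconditional and the
# prompt-conditioned noise predictions. Larger scales extrapolate
# further in the direction the prompt pulls.
def cfg(uncond, cond, scale):
    return uncond + scale * (cond - uncond)

uncond = np.array([0.0, 0.0])   # stand-in prediction without the prompt
cond   = np.array([1.0, -1.0])  # stand-in prediction with the prompt

at_0 = cfg(uncond, cond, 0.0)   # scale 0: the prompt is ignored entirely
at_1 = cfg(uncond, cond, 1.0)   # scale 1: the conditional prediction as-is
at_7 = cfg(uncond, cond, 7.0)   # scale 7: pushed 7x beyond unconditional
```

This is why very high scales "strictly follow the prompt": the prompt direction is amplified far past what the model would predict on its own, which can also cause artifacts.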

Masked content

Masked content controls how the masked area is initialized.

  • Fill: Initialize with a highly blurred version of the original image.
  • Original: Unmodified.
  • Latent noise: The masked area is initialized with fill, and random noise is added in latent space.
  • Latent nothing: Like latent noise, except no noise is added in latent space.
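The four options can be sketched as different ways of filling the masked region of the latent image before sampling starts. This is a deliberately crude toy model (a box blur stands in for "fill", and the arrays stand in for encoded latents), just to show how the initializations differ:

```python
import numpy as np

# Toy sketch of the four "masked content" initializations.
rng = np.random.default_rng(0)
latents = rng.normal(size=(4, 8, 8))   # stand-in for the encoded image
mask = np.zeros((1, 8, 8))
mask[:, 2:6, 2:6] = 1.0                # 1 where we inpaint, 0 elsewhere

def blur(x):
    # crude stand-in for "highly blurred original": a constant average
    return np.full_like(x, x.mean())

original       = latents                                          # keep as-is
fill           = np.where(mask > 0, blur(latents), latents)       # blurred
latent_noise   = np.where(mask > 0, rng.normal(size=latents.shape),
                          latents)                                # fresh noise
latent_nothing = np.where(mask > 0, 0.0, latents)                 # zeros
```

Starting from blurred content or zeros keeps the overall tone of the area; starting from fresh noise gives the sampler the most freedom, which is why latent noise is the choice for generating something completely different.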

Below is the initial masked content before any sampling steps. This gives you some idea of what each option does.

Masked content.

Tips for inpainting

Successful inpainting requires patience and skill. Here are some takeaways for using inpainting:

  • Inpaint one small area at a time.
  • Keeping masked content at Original and adjusting denoising strength works 90% of the time.
  • Play with masked content to see which one works best.
  • If nothing works well within AUTOMATIC1111’s settings, use photo editing software like Photoshop or GIMP to paint the area of interest with the rough shape and color you want. Upload that image and inpaint with masked content set to original.

To see more Stable Diffusion content from the very beginning, visit: https://www.hayo.com/article/64c21001ef669957a0d21e63
