【Beginner's Guide to Stable Diffusion】How to use negative prompts?

AI Learning Assistant NO 2
July 28th, 2023

Negative prompts give you an additional way to control text-to-image generation. Many people treat them as an optional feature when using the Stable Diffusion 1.4 or 1.5 models. That changed with the release of Stable Diffusion v2: negative prompts have become indispensable.

In this post, I will walk through a few use cases for negative prompts, including modifying content and modifying style. Then I will demonstrate the importance of negative prompts in v2 models and show how to search for a universal negative prompt.

This is the second part of a series on negative prompt. Read the first part: How does negative prompt work.

Enter negative prompt

Many Stable Diffusion GUIs and web services offer negative prompts. In AUTOMATIC1111, you enter the negative prompt right under where you put in the prompt.

However, don’t be surprised if you cannot find a way to enter a negative prompt in other GUIs or services. It was an unofficial feature in the v1 models.

Use cases

I will go through a few examples of using negative prompts, so you can get some idea of what can be done and how to tweak them. I will be using the v1.5 base model in this section, but the techniques are applicable to v2 models.

Removing things

The first obvious usage is to remove anything you don’t want to see in the image. Let’s say you have generated a painting of Paris on a rainy day.

You want to generate another one, but with an empty street. What you can do is reuse the same seed value, which fixes the initial noise and hence the overall scene, and add the negative prompt “people”. You get an image with most people removed.
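To make the recipe concrete, here is a minimal sketch of how "same prompt + same seed + new negative prompt" maps onto a generation request. The actual generation call is library-specific (for example, the diffusers library's StableDiffusionPipeline accepts `prompt`, `negative_prompt`, and a seeded generator); `build_request` and the seed value 42 are hypothetical, used only to make the moving parts explicit.

```python
# Hypothetical helper: assemble the parameters for a generation call.
# Reusing the same prompt and seed pins the scene; only the negative
# prompt changes what gets removed from it.

def build_request(prompt, seed, negative_prompt=""):
    """Return the parameters you would pass to a text-to-image pipeline."""
    return {"prompt": prompt, "seed": seed, "negative_prompt": negative_prompt}

# Original rainy-Paris painting (seed 42 is an arbitrary example value).
original = build_request("painting of Paris on a rainy day", seed=42)

# Same scene, but with the negative prompt "people" to empty the street.
empty_street = build_request("painting of Paris on a rainy day", seed=42,
                             negative_prompt="people")
```

The only field that differs between the two requests is the negative prompt, which is why the two images come out so similar.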

Note that the scene is very similar, but not identical, to the original one. If you really need the original scene, you will need to use inpainting to painstakingly remove the people while keeping the scene coherent.

You may have noticed that there’s one person left in the above image. You can tell Stable Diffusion to try harder by adding emphasis to the negative prompt: (people:1.3). That tells Stable Diffusion that the keyword people is now 30% more important.

Keep in mind that while you can use keyword emphasis in AUTOMATIC1111, it is not universally supported by all services. Be sure to check with the one you are using before writing me an angry email…
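For readers who want to see the mechanics, here is a minimal sketch of how AUTOMATIC1111-style emphasis syntax can be parsed into a keyword and a weight. This is a simplification of the real grammar (which handles nesting and multiple terms); it assumes the documented behavior that `(word:1.3)` sets the weight explicitly and each plain parenthesis pair multiplies the weight by 1.1.

```python
import re

def parse_emphasis(text):
    """Return (keyword, weight) for a single emphasized term.

    "(word:1.3)" -> explicit weight 1.3
    "((word))"   -> 1.1 per parenthesis pair (1.21 here)
    "word"       -> default weight 1.0
    """
    m = re.fullmatch(r"\((.+):([\d.]+)\)", text)
    if m:  # explicit weight, e.g. (people:1.3)
        return m.group(1), float(m.group(2))
    weight = 1.0
    while text.startswith("(") and text.endswith(")"):
        text = text[1:-1]
        weight *= 1.1  # each parenthesis pair adds ~10% emphasis
    return text, round(weight, 4)
```

So (people:1.3) parses to the keyword people with weight 1.3, which the sampler then uses to push harder away from that concept.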

Modifying images

You can nudge Stable Diffusion to make subtle changes with negative prompts. You don’t want to remove anything exactly, but rather to make slight changes to the subjects.

Let’s work on this base image:

Looks like it’s windy and the hair is blowing around. Let’s use the negative prompt “windy” to keep the hair down.

Emma in the original image looked a bit… young. Using the negative prompt “underage” makes her look more adult.

What if we are OK with the wind but want the hair to cover the ears? Let’s add the negative prompt “ear” with different emphasis factors. Below are results with three increasing factors: 1.3, 1.6, and 1.9.

The ears are covered more by hair at all emphasis factors, but when the factor reaches 1.9, the composition of the image changes. Negative prompts can affect the diffusion process strongly.

Negative prompt with keyword switching

Now what if you really want to use a high emphasis like (ear:1.9)? I don’t know what your problem with ears is, but I have a trick for you. You can use keyword switching: first use a meaningless word as the negative prompt, then switch to (ear:1.9) at a later sampling step.

Let’s pick “the” as the meaningless, dud negative prompt. You can verify its uselessness by putting it in the negative prompt alone: you will get the same image as if you had entered nothing. Now use this as the negative prompt:

[the: (ear:1.9): 0.5]

Since I am using 20 sampling steps, this means using “the” as the negative prompt in steps 1–10 and (ear:1.9) in steps 11–20.

The reasoning behind this is that the diffusion process matters most in the beginning steps. Later steps only make finer adjustments to details, such as hair covering the ears.
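The schedule above can be sketched in a few lines. This assumes the switch happens after `int(when * steps)` steps when `when` is a fraction, which matches the 20-step example in the text; AUTOMATIC1111's exact rounding may differ slightly at the boundary.

```python
def active_prompt(step, total_steps, a, b, when):
    """Return the prompt in effect at a 1-indexed sampling step for [a:b:when]."""
    switch_at = int(when * total_steps)  # e.g. 0.5 * 20 -> switch after step 10
    return a if step <= switch_at else b

# The [the: (ear:1.9): 0.5] schedule over 20 sampling steps:
schedule = [active_prompt(s, 20, "the", "(ear:1.9)", 0.5) for s in range(1, 21)]
```

The first half of the schedule is the dud prompt “the”, so the composition settles undisturbed; the strong (ear:1.9) only applies during the detail-refinement steps.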

Now what we have accomplished is nothing short of amazing.

  • We can now use the much stronger emphasis (ear:1.9) without changing the composition.

  • We get an image much closer to the original one.

  • The ear is covered.

Modifying styles

Negative prompts are useful not only for modifying the content but also for modifying the style. Why use a negative prompt to change style? Sometimes adding too much to the positive prompt just confuses the diffuser. Imagine someone telling you to go to 77 places (the token limit) at the same time. It helps if they tell you which areas to avoid instead.

Sharpening

Instead of using keywords like “sharp” and “focused” in the prompt, you can use “blurry” in the negative prompt. The image does get sharper.

Photorealistic

Using the negative prompt “painting, cartoon” makes it more photo-like.

If you want to keep the original composition, you can experiment with the keyword switching technique I mentioned earlier. Using [the: (painting cartoon:1.9): 0.3] we get:

It’s much closer to the original but with added photorealism style.

To see more content about Stable Diffusion from zero, click: https://www.hayo.com/article/64c21001ef669957a0d21e63
