【SD Tutorials】How to Use Text-to-image
Contents
Text-to-image tab
Basic usage
Image generation parameters
Seed
Extra seed options
Restore faces
Tiling
Hires. fix.
Buttons under the Generate button
Image file actions
Text-to-image tab
The txt2img tab turns text prompts into images.

Basic usage
If you are using Stable Diffusion for the first time, these are the settings that you might want to consider modifying.

-Stable Diffusion Checkpoint: Select the base model you want.
-Prompt: Describe what you want to see in the images.
-Width and height: The size of the output image. To use a v1 model, ensure that the output image size has at least one side set to 512 pixels.
-Batch size: Number of images to be generated each time.
Once you have completed the necessary steps, click the "Generate" button and wait a short while to obtain your images. By default, you will also get an extra image of composite thumbnails. You can save an image to your local storage: first, select the image using the thumbnails below the main image canvas, then right-click it to bring up the context menu. You should see options to save the image or copy it to the clipboard. That's all you need to know for the basics!
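If you prefer to script these same settings rather than use the WebUI, here is a minimal sketch using the Hugging Face diffusers library (an assumption on my part; the tutorial itself only covers the WebUI). The model ID, prompt, and output filenames are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a v1 base model (the "checkpoint"); the model ID here is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompt, output size, and batch size roughly mirror the WebUI settings above.
result = pipe(
    "a cozy cabin in a snowy forest, warm light in the windows",
    width=512,                 # at least one side should be 512 for v1 models
    height=512,
    num_images_per_prompt=4,   # "Batch size" in the WebUI
)

for i, image in enumerate(result.images):
    image.save(f"cabin_{i}.png")
```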
Image generation parameters

Stable Diffusion checkpoint is a dropdown menu for selecting models. You need to put model files in the folder stable-diffusion-webui > models > Stable-diffusion. See more about installing models.
The refresh button 🔄 next to the dropdown menu is for refreshing the list of models. It is used when you have just put a new model in the model folder and wish to update the list.
-Prompt text box: Put what you want to see in the images. A prompt-building helper tool: https://promptboost.streamlit.app/
-Negative Prompt text box: Put what you don’t want to see. You should use a negative prompt when using v2 models. You can use a universal negative prompt.
-Sampling method: The algorithm for the denoising process. I use DPM++ 2M Karras because it balances speed and quality well. You may want to avoid the ancestral samplers (the ones with an "a" in their name) because their images keep changing even at large sampling step counts, which makes tweaking an image difficult.
-Sampling steps: Number of sampling steps for the denoising process. More steps generally give better quality but take longer. 25 steps work for most cases.
-Width and height: The size of the output image. You should set at least one side to 512 pixels for v1 models. Set at least one side to 768 when using the v2-768px model.
-Batch count: Number of times you run the image generation pipeline.
-Batch size: Number of images to generate each time you run the pipeline.
-CFG scale: Classifier-Free Guidance scale, a parameter that controls how much the model should respect your prompt.
1 – Mostly ignore your prompt.
3 – Be more creative.
7 – A good balance between following the prompt and freedom.
15 – Adhere more to the prompt.
30 – Strictly follow the prompt.
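As a rough illustration of how the sampler, sampling steps, negative prompt, and CFG scale fit together, here is a hedged sketch using the diffusers library (again an assumption; the WebUI exposes the same knobs under the names above). The scheduler swap shown is a common way to select DPM++ 2M Karras in diffusers; the prompt and model ID are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Sampling method: DPM++ 2M Karras corresponds to the multistep DPM-Solver++
# scheduler with Karras sigmas enabled.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="portrait photo of an astronaut, studio lighting",
    negative_prompt="blurry, low quality, deformed",  # what you don't want to see
    num_inference_steps=25,   # "Sampling steps"
    guidance_scale=7.0,       # "CFG scale"
    width=512,
    height=512,
).images[0]
image.save("astronaut.png")
```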
Seed
Seed: The seed value used to generate the initial random tensor in the latent space. Practically, it controls the content of the image. Each image generated has its own seed value. Stable Diffusion will use a random seed value if it is set to -1.

Use the recycle button ♻️ to copy the seed value.
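To make the "same seed, same image" behavior concrete, here is a small sketch, again assuming the diffusers library rather than the WebUI; the seed value 1234 and the prompt are arbitrary placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at dusk"

# Fixing the seed fixes the initial random latent, so the same prompt and
# settings reproduce the same image.
generator = torch.Generator(device="cuda").manual_seed(1234)
image_a = pipe(prompt, generator=generator).images[0]

# Re-seeding with the same value gives the same starting noise again.
generator = torch.Generator(device="cuda").manual_seed(1234)
image_b = pipe(prompt, generator=generator).images[0]
# image_a and image_b should be (near-)identical.
```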

Extra seed options
Checking the Extra option will reveal the Extra Seed menu.

-Variation seed: An additional seed value you want to use.
-Variation strength: Degree of interpolation between the seed and the variation seed. Setting it to 0 uses the seed value. Setting it to 1 uses the variation seed value.
Here's an example. Let's say you have generated 2 images from the same prompt and settings. They have their own seed values, 1 and 3.

If you intend to create a blend of the two images, set the seed to 1 and the variation seed to 3, then adjust the variation strength within the range of 0 to 1. This lets you generate a gradual transition of image content between the two seeds. For instance, as the variation strength increases from 0 to 1, the girl's pose and background change gradually.
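Under the hood, the blending happens between the initial noise tensors of the two seeds, not between the finished images. The sketch below illustrates that idea with diffusers by spherically interpolating two seeded noise tensors and feeding the result to the pipeline; this is my own approximation of the technique, not the WebUI's exact code, and the prompt and filenames are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def slerp(t, a, b):
    """Spherical interpolation between two noise tensors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    omega = torch.acos(
        torch.clamp(torch.dot(a_flat / a_flat.norm(), b_flat / b_flat.norm()), -1, 1)
    )
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

shape = (1, pipe.unet.config.in_channels, 64, 64)  # latent shape for a 512x512 image

# Initial noise for the main seed (1) and the variation seed (3).
noise_main = torch.randn(shape, generator=torch.Generator("cuda").manual_seed(1),
                         device="cuda", dtype=torch.float16)
noise_var = torch.randn(shape, generator=torch.Generator("cuda").manual_seed(3),
                        device="cuda", dtype=torch.float16)

prompt = "a girl standing in a field of flowers"
for strength in (0.0, 0.25, 0.5, 0.75, 1.0):   # "Variation strength"
    latents = slerp(strength, noise_main, noise_var)
    image = pipe(prompt, latents=latents).images[0]
    image.save(f"variation_{strength:.2f}.png")
```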
Restore faces
Restore faces applies an additional model trained to repair defects on faces. The face restoration models are trained on photographs of real people, so it works best on realistic images and can distort anime-style faces. Tip: Restore faces requires you to select a face restoration model in Settings; CodeFormer is a good choice.

Tiling
Use the Tiling option to produce a periodic image that can be tiled seamlessly, for example as a wallpaper or repeating texture.
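Outside the WebUI, one common way to approximate this effect is to switch the model's convolution layers to circular padding so the image wraps around at its borders. The sketch below shows that trick with diffusers; it is an assumption about one way to get tileable output, not the WebUI's implementation.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Make every convolution wrap around at the image borders so the output
# repeats seamlessly when tiled.
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

image = pipe("seamless cobblestone texture, top-down view",
             width=512, height=512).images[0]
image.save("tile.png")
```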
Hires. fix.
The Hires. fix option applies an upscaler to enlarge your image. Because the native resolution of Stable Diffusion v1 is 512 pixels, you first generate a small image with 512 pixels on at least one side, then scale it up to a larger one.

-Hires steps: The number of sampling steps after upscaling the latent image.
-Denoising strength: Only applicable to latent upscalers. The denoising strength of a latent upscaler must be higher than 0.5; otherwise, you will get blurry images.

You can avoid the trouble of setting the correct denoising strength by using an AI upscaler such as BSRGAN instead of a latent upscaler; it is a solid default choice.
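To show what the two-pass idea looks like in code, here is a hedged sketch using diffusers: a 512-pixel text-to-image pass followed by an img2img pass at a higher resolution, with the strength parameter playing the role of the denoising strength. This is an approximation of Hires. fix, not the WebUI's exact pipeline (which can also upscale in latent space or with models such as BSRGAN); the plain resize simply stands in for an upscaler, and the prompt and filenames are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a detailed fantasy castle on a cliff at sunset"

# First pass: generate at the model's native 512-pixel resolution.
small = base(prompt, width=512, height=512, num_inference_steps=25).images[0]

# Second pass: enlarge the image, then let img2img re-add detail.
img2img = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
upscaled = small.resize((1024, 1024))  # simple resize standing in for an upscaler

final = img2img(
    prompt=prompt,
    image=upscaled,
    strength=0.5,             # "Denoising strength"
    num_inference_steps=25,   # roughly "Hires steps"
).images[0]
final.save("castle_hires.png")
```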

Buttons under the Generate button

↙️ Read the last generation parameters.
🗑 Delete the current prompt and the negative prompt.
🎴 Model icon: Show extra networks. This button is for inserting hypernetwork, embedding, and LoRA phrases into the prompt.
📋 Load style: You can select multiple styles from the style dropdown menu below. Use this button to insert them into the prompt and the negative prompt.
💾 Save style: Save the prompt and the negative prompt as a style. You will need to name the style.
Image file actions

📂 Open the image output folder.
Save: Save an image.
Zip: Zip up the images for download.
Send to img2img: Send the selected image to the img2img tab.
Send to inpaint: Send the selected image to the inpainting tab in the img2img tab.
Send to extras: Send the selected image to the Extras tab.
For how to use img2img, click here; for the Stable Diffusion tutorial guide, click here.