[SD Advanced Tutorial]Stable Diffusion SDXL Beta Model

AI Learning Assistant No 1
August 22nd, 2023

Stability AI has released a preview of a new model called SDXL Beta (Stable Diffusion XL Beta). They didn’t tell us much about the model, but it is available for anyone who wants to test it.

What’s new about this Stable Diffusion SDXL model? What are its strengths and weaknesses? Let’s find out.

What is the SDXL model

The SDXL model is a new model currently in training. It is not a finished model yet. In fact, it may not even be called the SDXL model when it is released.

All we know is that it is a larger model with more parameters and some undisclosed improvements. It is a v2 model, not a v3 (whatever that means).

How to use the SDXL model

The SDXL model is currently available at DreamStudio, the official image generator of Stability AI. To use the SDXL model, select SDXL Beta in the model menu.

You will need to sign up to use the model. You will get some free credits after signing up.
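Besides the DreamStudio web interface, Stability AI also exposes its engines through a REST API. Below is a minimal sketch of how a text-to-image request for the SDXL Beta engine could be assembled. The endpoint path and the engine id `stable-diffusion-xl-beta-v2-2-2` are assumptions based on Stability's public API documentation at the time of writing; check the current docs before relying on them.

```python
# Sketch: calling the SDXL Beta engine through Stability AI's REST API
# instead of the DreamStudio web UI. The endpoint path and engine id
# below are assumptions; verify them against the current API docs.
import json
import urllib.request

API_HOST = "https://api.stability.ai"
ENGINE_ID = "stable-diffusion-xl-beta-v2-2-2"  # assumed engine id


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a text-to-image request for SDXL Beta."""
    payload = {
        "text_prompts": [{"text": prompt}],
        "cfg_scale": 7,  # how strongly the image should follow the prompt
        "samples": 1,    # number of images to generate
        "steps": 30,     # sampling steps
    }
    return urllib.request.Request(
        f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


req = build_request(
    "Photo of a man holding a sign that says 'Stable Diffusion'",
    "YOUR_API_KEY",
)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns base64-encoded images in the JSON response; the sketch stops at building the request so it can be inspected without an API key.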


I will highlight some improvements in the SDXL model I have seen so far.

Legible text

Perhaps the most striking capability is the ability to generate legible text. This is not possible in the v1 or v2.1 models.

The text generated by SDXL is not always accurate (as you can see in the “Stable Diffusion” sign below), but it is much better than v2.1, not to mention the v1 models.

Photo of a woman sitting in a restaurant holding a menu that says “Menu”

Photo of a man holding a sign that says “Stable Diffusion”

a young female holding a sign that says “Stable Diffusion”, highlights in hair, sitting outside restaurant, brown eyes, wearing a dress, side light

Better human anatomy

Stable Diffusion has long had problems generating correct human anatomy. Extra or missing limbs are common. You would usually use inpainting to correct them, or, more recently, copy a pose from a reference image with ControlNet’s OpenPose function.

I am pleased to see the SDXL Beta model has improvements in this area. Let’s look at an example.

The prompt is:

Photo of a woman in yoga outfit, triangle pose, beach in evening, rim lighting

Here are the SDXL Beta images.

Compare with the v1.5 images below.

It’s not perfect, but human poses are much better in SDXL!

More aesthetic images

The images generated can be quite different. See the following images with the same prompt.


v2.2.2 SDXL Beta

Photo-style portraits are very good in SDXL Beta. I would say it is better than v1.5.

photo shot of a woman


v2.2.2 SDXL Beta


v2.2.2 SDXL Beta

More accurate images

The ability to understand the prompt improves over the v1 models.

In the v1.5 model, the keyword duotone always generates black-and-white images. SDXL Beta generates duotone images with a variety of colors. This is an improvement.

duotone portrait of a woman


v2.2.2 SDXL Beta

Since SDXL Beta is a v2 model, it comes with a larger text encoder. You can expect it to understand your prompt better than the v1 models, and indeed that is what we see.

Let’s look at the images generated by the following prompt with two subjects.

big robot friend sitting next to a human, ghost in the shell style, anime wallpaper


v2.2.2 SDXL Beta

The v1.5 model consistently ignores the fact that there are two subjects, the robot and the human, in the prompt. But the SDXL Beta model understands the prompt and generates a more accurate image. (I wish the robot were bigger, but that’s a step forward.)

Likewise, photo-style images are more accurate. See the following prompt and images.

a young man, highlights in hair, brown eyes, in white shirt and blue jean on a beach with a volcano in background


v2.2.2 SDXL Beta

Artistic styles

I checked a few artistic styles. There are some subtle changes, but I can say neither that they are better nor that they are worse. They are just different.

Both v1.5 and SDXL Beta generate Edward Hopper’s style, although they are consistently different.

New York city by Edward Hopper


v2.2.2 SDXL Beta

v1.5 generates Leonid Afremov’s style accurately. The unmistakable colorful broad brushstrokes are missing in SDXL Beta, which instead produces an illustration style while, interestingly, still retaining the distinct reflections on the ground.

New York city by Leonid Afremov


v2.2.2 SDXL Beta

Both v1.5 and SDXL Beta produce something close to William-Adolphe Bouguereau’s style. SDXL Beta’s images are closer to the typical academic paintings Bouguereau produced. In general, portraits from SDXL Beta show more facial detail.

Portrait of beautiful woman by William-Adolphe Bouguereau


v2.2.2 SDXL Beta

Style Shift

Sometimes the style can abruptly change with the addition of innocent keywords. Perhaps it’s a glitch in this preview model.

For example, I started with this prompt that generates a photo style.

a young man, highlights in hair, brown eyes, in white shirt and blue jean on a beach with a volcano in background

Now I want to add a yellow scarf.

a young man, highlights in hair, brown eyes, wearing a yellow scarf, in white shirt and blue jean on a beach with a volcano in background

Suddenly, the images change to an anime style. This happens with a few keywords. It is almost as if the model has been blended with some cartoon styles and is eager to switch to them.

I hope this issue will be resolved in the release version.


Here is what I think about the SDXL Beta model:

  • Stable Diffusion finally generates legible text!
  • More aesthetic than the v2.1 model and (to a lesser extent) the v1.5 model.
  • Images follow the prompts more accurately.
  • Human anatomy is getting better.
  • It does not need negative prompts as much as v2.1 does.
  • It is particularly strong in portraits.
  • There are some peculiar glitches to be fixed before release.

Finally, a few more images from the SDXL beta model.

Reprinted from View Original