DreamBooth training example for Qwen Image

DreamBooth is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject.

The train_dreambooth_lora_qwen_image.py script shows how to implement the training procedure with LoRA and adapt it for Qwen Image.


Running locally with PyTorch

Installing the dependencies

Before running the scripts, make sure to install the library’s training dependencies:

Important

To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .

Then cd into the examples/dreambooth folder and run

pip install -r requirements_qwen_image.txt

And initialize an 🤗Accelerate environment with:

accelerate config

Or for a default accelerate configuration without answering questions about your environment

accelerate config default

Or if your environment doesn’t support an interactive shell (e.g., a notebook)

from accelerate.utils import write_basic_config
write_basic_config()

When running accelerate config, specifying torch compile mode to True can give dramatic speedups. Note also that we use the PEFT library as the backend for LoRA training, so make sure peft>=0.14.0 is installed in your environment.
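For example, to install or upgrade it:

pip install -U "peft>=0.14.0"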

Dog toy example

Now let’s get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let’s first download it locally:

from huggingface_hub import snapshot_download

# Download the example dataset from the Hub into ./dog
local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)

The training command below will also push the trained LoRA parameters to the Hugging Face Hub via the --push_to_hub flag.

Now, we can launch training using:

export MODEL_NAME="Qwen/Qwen-Image"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-qwenimage-lora"

accelerate launch train_dreambooth_lora_qwen_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=2e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub

To use push_to_hub, make sure you're logged in to your Hugging Face account:

hf auth login
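If your environment is non-interactive, you can instead pass a token directly (assuming a valid token is stored in $HF_TOKEN):

hf auth login --token $HF_TOKEN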

To better track our training experiments, we're using the following flags in the command above:

- report_to="wandb" ensures the training runs are tracked on Weights and Biases. To use it, be sure to install wandb with pip install wandb.
- validation_prompt and validation_epochs allow the script to do a few validation inference runs, which lets us qualitatively check whether training is going as expected.

Notes

Additionally, we welcome you to explore the script's other CLI arguments, as shown below.
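The full list of arguments, with their help strings, is available directly from the script:

python train_dreambooth_lora_qwen_image.py --help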

We also provide several options for memory optimization; for example, --use_8bit_adam (used in the command above) switches to the 8-bit AdamW optimizer from bitsandbytes.

Refer to the official documentation of QwenImagePipeline to learn more about the models available under the Qwen Image family and their preferred dtypes during inference.
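Once training is done, you can try your LoRA in inference. The snippet below is a minimal sketch: it assumes the adapter was saved to trained-qwenimage-lora (the OUTPUT_DIR above; a Hub repo id like your-username/trained-qwenimage-lora works too) and that a bf16-capable GPU is available.

import torch
from diffusers import QwenImagePipeline

# Load the base model in bf16, matching the training precision above
pipe = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
# Attach the DreamBooth LoRA (local output dir or Hub repo id)
pipe.load_lora_weights("trained-qwenimage-lora")
pipe.to("cuda")

# "sks" is the instance token used in the training prompt
image = pipe(
    "A photo of sks dog in a bucket",
    num_inference_steps=50,
).images[0]
image.save("sks_dog_in_bucket.png")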

Using quantization

You can quantize the base model with bitsandbytes to reduce memory usage. To do so, pass a JSON file path to --bnb_quantization_config_path. This file should hold the configuration to initialize BitsAndBytesConfig. Below is an example JSON file:

{
    "load_in_4bit": true,
    "bnb_4bit_quant_type": "nf4"
}
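Assuming you save the file above as bnb_config.json (a filename of your choosing), you would then add the flag to the launch command, keeping the rest of the arguments as before:

accelerate launch train_dreambooth_lora_qwen_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --bnb_quantization_config_path=bnb_config.json \
  ...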