> [!WARNING]
> This pipeline is deprecated but it can still be used. However, we won’t test the pipeline anymore and won’t accept any changes to it. If you run into any issues, reinstall the last Diffusers version that supported this model.
*DiffEdit: Diffusion-based semantic image editing with mask guidance* is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.
The abstract from the paper is:
Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.
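As a rough illustration of the idea described in the abstract (contrast the noise predictions of a denoiser conditioned on two different prompts, then threshold the difference to obtain an edit mask), here is a minimal sketch. It is not the library implementation: `eps_model` is a hypothetical noise-prediction function, and the noising step simply uses the standard DDPM formulation.

```py
import torch

@torch.no_grad()
def estimate_edit_mask(latents, eps_model, source_emb, target_emb, t, alpha_bar_t, n_samples=10, threshold=0.5):
    """Sketch of mask estimation by contrasting text-conditioned noise predictions."""
    diffs = []
    for _ in range(n_samples):
        noise = torch.randn_like(latents)
        # Standard DDPM forward noising of the input latents at timestep t.
        noisy = (alpha_bar_t ** 0.5) * latents + ((1 - alpha_bar_t) ** 0.5) * noise
        eps_src = eps_model(noisy, t, source_emb)  # prediction under the source/reference prompt
        eps_tgt = eps_model(noisy, t, target_emb)  # prediction under the target/query prompt
        diffs.append((eps_src - eps_tgt).abs().mean(dim=1, keepdim=True))  # reduce over channels
    diff = torch.stack(diffs).mean(dim=0)                          # average over noise samples
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)  # rescale to [0, 1]
    return (diff > threshold).float()  # 1 where the predictions disagree, i.e. regions to edit
```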
The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo.
This pipeline was contributed by clarencechen. ❤️
* In order to generate an image using this pipeline, both an image mask (supplied manually or generated with [`~StableDiffusionDiffEditPipeline.generate_mask`])
and a set of partially inverted latents (generated using [`~StableDiffusionDiffEditPipeline.invert`]) must be provided as arguments when calling the pipeline to generate the final edited image.
* The function [`~StableDiffusionDiffEditPipeline.generate_mask`] exposes two prompt arguments, `source_prompt` and `target_prompt`,
that let you control the locations of the semantic edits in the final image to be generated. Let’s say,
you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect
this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to
`source_prompt` and “dog” to `target_prompt`.
* When generating partially inverted latents with [`~StableDiffusionDiffEditPipeline.invert`], assign a caption or text embedding describing the
overall image to the `prompt` argument to help guide the inverse latent sampling process. In most cases, the
source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives.
* When calling the pipeline to generate the final edited image, assign the source concept to `negative_prompt`
and the target concept to `prompt`. Taking the above example, you simply have to set the embeddings related to
the phrases including “cat” to `negative_prompt` and “dog” to `prompt`, as shown in the sketch after this list.
* If you want to reverse the edit direction, i.e. “dog -> cat”, then it’s recommended to:
    * Swap `source_prompt` and `target_prompt` in the arguments to `generate_mask`.
    * Change the input prompt in [`~StableDiffusionDiffEditPipeline.invert`] to include “dog”.
    * Swap `prompt` and `negative_prompt` in the arguments to the call that generates the final edited image.
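Putting these tips together, a minimal end-to-end sketch could look like the following. The checkpoint, input image path, and prompts are placeholders chosen to match the “cat -> dog” example above; any Stable Diffusion checkpoint compatible with this pipeline should work.

```py
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

# Placeholder checkpoint and input image; substitute your own.
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_model_cpu_offload()

raw_image = load_image("cat.png").resize((768, 768))

source_prompt = "a cat sitting on a bench"
target_prompt = "a dog sitting on a bench"

# 1. Generate a mask highlighting the regions that need to change ("cat" -> "dog").
mask_image = pipeline.generate_mask(
    image=raw_image, source_prompt=source_prompt, target_prompt=target_prompt
)

# 2. Partially invert the image into latents, guided by a caption of the overall image.
inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents

# 3. Generate the edited image: target concept as `prompt`, source concept as `negative_prompt`.
image = pipeline(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
).images[0]
```

To reverse the edit direction, swap the prompts as described in the last tip above.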
[[autodoc]] StableDiffusionDiffEditPipeline
  - all
  - generate_mask
  - invert
  - __call__

[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput