ControlNet has proven to be a great tool for guiding StableDiffusion models with image-based hints! But what about changing only a part of the image based on such a hint?

🤗 The initial set of ControlNet models was not trained to work with the StableDiffusion inpainting backbone, but it turns out that the results can be pretty good!

In this repository, you will find a basic example notebook that shows how this can work. The key trick is to use the right value of the parameter controlnet_conditioning_scale: while a value of 1.0 often works well, it is sometimes beneficial to bring it down a bit when the controlling image does not fit the selected text prompt very well.

Demos on 🤗 HuggingFace Using ControlNetInpaint

✏️ Mask and Sketch

Check out the HuggingFace Space which allows you to scribble and describe how you want to recreate a part of an image.

Check out the HuggingFace Space that reimagines scenes with human subjects using a text prompt.

This code is currently compatible with diffusers==0.14.0. An upgrade to the latest version can be expected in the near future (currently, some breaking changes are present in 0.15.0 that should ideally be fixed on the side of the diffusers interface).

Here's an example of how this new pipeline (StableDiffusionControlNetInpaintPipeline) is used with the core backbone of "runwayml/stable-diffusion-inpainting":

```python
import torch
from diffusers import ControlNetModel

# load control net and stable diffusion v1-5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
```
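To show how the loaded ControlNet plugs into the inpainting backbone end to end, here is a minimal sketch. It assumes StableDiffusionControlNetInpaintPipeline is imported from this repository's source (the class is not part of diffusers==0.14.0 itself, and the import path shown is an assumption), that the input, mask, and canny-edge images are available as PIL images, and that a CUDA device is present; the file names and prompt are placeholders.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel

# provided by this repository, not by diffusers (assumed import path)
from src.pipeline_stable_diffusion_controlnet_inpaint import (
    StableDiffusionControlNetInpaintPipeline,
)

# load the canny ControlNet and attach it to the inpainting backbone
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# placeholder inputs: the original image, the inpainting mask
# (white = region to repaint), and a canny-edge hint image
image = Image.open("input.png")
mask_image = Image.open("mask.png")
canny_image = Image.open("canny.png")

generator = torch.manual_seed(0)
result = pipe(
    "a red sports car",  # placeholder prompt
    image=image,
    mask_image=mask_image,
    control_image=canny_image,
    num_inference_steps=20,
    # bring this below 1.0 when the hint does not fit the prompt well
    controlnet_conditioning_scale=0.7,
    generator=generator,
).images[0]
result.save("output.png")
```

Because the backbone here is the inpainting model rather than plain v1-5, the region outside the mask is preserved while the ControlNet hint steers only what is synthesized inside it.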