Comments (5)
I don't quite understand your whole problem because you're mixing a lot of things in the same issue.
Compel and textual inversion are unrelated to the mask, so they don't have anything to do with how the unmasked area is affected.
Also, if you're changing the whole background, why are you using a ControlNet? Which ControlNet are you using, and why?
I think it all comes down to how good or bad your mask is. For example, if I use this mask I get this result:
| mask | result |
| --- | --- |
| *(image)* | *(image)* |
Doesn't look too bad, and it was just a quick test without ControlNet or anything else, just inpainting with a simple mask. The clothes and the colors don't change at all because of the mask.
Also, just a technicality, but in my opinion changing the whole background is outpainting, not inpainting, and to get good results you'll need to do a little more than with plain inpainting.
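Since so much depends on the mask, it can also help to feather the mask edge before inpainting so the transition blends instead of leaving a hard seam. A minimal sketch using Pillow; the file names and blur radius are illustrative assumptions, not from this thread:

```python
from PIL import Image, ImageFilter

# Load a binary mask (white = region to repaint) and soften its edge.
# "mask.png" and the 8 px radius are placeholder choices.
mask = Image.open("mask.png").convert("L")
soft_mask = mask.filter(ImageFilter.GaussianBlur(radius=8))
soft_mask.save("mask_soft.png")
```

Newer diffusers versions ship a similar helper on the inpaint pipelines (`pipe.mask_processor.blur(mask, blur_factor=...)`), if it's available in your version.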
---
This is a text-to-image ControlNet pipeline, and the ControlNet is an inpainting ControlNet. Before this I used an inpainting model without the inpainting ControlNet, but I still got bad results, and fine-tuning an inpainting model is hard with my tool (OneTrainer). How is it possible to get a result like yours? I will try your mask.

Do you use `output_image = pipe.image_processor.apply_overlay(mask_image1, input_image, output_image)`?

I tried your mask and still get the same problem. This is my code:
```python
import numpy as np
import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionControlNetPipeline

# process mask: build the conditioning image for the inpainting ControlNet.
# Pixels where the mask is white (> 0.5) are set to -1.0 so the ControlNet
# treats them as the region to repaint.
def make_inpaint_condition(image, image_mask):
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
    # compare height and width (the original [0:1] only checked height)
    assert image.shape[0:2] == image_mask.shape[0:2], "image and image_mask must have the same image size"
    image[image_mask > 0.5] = -1.0  # set as masked pixel
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return image

# pipe: `controlnet` takes ControlNetModel instances; the `image` argument
# of the call below takes the matching conditioning images, in the same order.
pipe = StableDiffusionControlNetPipeline.from_single_file(
    '/home/zznet/workspace/stable-diffusion-webui/models/Stable-diffusion/realisticVisionV60B1_v51VAE.safetensors',
    torch_dtype=torch.bfloat16,
    controlnet=[
        depth_controlnet,    # ControlNetModel for depth
        inpaint_controlnet,  # ControlNetModel for inpainting
    ],
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type='sde-dpmsolver++',
    euler_at_final=True,
)
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
pipe.to('cuda')
pipe.load_textual_inversion(
    ['easynegative.safetensors', './FastNegativeV2.pt'],
    token=['wtf0', 'wtf1'],
)

# sample
output_image = pipe(
    image=[
        depth_control,    # depth conditioning image
        inpaint_control,  # tensor from make_inpaint_condition(...)
    ],
    prompt=prompt,
    negative_prompt='wtf0, wtf1, dof, dummy, mannequin',
    width=renderWidth,
    height=renderHeight,
    num_inference_steps=num_inference_steps,
    guidance_scale=guidance_scale,
    guidance_rescale=0.5,
    # generator=_generator,
    clip_skip=0,
    controlnet_conditioning_scale=[
        0.0,    # depth disabled for this test
        0.999,
    ],
).images[0]
# output_image = pipe.image_processor.apply_overlay(mask_image1, input_image, output_image)
```
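For reference, `apply_overlay` simply pastes the original pixels back over everything outside the mask after generation, so the preserved region is guaranteed to match the input exactly. A minimal sketch reusing the names from the snippet above (all three arguments are PIL images of the same size):

```python
# Post-process: copy the input pixels back outside the mask so the
# preserved area cannot drift from the original image.
output_image = pipe.image_processor.apply_overlay(mask_image1, input_image, output_image)
output_image.save("result.png")
```

Note this is purely cosmetic post-processing: it hides changes in the preserved area rather than preventing them during denoising.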
---
I'm literally using the basics:
```python
from diffusers.utils import load_image

base = load_image("original.jpeg")
mask = load_image("mask.png")

inpaint = pipe(
    prompt,
    width=1024,
    height=1280,
    image=base,
    mask_image=mask,
    guidance_scale=10,
    strength=0.99,
    num_inference_steps=50,
).images[0]
```
Also, since I don't see the need for a ControlNet, I'm using just `StableDiffusionXLInpaintPipeline`. Are you using an inpainting model?
Also, I'm using SDXL, which is why the quality is better too, even though I reduced the image size a bit so it doesn't take too long to generate.
Edit: I now see from your edit that you're not using an inpainting model; that's probably your solution.
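For completeness, a minimal sketch of how that pipeline could be created; the checkpoint name is an assumption (the thread doesn't name one), and any SDXL inpainting checkpoint works:

```python
import torch
from diffusers import StableDiffusionXLInpaintPipeline

# "diffusers/stable-diffusion-xl-1.0-inpainting-0.1" is an assumed
# checkpoint; substitute whichever SDXL inpainting model you use.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")
```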
---
> Edit: I now see from your edit that you're not using an inpainting model; that's probably your solution.
I did use an inpainting model, but the area that should stay unchanged still changed. After I tested with sd-webui (text2image + inpainting ControlNet), I switched to this approach. I did try SD 1.5 inpainting and SDXL inpainting, but the results were degraded. Anyway, I will try again.
*(image: SD 1.5 inpainting with ControlNet depth map)*
*(image: SD 1.5 inpainting without ControlNet depth map)*
---
This is my test switching just the model to SD 1.5
So I think that's the difference between SD 1.5 and SDXL: the model is less capable of blending the background. This should also be expected; I had to lower the resolution to 512x640, which is half of what I used for SDXL.
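For reference, the SD 1.5 version of the same test only swaps the checkpoint and halves the resolution. A minimal sketch, where the runwayml checkpoint is an assumption for "SD 1.5 inpainting" (the comment doesn't name the exact model):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# "runwayml/stable-diffusion-inpainting" is an assumed SD 1.5
# inpainting checkpoint; substitute the one you actually use.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

base = load_image("original.jpeg")
mask = load_image("mask.png")

prompt = "your prompt here"  # placeholder; use the same prompt as the SDXL test

# 512x640 is half the SDXL test resolution; SD 1.5 was trained near 512px.
inpaint = pipe(
    prompt,
    width=512,
    height=640,
    image=base,
    mask_image=mask,
    guidance_scale=10,
    strength=0.99,
    num_inference_steps=50,
).images[0]
```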