
Comments (36)

Dawgmastah avatar Dawgmastah commented on July 19, 2024 4

I understand this new mask might give better results in some cases, but maybe we should separate mask/inpainting?
Or add a way to pick between a hard mask and a smart one?

Let me illustrate with some pics I made yesterday of a friend, where I want him to have a different shirt, a slightly changed setting, and holding a wine glass:

Base image

160052694_138121522143036_8256540069451018491_n

Mask

Mask

Result Before:

download

But after the new changes this is the result:

result

Too many changes (and subtle degradation of the image) on the masked area.

In images where the subject's face is smaller and facing forward, the effect is greater, with noise and jumbled eyes and noses.

example:

faceDistort

(You can reproduce this by grabbing a base picture, masking the face, and trying to modify the scene around the smaller, front-facing masked subject.)

from stable-diffusion.

TingTingin avatar TingTingin commented on July 19, 2024 3

The reverse happens with "regenerate only masked area", which IMO is worse. Using the mask or the areas around it to improve blending makes more sense in "keep masked area" mode, even if I still don't agree with the change. But why does the left of the image need to have its quality degraded because you made a change on the right side, when in "regenerate only masked area" mode?

ORG IMAGE
asian org

MASK AND SETTINGS
screenshot
https://user-images.githubusercontent.com/36141041/187972595-6f547575-b6f0-46f8-a538-c72c68f94810.mp4

IMAGE AFTER HAND EDIT
00508-50_k_lms_1733069044_asian_woman_arm

This was with just one change in "regenerate only masked area". As you can see, the face quality is significantly degraded even though no changes were made to that area.


gmaxwell avatar gmaxwell commented on July 19, 2024 3

Found my way here to report that masking in master works much less well than c0c2a7c. Having more control over masking would be good. Prior to finding your repo I was just hacking the source code to add in masking, and often found I had to tweak it a fair bit.

Even under the old code I was often masking softer and narrower to improve blending, then going back and gimping the original parts back in to recover.

Ideally the original-image feedback loop would have an adjustment that let it follow the colour/shading changes while preserving details, and an adjustment for how much. I think that would eliminate most of the blending artifacts.


thojmr avatar thojmr commented on July 19, 2024 2

Like others above, I would suggest the default be to only affect the masked area. People are going to expect the mask to ONLY affect the masked area, or the reverse. Making some hybrid mask logic the default is only going to cause a ton of new issue tickets down the road.

An unfortunate side effect of altering the whole image is blurring of text. If your image has any text in it, RIP text.


TingTingin avatar TingTingin commented on July 19, 2024 1

I've noticed this too. I think the quality of those sections is dropping, which is confusing since I was under the impression that it simply copied the pixels from that area since they were unchanged. I don't believe this happened in the last version. Don't know if this is related, but here's an example of loopback generations on the current build:

ImageGlass_irBHXjP3Yt.mp4


hlky avatar hlky commented on July 19, 2024

Can't really tell you more without seeing the images, but it could just be that the effect you want needs to be done in a different way. Making it easier to actually do what you want given everything that can affect the output is something me and @altryne are working on.


ddgond avatar ddgond commented on July 19, 2024

I bring this up because I was using it yesterday in this way without this issue, and after pulling from the main branch today was being affected by it. I can post example images of what I did yesterday vs today in a bit, and have so far checked that commit ef36e0a works as expected (this was the last commit message I remember seeing when I pulled originally).


ddgond avatar ddgond commented on July 19, 2024

Here you can see my workflow yesterday, where I masked certain parts of the image to change them, but always keeping the face the same (looking at the eyes makes this obvious).
Here is the same workflow after pulling today, again masking the parts I wished to replace. The face is always marked as an area that should not be changed, yet is notably changed by the time you get to the final image, and you can even see changes between individual steps.


ddgond avatar ddgond commented on July 19, 2024

I'm working with the earlier commit for now, but this seems like an important issue to bring up when it's such a useful feature.


anon-hlhl avatar anon-hlhl commented on July 19, 2024

This is a consequence of my PR, Inpainting #308. The unmasked portions of the initial image now go through the final sampler pass.

The remedy for this would be to re-apply the mask after the sampling is complete. This is how masking used to work, but I removed it in favour of the significantly better edge blending that results from this final pass.

Sygil-Dev/sygil-webui@6a2c7b6
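The re-apply step described above amounts to a straight composite of the sampler output over the original. A minimal numpy sketch, assuming an HxWx3 image and a boolean HxW mask (the function name and mask convention are mine, not the repo's actual code):

```python
import numpy as np

def paste_back_unmasked(original, sampled, regen_mask):
    """Restore the untouched parts of `original` over the sampler output,
    so only the region marked for regeneration actually changes.
    regen_mask: HxW bool array, True = regenerate that pixel.
    Illustrative sketch of the pre-#308 behaviour."""
    # broadcast the mask over the channel axis and select per pixel
    return np.where(regen_mask[..., None], sampled, original)
```

With this composite, pixels outside the mask are bit-identical to the input, which is what avoids the cumulative degradation people are reporting, at the cost of the hard seam the final pass was meant to fix.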


TingTingin avatar TingTingin commented on July 19, 2024

Maybe I'm confused, but I don't understand why the entire image has to be degraded like this. If you look at the video, the only part of the image with any change is the masked face, and yet the entire unmasked image, despite being unchanged, degrades in quality significantly. Why not save the original image's unmasked parts and reapply them after?

if the purpose was

removed in favour of the significantly better edge blending as a result of this final pass.

Then why not only sample the parts close to the edges, or, when adding back the original image, add everything but the parts close to the edges?

Regardless, if you want to make multiple changes to an image this kills it, and it is especially noticeable in the eyes, where even a single change can be visible.
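Sampling only the parts close to the edges, as suggested above, amounts to computing a thin band just outside the mask boundary and restricting the final pass to the mask plus that band. A rough numpy sketch under that assumption (helper names are mine, not from the repo):

```python
import numpy as np

def dilate(mask, steps=1):
    """Grow a boolean mask outward by `steps` pixels (4-connected)."""
    m = mask.copy()
    for _ in range(steps):
        g = m.copy()
        g[1:, :] |= m[:-1, :]   # shift down
        g[:-1, :] |= m[1:, :]   # shift up
        g[:, 1:] |= m[:, :-1]   # shift right
        g[:, :-1] |= m[:, 1:]   # shift left
        m = g
    return m

def edge_band(mask, width=3):
    """Pixels within `width` of the mask but outside it. Only this band
    (plus the mask itself) would go through the final sampler pass;
    everything further out is restored verbatim."""
    return dilate(mask, width) & ~mask
```

Everything outside `mask | edge_band(mask)` could then be copied back from the original, so a speck on the left edge could never touch quality on the right.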


TingTingin avatar TingTingin commented on July 19, 2024

an extreme example of using loopback with 10 images

example.mp4


anon-hlhl avatar anon-hlhl commented on July 19, 2024

When I originally implemented the inpainting functionality, I was not imagining its use as part of an iterative/loopback process where the lossy behaviour could be amplified with each iteration, and I didn't consider the degradation a significant problem compared to the improvements it provided to inpainting.

I think if this was previously used as part of a workflow where this degradation is unacceptable, then perhaps inpainting & masking should be different functions, or there should be an option to re-apply the unmasked areas verbatim as it previously worked, or this could simply be assumed if the loopback option is selected.


TingTingin avatar TingTingin commented on July 19, 2024

The loopback was more meant to show an extreme example of what happens when you do multiple edits. Even after one change there's a color shift and the eyes are slightly worse.

photo

00316-50_k_lms_1697107831_window

Is there a reason the original image data can't be added back to the final image unchanged, with some distance or some blur/feathering, as it worked before, but with some additional distance to accommodate these new changes? As you can see in the vid, a tiny speck on the left of the screen was enough to degrade the quality on the right, even though those parts of the image were otherwise unchanged. I can't imagine any workflow where this would be desirable behavior.
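The blur/feathering idea above could be sketched as softening the regenerate-mask before pasting the original back, so the transition happens over a few pixels instead of a hard seam. A minimal numpy sketch of that assumption (function names and the box-blur feather are mine; the repo may blend differently):

```python
import numpy as np

def feather(mask, radius=2):
    """Soften a binary mask with a separable box blur so the paste-back
    blends across ~radius pixels instead of a hard edge."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    m = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0,
                            mask.astype(np.float32))
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, m)

def blend_back(original, sampled, regen_mask, radius=2):
    """Alpha-blend: sampler output inside the mask, original outside,
    with a feathered transition at the boundary."""
    a = feather(regen_mask, radius)[..., None]   # per-pixel alpha
    return a * sampled + (1.0 - a) * original
```

Pixels far from the mask get alpha 0 and stay byte-identical to the original, which is exactly the "unchanged areas stay unchanged" property being asked for.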


1blackbar avatar 1blackbar commented on July 19, 2024

Yup, it's a thing, I noticed that too. Unmasked stuff should be left intact; it's a bug. Just store the original image + mask, then paste it back on top of the synthesized result, aligned.


anon-hlhl avatar anon-hlhl commented on July 19, 2024

It was very intentional, see this example, where I masked some cat eyes and asked for a painting:

The inpainting behaviour, where the original masked eyes go through the sampler:
image

Or the inpainting, but with the original preserve behaviour, where the original masked sections are re-applied after sampling:
image

The second example shows where too much of the original photographic style is preserved, particularly around where the edges blend in, whereas the first example shows the colour and style matching the regenerated sections more closely. The effect is more noticeable when using lower Sampler Steps values, such as when used with euler ancestral.

Nevertheless, I appreciate that everyone likes having the choice, so I implemented a toggle here:
anon-hlhl/stable-diffusion-webui@a38dfe2

I'll await hlky's opinion before I make a PR.


TingTingin avatar TingTingin commented on July 19, 2024

Or the inpainting, but with the original preserve behaviour, where the original masked sections are re-applied after sampling:

Do you mean to say unmasked here? The problem isn't with the masked areas but the unmasked ones. I'm especially asking as I don't see any quality degradation in the unmasked/non-eye areas of the first image vs the second image, which I imagine is re-adding the unmasked sections of the original. With the current version there's a difference in quality even on the first change; if it worked how it does in your first image, there'd be no need for a change.


anon-hlhl avatar anon-hlhl commented on July 19, 2024

do you mean to say unmasked here?

No, I masked the eyes and selected the "Keep masked area" option. But you're right, it is confusing terminology as one is essentially inverted.


TingTingin avatar TingTingin commented on July 19, 2024

Oh, I see what you mean now: the cat was with the "Keep masked area" setting. I've noticed this quality degradation far more with the "regenerate only masked area" setting, since in that mode the non-masked areas weren't expected to undergo any change. That is what my comments were more about.


ddgond avatar ddgond commented on July 19, 2024

@hlky can we get some clarification on the expected behavior here? In my experience with other software masking does not tend to apply effects partially (or at all) to the masked area, and this current behavior is confusing and limiting.


1blackbar avatar 1blackbar commented on July 19, 2024

In other words: this should never happen, it's a bug. Who would want to degrade the original image?


Dawgmastah avatar Dawgmastah commented on July 19, 2024

I can confirm significant artifacting with this change in masked areas of faces.


Dawgmastah avatar Dawgmastah commented on July 19, 2024

It was very intentional, see this example, where I masked some cat eyes and asked for a painting:

The inpainting behaviour, where the original masked eyes go through the sampler: image

Or the inpainting, but with the original preserve behaviour, where the original masked sections are re-applied after sampling: image

The second example shows where too much of the original photographic style is preserved, particularly around where the edges blend in, whereas the first example shows the colour and style matching the regenerated sections more closely. The effect is more noticeable when using lower Sampler Steps values, such as when used with euler ancestral.

Nevertheless, I appreciate that everyone likes having the choice, so I implemented a toggle here: anon-hlhl/stable-diffusion-webui@a38dfe2

I'll await hlky's opinion before I make a PR.

I tried the code and the behavior remains the same.
(I also don't see a toggle in the UI; I forced it in the code and even forced the AND to execute the new code, and the result, with artifacted masked faces, remains.)

Maybe I'm doing something wrong?


1blackbar avatar 1blackbar commented on July 19, 2024

Yeah, sadly the masking and inpainting are not usable at the moment; they make changes to everything in the image. I'd use the repo from before that code was added, but I'm not sure when that happened.


ddgond avatar ddgond commented on July 19, 2024

@1blackbar I mentioned this earlier but if you git checkout ef36e0a you should get a commit with working masking and inpainting.


1blackbar avatar 1blackbar commented on July 19, 2024

Yeah, but k_euler is so slow now. Not sure what's going on with this repo; too many devs at once?


hlky avatar hlky commented on July 19, 2024

I just ran my benchmark for k_euler and there is no difference in speed.


hlky avatar hlky commented on July 19, 2024

Update to main. img2img is working with masking. I think the default settings need to be changed for cfg scale, and someone mentioned the default mask mode should be "regenerate masked area", as that's what most people expect to happen.

Web capture_1-9-2022_145159_127 0 0 1

00443-50_k_euler_559845130_anime_girl_holding_a_giant_NVIDIA_Tesla_A100_GPU_graphics_card,_Anime_Blu-Ray_boxart,_super_high_deta

00169-50_k_euler_559845130_anime_girl_holding_a_giant_NVIDIA_Tesla_A100_GPU_graphics_card,_Anime_Blu-Ray_boxart,_super_high_deta


ddgond avatar ddgond commented on July 19, 2024

@hlky I don't think that's the original issue we were talking about (though I do agree that this is more intuitive default behavior for that toggle). Open those two images in separate tabs and switch rapidly between them, and you'll see the entire image gets changed. This causes problems if, for example, you want to change something in the background or a character's clothing, as the face will start to get artifacts.


altryne avatar altryne commented on July 19, 2024

Folks, we hear you, masking should be awesome and work.
@ddgond have you tried disabling the loopback mechanism?


ddgond avatar ddgond commented on July 19, 2024

I am not using the loopback feature; I re-feed the output back into img2img manually, going one step at a time.


TingTingin avatar TingTingin commented on July 19, 2024

It's not about loopback; that just highlights the problem. If you select "regenerate only masked area", the quality of unselected areas drops. Even in the image that hlky posted you can see that the quality dropped.


Dawgmastah avatar Dawgmastah commented on July 19, 2024

Also, a related issue:
When upscaling using the "Upscale images using RealESRGAN" toggle, it will upscale the raw output image without the mask applied.
For example, if I'm using the mask as above, the upscaled image will be a distorted, unknown face, not an upscaling of the final image.


siriux avatar siriux commented on July 19, 2024

Maybe an option is to include a parameter that controls how much of the image outside of the mask can be affected. Set it to 0 to keep everything outside the mask intact (the old behavior), or to something larger to expand and blur a second mask applied only to the final diffusion pass (infinity would be equivalent to the new behavior).

Or, for better control, we could instead include two parameters, one for expansion and another for blur, and show this second mask in a different color so the user can see what's going to be affected.

This would bring the best of both worlds in a single configurable tool.
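The two-parameter proposal above could be sketched as: dilate the user's mask by the expansion amount, then blur the result, with expansion=0 and blur=0 reproducing the old hard-mask behaviour. A numpy sketch under those assumptions (the function and parameter names are made up for illustration, not the repo's API):

```python
import numpy as np

def secondary_mask(regen_mask, expand=0, blur=0):
    """Second mask applied only to the final diffusion pass.
    expand=0, blur=0 keeps everything outside the user's mask intact
    (old behaviour); larger values let a controlled, softened band
    around the mask be re-sampled for blending."""
    m = regen_mask.astype(bool)
    for _ in range(expand):                 # grow outward, 4-connected
        g = m.copy()
        g[1:, :] |= m[:-1, :]
        g[:-1, :] |= m[1:, :]
        g[:, 1:] |= m[:, :-1]
        g[:, :-1] |= m[:, 1:]
        m = g
    out = m.astype(np.float32)
    if blur:                                # separable box blur
        k = np.ones(2 * blur + 1) / (2 * blur + 1)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, out)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out
```

Rendering this secondary mask in a different color in the UI, as suggested, would then just be a matter of overlaying `secondary_mask(...) - regen_mask` on the preview.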


Dawgmastah avatar Dawgmastah commented on July 19, 2024

@hlky, what are your thoughts on the issue?

This is the only thing keeping me from upgrading to the newest shiny versions, as my workflow heavily uses masking


trufty avatar trufty commented on July 19, 2024

This is the only thing keeping me from upgrading to the newest shiny versions,...

Same here, because I know if I upgrade I'll once again have to revert the parts of the code that cause the whole image to change when using a mask.

