oneThousand1000 / HairMapper
(CVPR 2022) HairMapper: Removing Hair from Portraits Using GANs.
First of all, thank you for your work.
When using the separation boundary from InterfaceGAN, is the gender score 1 for males and 0 for females? Do I need to train a separate gender classifier for scoring? The hair score is produced by a ResNet-50 classifier; do you also binarize its final score to 0 or 1?
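For what it's worth, my understanding of InterFaceGAN-style scoring is that the attribute score of a latent code is its signed distance to the learned separation hyperplane, and binarizing by sign gives the 0/1 label. A minimal sketch (assuming a boundary vector normal to the hyperplane; not the authors' exact code):

```python
import numpy as np

def boundary_score(latent, boundary):
    """Signed distance of a latent code to the unit-normalized boundary."""
    boundary = boundary / np.linalg.norm(boundary)
    return float(np.dot(latent, boundary))

def binarize(score):
    """Map a raw score to a 0/1 label by sign."""
    return 1 if score > 0 else 0
```

A code on the positive side of the hyperplane gets label 1, the negative side label 0; the raw (continuous) score is what a ranking or filtering step would use.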
Good work!
Could you say more about blending the clean face with hairstyle templates?
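In case it helps other readers: one common approach (an assumption, not necessarily the authors' method) is a mask-based alpha blend of a hairstyle template over the bald result:

```python
import numpy as np

def blend_hair(clean_face, template, hair_mask):
    """Paste the template's hair region onto the clean (bald) face.

    clean_face and template are HxWx3 uint8 images; hair_mask is an
    HxW mask in [0, 255] marking the template's hair pixels.
    """
    alpha = hair_mask.astype(np.float32)[..., None] / 255.0
    out = template.astype(np.float32) * alpha + \
          clean_face.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```

A soft (feathered) mask at the hairline usually looks better than a hard 0/255 one.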
It's a fun piece of work!
Hey @oneThousand1000 ,
Sorry to bother you, but the result I'm getting shows large differences in the color map.
Here is one reference.
I thought it was due to the RGB image mode, so I checked by converting to BGR, but that also doesn't match the original image's color and lighting.
Any suggestions?
Hey @oneThousand1000 ,
As you suggested, the 3D face reconstruction should be improved.
Could you tell me which model you used to test the 3D face reconstruction?
Also, please suggest the steps to follow, if possible.
Thanks
I want to ask how the file StyleGAN2-ada-Generator.pth was generated. I have run the stylegan2-ada-pytorch project, and the weight file it produces is a .pkl; using the .pkl directly does not seem to work. How can I generate StyleGAN2-ada-Generator.pth from my own dataset?
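For anyone hitting the same question, one plausible route is to unpickle the checkpoint and save one network's state_dict as a .pth file. This is a sketch under assumptions: it assumes the .pkl unpickles to a dict of networks keyed by 'G', 'D', 'G_ema' (the stylegan2-ada-pytorch layout); for the official checkpoints you would open the file through that repo's `legacy.load_network_pkl` instead of plain `pickle`, since the pickled classes live in its codebase.

```python
import pickle
import torch

def pkl_to_pth(pkl_path, pth_path, key='G_ema'):
    """Extract one network from a pickled dict and save its state_dict.

    key='G_ema' (the moving-average generator) is an assumption based on
    the stylegan2-ada-pytorch checkpoint layout.
    """
    with open(pkl_path, 'rb') as f:
        nets = pickle.load(f)
    torch.save(nets[key].state_dict(), pth_path)
```

Whether HairMapper's loader expects exactly this state_dict layout is something only the repo's loading code can confirm.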
Hi!
In reading your code, I found a detail that may not correspond to the paper. It is insignificant, but if possible I would still like to know whether it reflects a deliberate choice.
Specifically, in './styleGAN2_ada_model/stylegan2_ada_generator.py', near line 384, layers [7-17] (counting from 0) of the edited W+ are replaced by the original W+; that is, only the first 7 layers of W+ are actually manipulated.
Is this inconsistent with the change to the first 8 layers mentioned in the paper, or am I making a low-level mistake in reading the code?
Again, this is insignificant. I just want to know whether there is a particular reasoning behind it, and whether the open-source pre-trained HairMapper was also trained this way.
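For reference, the layer-mixing step being discussed can be sketched as follows (the (batch, 18, 512) shape and the cut-off k are illustrative assumptions; the question above is precisely whether k is 7 or 8):

```python
import torch

def mix_wplus(w_orig, w_edit, k=8):
    """Keep the first k layers of the edited W+ and revert the rest.

    w_orig, w_edit: (batch, n_layers, 512) W+ codes. Layers k..n-1 of the
    result are copied back from the original code, so only the first k
    layers carry the edit.
    """
    w = w_edit.clone()
    w[:, k:, :] = w_orig[:, k:, :]
    return w
```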
How can I train so that the shoulder and neck region are reconstructed well after using the image-alignment code? Please see the attached ORIGINAL and GENERATED reference images. Fantastic work, by the way, but I needed to train with a crop ratio above 1 so that it crops a larger region, and the results were weird. How can I train with a ratio above 1? Even going above 0.5 can raise an error because the crop box extends beyond the image boundaries.
In short, how do I train to remove more hair and still reconstruct well when I enlarge the crop boundaries?
Was the model simply not trained much on images that reach the shoulders? Is that the issue? If so, how can I train on images down to chest level so that the reconstruction is good?
Your project needs the support of several additional models; could you consider providing an online demo for testing?
Hi, Thanks for your nice work!
I would like to ask why you commented out id_loss during the optimization (diffuse) step. Are the pixel-wise reconstruction loss and the perceptual loss enough to preserve identity?
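For context, the two remaining losses can be sketched as below (the weights and the feature extractor are placeholders, not the authors' exact setup; `feat_fn` stands in for a pretrained network such as VGG):

```python
import torch

def diffuse_loss(x, y, feat_fn, w_pix=1.0, w_perc=5e-5):
    """Pixel-wise reconstruction loss plus a perceptual (feature-space) loss.

    x, y: image tensors; feat_fn maps an image to a feature tensor.
    """
    pix = torch.mean((x - y) ** 2)                    # pixel-wise L2
    perc = torch.mean((feat_fn(x) - feat_fn(y)) ** 2) # feature-space L2
    return w_pix * pix + w_perc * perc
```

An id_loss would add a third term computed on face-recognition embeddings; the question is whether these two terms alone suffice.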
Dear authors, in Step 1 of Testing, can I only use the alignment code in the link you provide (the image-align code)? I used that code to crop my real images, but for some real images shot from oblique angles, part of the head gets cut off. If I use other alignment code, will it affect HairMapper?
Hi there!
Baldification should work on full-body images, not only on cropped faces. I am aware that you trained only on cropped faces; the thing is, all the contributors have worked only on cropped faces, and the output is not really exciting.
Could you let me know how to make baldification work on a full image?
Thank you.
Thank you for your outstanding work!
I noticed that e4e is trained on StyleGAN2, while the bald latent code output by HairMapper is fed into StyleGAN2-ada. Are the latent spaces of these two generators the same, or does HairMapper implicitly map between the two latent spaces? Could you also tell me why you use StyleGAN2-ada instead of StyleGAN2?
I ask because I want to test the pre-trained HairMapper directly on the StyleGAN2 generator; however, I haven't gotten around to trying it yet.
Hi researchers of HairMapper,
Could you please give me some hints on how to blend other hairstyles, as in Figure 1 on the first page of the paper?
Thank you!
Thank you for your excellent work.
I wonder what D_0 and D_noise are for. Why do you generate two sets of random images?
Dear all,
Is there a way to parallelize the mapper code? I.e., if I have a batch of latent codes, is it possible to run the line below on a batch?
mapper(mapper_input_tensor)
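If the mapper is built from standard linear layers, it should accept a batched tensor directly, since those layers broadcast over leading dimensions. A toy sketch with a hypothetical stand-in for LevelMapper (the real architecture differs; shapes are assumptions):

```python
import torch
import torch.nn as nn

class ToyMapper(nn.Module):
    """Stand-in for LevelMapper: nn.Linear already applies over every
    leading dimension, so stacking codes along dim 0 batches for free."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, x):              # x: (batch, n_layers, dim)
        return self.net(x)

mapper = ToyMapper().eval()
batch = torch.randn(4, 18, 512)        # four W+ codes stacked along dim 0
with torch.no_grad():
    out = mapper(batch)                # one call processes the whole batch
```

Whether the actual LevelMapper is batch-safe depends on its internals (e.g. any per-sample reshapes); testing with a batch of 2 first is a cheap check.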
Thanks!
Hey @oneThousand1000 ,
Thanks for this amazing repo.
I was trying the provided Google Colab and ran into the AssertionError below,
WARNING:stylegan2_ada_generator:Load model from ../ckpts/StyleGAN2-ada-Generator.pth
WARNING:stylegan2_ada_generator:No pre-trained model will be loaded!
Initializing generator.
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-20-8b3677e0e584> in <module>
3 data_dir = '/content/drive/MyDrive/3DAI/HairMapper/test_data'
4 print(f'Initializing generator.')
----> 5 model = StyleGAN2adaGenerator(model_name, logger=None, truncation_psi=1.0)
6
7 mapper = LevelMapper(input_dim=512).eval().cuda()
../styleGAN2_ada_model/stylegan2_ada_generator.py in __init__(self, model_name, logger, truncation_psi, randomize_noise)
../styleGAN2_ada_model/base_generator.py in __init__(self, model_name, logger)
AssertionError:
And the code cell producing the above error is below,
model_name = 'stylegan2_ada'
latent_space_type = 'wp'
data_dir = '/content/drive/MyDrive/3DAI/HairMapper/test_data'
print(f'Initializing generator.')
model = StyleGAN2adaGenerator(model_name, logger=None, truncation_psi=1.0)
mapper = LevelMapper(input_dim=512).eval().cuda()
ckpt = torch.load('./mapper/checkpoints/final/best_model.pt')
alpha = float(ckpt['alpha']) * 1.2
mapper.load_state_dict(ckpt['state_dict'], strict=True)
kwargs = {'latent_space_type': latent_space_type}
parsingNet = get_parsingNet(save_pth='./ckpts/face_parsing.pth')
inverter = InverterRemoveHair(
model_name,
Generator=model,
learning_rate=0.01,
reconstruction_loss_weight=1.0,
perceptual_loss_weight=5e-5,
truncation_psi=1.0,
logger=None
)
code_dir = os.path.join(data_dir, 'code')
origin_img_dir = os.path.join(data_dir, 'actress_imgs')
res_dir = os.path.join(data_dir, 'mapper_res')
os.makedirs(res_dir, exist_ok=True)
Please suggest any way to tackle this issue.
Thanks once again.
Hi, another small question :). I tried to verify the consistency of HairMapper and found that the edited images differ from the edits of horizontally flipped inputs; that is, the models used in HairMapper (maybe the encoder) are not left-right symmetric.
More specifically, defining the HairMapper edit as H(), any component of face parsing as Seg(), and cv2.flip(*, 1) as Flip():
H(aligned_image) and Flip(H(Flip(aligned_image))) are rather different, and Seg(H(aligned_image)) and Seg(Flip(H(Flip(aligned_image)))) differ as well.
Maybe more horizontal-flip augmentation could fix it?
In the parsing loss, hair is defined as shown, but in train_mapper.py this loss is not multiplied by the hair mask. I am wondering why:
loss_l2_latent = self.latent_l2_loss(w_hat, res_w)
loss_dict['loss_l2_latent'] = float(loss_l2_latent)
loss += loss_l2_latent * self.latent_l2_lambda

loss_l2_img = torch.mean((res_x - x_hat) ** 2, dim=[0, 1, 2, 3])  # not multiplied by the hair mask
loss_dict['loss_l2_res_img'] = float(loss_l2_img)
loss += loss_l2_img * self.img_l2_lambda_res

loss_l2_img = torch.mean(((origin_img - x_hat) * mask) ** 2, dim=[0, 1, 2, 3])
loss_dict['loss_l2_origin_img'] = float(loss_l2_img)
loss += loss_l2_img * self.img_l2_lambda_origin
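For comparison, the masked variant the question asks about might look like this (a sketch, not the repository's code):

```python
import torch

def masked_l2(pred, target, hair_mask):
    """Pixel-wise L2 restricted to the region where hair_mask is 1.

    pred, target: (N, C, H, W) images; hair_mask: (N, 1, H, W) in {0, 1},
    broadcast over the channel dimension.
    """
    return torch.mean(((pred - target) * hair_mask) ** 2)
```

Note that leaving the loss unmasked penalizes differences over the whole image, not just the hair region, which may well be intentional for the res-image term.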
Hi,
thanks for the nice work! What about beard removal? Can this work be adapted to that?
Thanks!
L.
Hi, all the cv2.resize calls on the face-parsing mask pass the size in the wrong order (height x width); it should be (width x height), as in cv2.resize(hair_mask, (img_shape[1], img_shape[0])).