
hairmapper's Issues

separation boundary

First of all, thank you for your work.
When using the separation boundaries from InterFaceGAN, is the gender score 1 for boys and 0 for girls? Do I need to train a separate gender classifier to produce the scores? The hair score is produced by a ResNet-50 classifier; do you also binarize its final score to 0 or 1?
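For reference, a minimal sketch of the InterFaceGAN-style boundary fitting this question is about; latent_codes (an N×512 array) and the binary labels are assumed to be prepared already, so treat the names as placeholders:

    import numpy as np
    from sklearn import svm

    # Fit a linear SVM on latent codes with binary labels (e.g. 1 = male, 0 = female).
    clf = svm.SVC(kernel='linear')
    clf.fit(latent_codes, labels)

    # The unit normal of the separating hyperplane is the editing direction.
    boundary = clf.coef_.reshape(1, -1).astype(np.float32)
    boundary /= np.linalg.norm(boundary)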

Output Color is changed

Hey @oneThousand1000 ,
Sorry to bother you, but the result I'm getting differs a lot from the original in its color map.
Here is one reference:
[image: aish_color_change]

I thought it might be due to the RGB image mode, so I checked by converting to BGR, but that also doesn't match the original image's color and lighting.

Any suggestions?
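For anyone hitting the same thing, a channel-order mix-up is one common cause of a global color shift; a minimal check, assuming the arrays in play come from OpenCV/PIL (rgb_image is a placeholder name):

    import cv2

    # OpenCV reads/writes BGR, while PIL and matplotlib use RGB; writing an RGB
    # array with cv2.imwrite swaps red and blue, producing exactly this kind of shift.
    bgr = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
    cv2.imwrite('result.png', bgr)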

How is the file StyleGAN2-ada-Generator.pth generated?

I want to ask a question: how is the file StyleGAN2-ada-Generator.pth generated? I have run the stylegan2-ada-pytorch project, and the weight file it produces is a .pkl; using the .pkl directly does not seem to work. How can I generate StyleGAN2-ada-Generator.pth from my own dataset?
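For reference, a minimal conversion sketch using the helpers that ship with stylegan2-ada-pytorch (run from that repo's root); whether HairMapper's loader accepts a state dict saved this way is an assumption to verify, and the .pkl filename is a placeholder:

    import torch
    import dnnlib  # both dnnlib and legacy ship with stylegan2-ada-pytorch
    import legacy

    # Load the training pickle and pull out the EMA generator.
    with dnnlib.util.open_url('network-snapshot.pkl') as f:
        G = legacy.load_network_pkl(f)['G_ema']

    # Save its weights as a .pth state dict.
    torch.save(G.state_dict(), 'StyleGAN2-ada-Generator.pth')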

About 'keeping the rest of the layers unchanged'.

Hi!
In reading your code, I found a detail that may not correspond to the paper. The detail is minor, but if possible I'd still like to know whether there is a specific reason behind it.

Specifically, in '.\styleGAN2_ada_model\stylegan2_ada_generator.py', near line 384,
[image: code screenshot]
layers 7-17 (counting from 0) of the edited W+ are replaced by the original W+; that is, only the first 7 layers of W+ are actually manipulated.

Is this inconsistent with the paper, which mentions changing the first 8 layers, or am I simply misreading the code?

Again, this is minor. I just want to know whether it is intentional and whether the open-source pre-trained HairMapper was also trained this way.
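For readers, a minimal sketch of the layer swap being described (shapes assumed for an 18-layer W+ of a 1024×1024 generator; the variable names are placeholders):

    import torch

    original_wp = torch.randn(1, 18, 512)                    # inverted W+ code (assumed shape)
    edited_wp = original_wp + 0.1 * torch.randn(1, 18, 512)  # stand-in for the mapper's edit

    # Restore layers 7..17 from the original, so only layers 0..6
    # (the first 7) keep the edit -- versus the first 8 stated in the paper.
    edited_wp[:, 7:, :] = original_wp[:, 7:, :]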

Very Important Issue - Leftover hair after using the image-align code. How can I train with a larger crop area (down to chest level) so that the neck, clothes, and shoulders get a good reconstruction?

How can I train so that the shoulder and neck get a good reconstruction after using the image-aligning code? Please find the reference images of ORIGINAL and GENERATED below. By the way, fantastic work, but I needed to train with a ratio value greater than 1 so that it crops a larger area, and the results are weird. How can I train with a ratio greater than 1? Even going above 0.5 can give an error because the crop box extends too far past the image boundaries.

In short, how can I train to remove more hair and reconstruct properly if I enlarge the crop boundaries?

Was the model itself not trained much on images that extend to the shoulders? Is that the issue? If so, how can I train on images down to CHEST level so that they get a good reconstruction?

Please let me know.
[images: thumb_09c8c98KA002_1_01_res, thumb_rangriti_valleyo3611ss20blu_1_01_res, thumb_09c8c98KA002_1, thumb_rangriti_valleyo3611ss20blu_1]

id_loss is not active

Hi, Thanks for your nice work!

I would like to ask why you commented out id_loss during the optimization (diffuse) step. Are the pixel-wise reconstruction loss and the perceptual loss enough to preserve identity?

How to test other images?

Dear authors, in step 1 of Testing, must I use only the alignment code in the link you provide (the image-align code)? I used that code to crop my real images, but for some real images shot from certain angles, part of the head gets cut off. If I use other alignment code, will it affect HairMapper?

Bald should work on full image

Hi there!

Balding should work on full-body images, not only on cropped faces. I'm aware that you trained only on cropped faces, but the thing is, all the contributors have worked only on cropped faces, and the output isn't very exciting because of it.

Could you let me know how to make balding work on a full image?

Thank you.

About the latent space of stylegan2 and stylegan2-ada.

Thank you for your outstanding work!
I noticed that e4e is trained on StyleGAN2, while the bald latent code output by HairMapper is fed into StyleGAN2-ada. Are the latent spaces of these two generators the same, or does HairMapper implicitly handle the migration between the two latent spaces? Could you also tell me why you use StyleGAN2-ada instead of StyleGAN2?
I ask because I want to test the pre-trained HairMapper directly on the StyleGAN2 generator; however, I haven't gotten around to trying it yet.

How to blend other hair style?

Hi researchers of HairMapper,
Could you please give me some hints on how to blend in other hairstyles, like Figure 1 on the front page of the paper?
Thank you!

What is D_noise for?

Thank you for your excellent work.
I wonder what D_0 and D_noise are for. Why do you generate two sets of random images?

Running Mapper on Batches

Dear all,
Is there a way to parallelize the mapper code? That is, if I have a batch of latent codes, is it possible to run the line below on a batch?
mapper(mapper_input_tensor)
Thanks!
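For reference, a minimal batching sketch; it assumes the mapper is built from batch-agnostic layers (Linear/activations) and that each latent code has the same shape the single-image path uses (codes and mapper are placeholder names):

    import torch

    # codes: list of per-image latent tensors, each of shape (1, L, 512) (assumed)
    batch = torch.cat(codes, dim=0).cuda()   # -> (N, L, 512)
    with torch.no_grad():
        edited_batch = mapper(batch)         # one forward pass for all N codes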

AssertionError : Stylegan_ada

Hey @oneThousand1000 ,
Thanks for this amazing repo.
I was trying the given Google Colab and ran into the AssertionError below:

WARNING:stylegan2_ada_generator:Load model from ../ckpts/StyleGAN2-ada-Generator.pth
WARNING:stylegan2_ada_generator:No pre-trained model will be loaded!
Initializing generator.
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-20-8b3677e0e584> in <module>
     3 data_dir = '/content/drive/MyDrive/3DAI/HairMapper/test_data'
     4 print(f'Initializing generator.')
----> 5 model = StyleGAN2adaGenerator(model_name, logger=None, truncation_psi=1.0)
     6 
     7 mapper = LevelMapper(input_dim=512).eval().cuda()

../styleGAN2_ada_model/stylegan2_ada_generator.py in __init__(self, model_name, logger, truncation_psi, randomize_noise)

../styleGAN2_ada_model/base_generator.py in __init__(self, model_name, logger)

AssertionError: 

And the code cell producing the above error is below:

model_name = 'stylegan2_ada'
latent_space_type = 'wp'
data_dir = '/content/drive/MyDrive/3DAI/HairMapper/test_data'
print(f'Initializing generator.')
model = StyleGAN2adaGenerator(model_name, logger=None, truncation_psi=1.0)

mapper = LevelMapper(input_dim=512).eval().cuda()
ckpt = torch.load('./mapper/checkpoints/final/best_model.pt')
alpha = float(ckpt['alpha']) * 1.2
mapper.load_state_dict(ckpt['state_dict'], strict=True)
kwargs = {'latent_space_type': latent_space_type}
parsingNet = get_parsingNet(save_pth='./ckpts/face_parsing.pth')
inverter = InverterRemoveHair(
        model_name,
        Generator=model,
        learning_rate=0.01,
        reconstruction_loss_weight=1.0,
        perceptual_loss_weight=5e-5,
        truncation_psi=1.0,
        logger=None
)

code_dir = os.path.join(data_dir, 'code')
origin_img_dir = os.path.join(data_dir, 'actress_imgs')
res_dir = os.path.join(data_dir, 'mapper_res')

os.makedirs(res_dir, exist_ok=True)

Please suggest any way to tackle this issue.
Thanks once again.
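For anyone debugging the same failure: the two warnings above ('Load model from ../ckpts/StyleGAN2-ada-Generator.pth' followed by 'No pre-trained model will be loaded!') suggest the generator weights were not found at the expected path. A minimal sanity check, assuming that is the cause:

    import os

    # Verify the checkpoint exists where the generator looks for it
    # (the path is taken from the warning above).
    ckpt_path = '../ckpts/StyleGAN2-ada-Generator.pth'
    assert os.path.isfile(ckpt_path), f'checkpoint not found: {ckpt_path}'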

Color cast in the result images

Hi, thanks for your nice work!
When I use your inference code, I notice a very obvious color change in some results, as shown below. But in your paper and supplement there is no such problem. Am I doing something wrong?
[image: result with color cast]

Results on horizontal flipped images

Hi, another small question :). I tried to verify the consistency of HairMapper and found that the edited images differ for horizontally flipped inputs; that is, the models used in HairMapper (maybe the encoder) are not left-right symmetric.

More specifically, define the HairMapper edit as H(), any component of face parsing as Seg(), and cv2.flip(*, 1) as Flip(). Then H(aligned_image) and Flip(H(Flip(aligned_image))) are rather different, and Seg(H(aligned_image)) and Seg(Flip(H(Flip(aligned_image)))) differ as well.

Maybe more horizontal-flip augmentation could fix it?
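For reference, a minimal check of the symmetry gap described above; H is a placeholder for the full HairMapper pipeline, assumed here to map an image array to an image array:

    import cv2
    import numpy as np

    def flip(img):
        return cv2.flip(img, 1)  # horizontal flip

    def flip_consistency_gap(H, aligned_image):
        # Compare H(x) with Flip(H(Flip(x))); zero would mean perfect symmetry.
        a = H(aligned_image).astype(np.float32)
        b = flip(H(flip(aligned_image))).astype(np.float32)
        return float(np.abs(a - b).mean())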

About loss hair and loss face different from paper

In the paper, the hair loss is defined as

[equation image from the paper]

but in train_mapper.py this loss is not multiplied by the hair mask; I'm wondering why:

    loss_l2_latent = self.latent_l2_loss(w_hat, res_w)
    loss_dict['loss_l2_latent'] = float(loss_l2_latent)
    loss += loss_l2_latent * self.latent_l2_lambda

    loss_l2_img = torch.mean(((res_x - x_hat)) ** 2, dim=[0, 1, 2, 3])  ### do not dot with hair mask
    loss_dict['loss_l2_res_img'] = float(loss_l2_img)
    loss += loss_l2_img * self.img_l2_lambda_res

    loss_l2_img = torch.mean(((origin_img - x_hat) * mask) ** 2, dim=[0, 1, 2, 3])
    loss_dict['loss_l2_origin_img'] = float(loss_l2_img)
    loss += loss_l2_img * self.img_l2_lambda_origin
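For comparison, a masked variant matching the paper's definition would look something like this (a sketch, not the repo's code; hair_mask is assumed to broadcast over the (N, C, H, W) image tensors above):

    import torch  # res_x, x_hat, hair_mask as in the snippet above (assumed)

    # Restrict the image L2 term to hair pixels, as the paper's formula implies.
    loss_l2_img = torch.mean(((res_x - x_hat) * hair_mask) ** 2, dim=[0, 1, 2, 3])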

What about beard?

Hi,
thanks for the nice work! What about beard removal? Can this work be adapted to that?

Thanks!
L.
