
ciagan's People

Contributors

maxsqrtum, therevanchist

ciagan's Issues

Evaluation script

Great paper and great work! Could you share the evaluation script used to generate the metrics reported in the paper, so that I can compute them for comparison?

Thank you!

size mismatch

Hello, does anyone know how to fix this error?
RuntimeError: Error(s) in loading state_dict for Generator:
size mismatch for encode_one_hot.0.weight: copying a param with shape torch.Size([256, 5]) from checkpoint, the shape in current model is torch.Size([256, 1200]).

A second error occurs during training:

RuntimeError: Given groups=1, weight of size [3, 16, 3, 3], expected input[1, 32, 64, 64] to have 16 channels, but got 32 channels instead
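Both errors point to a configuration mismatch between the checkpoint and the current model: the first suggests the checkpoint was trained with 5 identity labels while the model is built for 1200 (LABEL_NUM), and the second looks like a filter-count mismatch (e.g. a FILTER_NUM of 16 vs 32). A minimal sketch of inspecting a checkpoint's shapes before building the model (the key name is taken from the error above; a NumPy array stands in for the tensor a real torch.load would return, and "checkpoint.pth" is a placeholder path):

```python
import numpy as np

# Sketch: inspect the checkpoint's shapes before building the model.
# This dict stands in for torch.load("checkpoint.pth"); in practice,
# load your real checkpoint and look at the same key.
ckpt = {"encode_one_hot.0.weight": np.zeros((256, 5))}

# The second dimension is the number of identity labels the checkpoint
# was trained with; set DATA_PARAMS['LABEL_NUM'] to match before loading.
num_labels = ckpt["encode_one_hot.0.weight"].shape[1]
print(num_labels)
```

If the printed label count differs from your LABEL_NUM, rebuild the model with the checkpoint's value rather than editing the state_dict.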

Save anonymous images

Hi, I want to save the anonymized images generated by the pre-trained model. I believe the "save_images" function in "train.py" does this, but I don't know what the parameters "out, GT, target, INP" represent. Can you provide more details? It would be even better if you could give a code example of how a pre-trained model generates images!

step after training

Hi, I enjoyed the paper and was able to run the training example. But what is the next step to actually run the code on an image or video? Thanks!

Is it working correctly?

So, I downloaded the pre-trained model (link in README).

  • Working directory: /ciagan/source
  1. Created a folder 0 inside the mydata folder and added my photos (000001.jpg, 000002.jpg ...)

  2. Generated landmarks with:
    python process_data.py --input ../mydata --output ../output/

  3. Then ran test.py:
    python test.py --data ../output/ --out ../output

So here are the results:
[attached screenshot of the generated faces]

I mean, one advantage of this paper is that it creates a new identity, so another face detection algorithm will probably still detect these faces. But the other advantage was supposed to be realistic results, and the faces I get are a bit intimidating.

Am I doing everything correctly?
Also, how can I insert these new faces into original images?
Any help, pointer or code is appreciated!

hyperparams for training on 1200 labels

Hi, I enjoyed the paper and was able to run the training example.
I set up the CelebA dataset with 1200 identities provided in legit_indices.npy. However, I am not sure about the hyper-parameter setting:

  1. The learning rate is 0.0001 in train.py but 0.00001 in the paper (Section 4.1).
  2. The iteration counts for the critic, generator, and siamese network are 5, 1, 1 in train.py but 1, 3, 1 in run_training.py.
  3. Filter number, batch size, etc.

Could you give me the detailed hyper-parameters for training on the dataset with 1200 identities? Thanks!

These are the hyper-parameters I'm currently using, which yield an unsatisfactory result:
{
    'TRAIN_PARAMS': {
        'ARCH_NUM': 'unet_flex',
        'ARCH_SIAM': 'resnet_siam',
        'EPOCH_START': 0,
        'EPOCHS_NUM': 120,
        'LEARNING_RATE': 0.00001,
        'FILTER_NUM': 32,
        'ITER_CRITIC': 1,
        'ITER_GENERATOR': 3,
        'ITER_SIAMESE': 1,
        'GAN_TYPE': 'lsgan',
        'FLAG_SIAM_MASK': False,
    },
    'DATA_PARAMS': {
        'LABEL_NUM': 1200,
        'WORKERS_NUM': 4,
        'BATCH_SIZE': 32,
        'IMG_SIZE': 128,
        'FLAG_DATA_AUGM': True,
    },
    'OUTPUT_PARAMS': {
        'SAVE_EPOCH': 1,
        'SAVE_CHECKPOINT': 60,
        'LOG_ITER': 2,
        'COMMENT': "Something here",
        'EXP_TRY': 'check',
    },
}

Train/test split

How do I use the legit_indices file to obtain the train/test split? As I understand it, there should be 1200 identities for training and 363 for testing, but the split is stated neither here nor in the original paper.
