
taesungp / contrastive-unpaired-translation

2.1K stars · 35 watchers · 411 forks · 17.91 MB

Contrastive unpaired image-to-image translation: faster and lighter training than CycleGAN (ECCV 2020, in PyTorch)

Home Page: https://taesung.me/ContrastiveUnpairedTranslation/

License: Other

Python 98.23% Shell 0.97% TeX 0.80%
pytorch computervision deeplearning cyclegan image-generation computer-vision computer-graphics image-manipulation gans generative-adversarial-network

contrastive-unpaired-translation's People

Contributors

junyanz, taesungp

contrastive-unpaired-translation's Issues

How to Add Cycle Loss?

Hi. I want to add a cycle-consistency loss to CUT.
I know this model doesn't need a cycle loss, but I want to experiment with the reconstructed image!

Thanks again.
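For anyone experimenting along these lines, here is a minimal sketch of such a term, assuming a second, hypothetical generator G_BA is added alongside CUT's generator (CUT itself trains only one generator, so G_BA would have to be created and trained separately):

```python
import torch
import torch.nn as nn

def cycle_loss(real_A, G_AB, G_BA, lambda_cycle=10.0):
    """L1 reconstruction loss after a full A -> B -> A round trip.
    G_BA is a hypothetical second generator; CUT trains only G_AB."""
    fake_B = G_AB(real_A)
    rec_A = G_BA(fake_B)
    return lambda_cycle * nn.functional.l1_loss(rec_A, real_A)

# Toy check with identity "generators": perfect reconstruction, zero loss.
x = torch.randn(1, 3, 8, 8)
assert cycle_loss(x, nn.Identity(), nn.Identity()).item() == 0.0
```

This loss would simply be added to the existing generator objective; the lambda_cycle weight here mirrors CycleGAN's default but is an assumption, not a tuned value.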

Applying to 1024 x 1024 resolution

I need to apply this code to 1024 x 1024 images. Should I change the architecture of resnet_9blocks and/or the discriminator architecture to get better fidelity in the results or will the basic settings work fine?
I couldn't fit the default network in my GPU, so I made the following changes. Do they make sense?

  • increased n_downsampling in ResnetGenerator from 2 to 4
  • reduced ngf (#filters for G) from 64 to 32 but kept 9 resblocks
  • increased n_layers_D from 3 to 4
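One way to sanity-check the n_layers_D change is the PatchGAN receptive field. A small pure-Python sketch, assuming the standard PatchGAN layout (all 4x4 convolutions: n_layers stride-2 convolutions followed by two stride-1 convolutions):

```python
def patchgan_receptive_field(n_layers, kernel=4):
    """Receptive field of a PatchGAN discriminator: n_layers stride-2
    convolutions followed by two stride-1 convolutions, all 4x4."""
    strides = [2] * n_layers + [1, 1]
    rf = 1
    for s in reversed(strides):
        rf = (rf - 1) * s + kernel
    return rf

print(patchgan_receptive_field(3))  # 70  (the default 70x70 PatchGAN)
print(patchgan_receptive_field(4))  # 142
```

By this estimate the default discriminator judges 70x70 patches, which is quite local relative to a 1024x1024 input, so increasing n_layers_D to 4 seems reasonable; whether it improves fidelity in practice still needs an experiment.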

Masked fill error in NCE loss

Hi!

First of all, amazing work, and thanks for making it available on Github.

I have found what seems to be a mistake in the PatchNCE loss calculation: the comment above this line states that the diagonal should be replaced with small values like exp(-10), but the masked fill replaces it with -10, without the exponential.

l_neg_curbatch.masked_fill_(diagonal, -10.0)
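For what it's worth, the comment and the code are arguably consistent: the fill happens in logit space, and after the softmax inside the cross-entropy a logit of -10 contributes a weight on the order of exp(-10). A tiny illustration with made-up logit values:

```python
import math

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# One "diagonal" entry filled with -10 among ordinary logits.
logits = [0.3, -10.0, 0.1, 0.2]
weights = softmax(logits)

# The filled entry is suppressed to roughly exp(-10) ~ 4.5e-5 relative
# to the others, i.e. it is effectively removed from the loss.
assert weights[1] < 1e-4
```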

Error about multi-GPU training

Hi, thanks for the great work enabling unpaired image translation without the cycle-consistency mechanism.

When trying to train the CUT model on multiple GPUs, I simply added the following line, as suggested in the comment at line 212 of models/networks.py:
net = torch.nn.DataParallel(net, gpu_ids)
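One detail that can matter here (a guess, not a confirmed diagnosis): CUT creates netF lazily during data_dependent_initialize(), so the order of wrapping relative to that call can matter. A defensive wrapper sketch, using only torch's public API:

```python
import torch

def wrap_data_parallel(net, gpu_ids):
    """Wrap a module in DataParallel only when multiple GPUs are requested;
    otherwise return it unchanged so CPU-only runs keep working."""
    if len(gpu_ids) > 1 and torch.cuda.is_available():
        net = torch.nn.DataParallel(net, gpu_ids)
    return net

# On a CPU-only machine (or with at most one GPU id) the module is unchanged.
m = torch.nn.Linear(4, 4)
assert wrap_data_parallel(m, []) is m
```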

My training script is as follows.
python train.py --dataroot ./datasets/toy_dataset --name my_CUT --CUT_mode CUT --gpu_ids 0,1,2,3,4,5,6,7 --batch_size 8 --load_size 512 --crop_size 512

However, I encountered a TypeError, shown in the screenshot below. I am wondering how to solve this problem. Looking forward to your reply.
[error screenshot]

Training stops around ~50 epochs

Hi, thank you for advancing the state of the art and sharing your code with a well-documented project.

I am training the standard CUT on my own dataset of ~1500 images. Unfortunately, training always simply stops around epoch 47-50. I have tried both python train.py --dataroot [x] --name=[x] --CUT_mode CUT and python -m experiments [name] train 0, and I have tried running via nohup and screen.

The process does not exit: training simply stops, with no additional error messages or logs. With nvidia-smi, I can see the process running but with no GPU utilisation.

My specs are Ryzen 3950X / 128GB RAM / 2080 Ti, so there shouldn't be any resource constraints.

I have had training stall at epoch 47, epoch 49, epoch 50, etc. It's not a constant number, but it is always around the same point. Any ideas?

Correct the error

In prepare_cityscapes_dataset.py, the example usage reads: python prepare_cityscapes_dataset.py --gitFine_dir ./gtFine/ --leftImg8bit_dir ./leftImg8bit --output_dir ./datasets/cityscapes/.

It should be "--gtFine_dir", not "--gitFine_dir".

EOFError and AttributeError when attempting to train FastCUT

I am trying to train FastCUT on the horse2zebra dataset. However, it crashes with an AttributeError and an EOFError. I have included the options I am using to run train.py below, as well as the stack trace:

python train.py --dataroot "./datasets/horse2zebra" --name H2Z_FAST_CUT --CUT_mode FastCUT
----------------- Options ---------------
                 CUT_mode: FastCUT                              [default: CUT]
               batch_size: 1
                    beta1: 0.5
                    beta2: 0.999
          checkpoints_dir: ./checkpoints
           continue_train: False
                crop_size: 256
                 dataroot: ./datasets/horse2zebra               [default: placeholder]
             dataset_mode: unaligned
                direction: AtoB
              display_env: main
             display_freq: 400
               display_id: None
            display_ncols: 4
             display_port: 8097
           display_server: http://localhost
          display_winsize: 256
               easy_label: experiment_name
                    epoch: latest
              epoch_count: 1
          evaluation_freq: 5000
        flip_equivariance: True
                 gan_mode: lsgan
                  gpu_ids: 0
                init_gain: 0.02
                init_type: xavier
                 input_nc: 3
                  isTrain: True                                 [default: None]
               lambda_GAN: 1.0
               lambda_NCE: 10.0
                load_size: 286
                       lr: 0.0002
           lr_decay_iters: 50
                lr_policy: linear
         max_dataset_size: inf
                    model: cut
                 n_epochs: 150
           n_epochs_decay: 50
               n_layers_D: 3
                     name: H2Z_FAST_CUT                         [default: experiment_name]
                    nce_T: 0.07
                  nce_idt: False
nce_includes_all_negatives_from_minibatch: False
               nce_layers: 0,4,8,12,16
                      ndf: 64
                     netD: basic
                     netF: mlp_sample
                  netF_nc: 256
                     netG: resnet_9blocks
                      ngf: 64
             no_antialias: False
          no_antialias_up: False
               no_dropout: True
                  no_flip: False
                  no_html: False
                    normD: instance
                    normG: instance
              num_patches: 256
              num_threads: 4
                output_nc: 3
                    phase: train
                pool_size: 0
               preprocess: resize_and_crop
          pretrained_name: None
               print_freq: 100
         random_scale_max: 3.0
             save_by_iter: False
          save_epoch_freq: 5
         save_latest_freq: 5000
           serial_batches: False
stylegan2_G_num_downsampling: 1
                   suffix:
         update_html_freq: 1000
                  verbose: False
----------------- End -------------------
dataset [UnalignedDataset] was created
model [CUTModel] was created
The number of training images = 1334
Setting up a new session...
create web directory ./checkpoints\H2Z_FAST_CUT\web...
Traceback (most recent call last):
  File "train.py", line 31, in <module>
    for i, data in enumerate(dataset):  # inner loop within one epoch
  File "D:\Style Transfer\contrastive-unpaired-translation-master\data\__init__.py", line 95, in __iter__
    for i, data in enumerate(self.dataloader):
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
    w.start()
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'Visdom.setup_socket.<locals>.run_socket'

(styletransfer) D:\Style Transfer\contrastive-unpaired-translation-master>Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\James\anaconda3\envs\styletransfer\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

I get the same errors when I try to train the regular CUT model. It may also be worth noting that I can train CycleGAN with no issues.

I am running on Windows with Anaconda, using Python 3.7.7 and PyTorch 1.6.0.
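A likely culprit for this pairing of errors on Windows (a hypothesis, not a confirmed fix): Windows uses spawn-based multiprocessing, so each DataLoader worker pickles its parent state, and a live Visdom socket cannot be pickled, which matches the "Can't pickle local object" message. Falling back to zero workers (e.g. via --num_threads 0) sidesteps this. A sketch of that guard:

```python
import platform

def dataloader_workers(requested: int) -> int:
    """Windows spawns (and therefore pickles) DataLoader workers; objects
    holding live sockets, such as a Visdom connection, cannot be pickled.
    Zero workers loads data in the main process and avoids the crash."""
    return 0 if platform.system() == "Windows" else requested
```

On Linux/macOS this keeps the requested worker count, so data loading speed is unaffected there.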

Doubt in PatchNCE loss

Hi @taesungp , awesome work! Thank you for making the research code open source.

I have a small question about the definition of the PatchNCE loss. Why is it required to detach the features here, and what issue would arise if they were not detached? Thanks.
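A likely intuition (hedged; the authors may have additional reasons): detaching treats the key features as fixed targets, so the gradient reaches the generator only through the query path, much like MoCo-style contrastive learning; without the detach, the loss could also be lowered through the key branch. A tiny demonstration of what detach does to gradient flow:

```python
import torch

q = torch.randn(4, 8, requires_grad=True)
k = torch.randn(4, 8, requires_grad=True)

# Dot-product similarity with the key detached: gradients flow only to q.
sim = (q * k.detach()).sum()
sim.backward()

assert q.grad is not None  # the query path receives a gradient
assert k.grad is None      # the detached key path does not
```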

Questions about PatchNCELoss

Thanks for the released code; it is really interesting work.

While reading the code, I got confused about PatchNCELoss. In the pseudo-code, l_pos is calculated as

l_pos = (f_k * f_q).sum(dim=1)[:, :, None] # BxSx1

However, in nce.py, it is calculated as:

l_pos = torch.bmm(feat_q.view(batchSize, 1, -1), feat_k.view(batchSize, -1, 1))
l_pos = l_pos.view(batchSize, 1)  # B x 1

Is this wrong?
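The two forms compute the same thing: in nce.py the S spatial patches are folded into the batch dimension before the loss, and a bmm of a (B, 1, C) tensor with a (B, C, 1) tensor is exactly a per-row dot product, i.e. (f_q * f_k).sum(dim=1). A quick check:

```python
import torch

B, C = 6, 16  # here B already includes the S patches folded in
feat_q = torch.randn(B, C)
feat_k = torch.randn(B, C)

via_bmm = torch.bmm(feat_q.view(B, 1, -1), feat_k.view(B, -1, 1)).view(B, 1)
via_sum = (feat_q * feat_k).sum(dim=1, keepdim=True)

assert torch.allclose(via_bmm, via_sum, atol=1e-5)
```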

Visdom server doesn't work on a remote server, resulting in the save dir and files not appearing in Jupyter

Thanks for developing the new CUT architecture.
I'm running on a remote server where the Visdom setup doesn't work. That would be OK, except the code appears to rely on Visdom in some way to save out the training models.
Is there a way to break that link?
I'm training and can see the losses moving along nicely, but it never saves out a model, so my training is wasted.

dataset [UnalignedDataset] was created
model [CUTModel] was created
The number of training images = 352
Setting up a new session...
Exception in user code:

Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/util/connection.py", line 83, in create_connection
raise err
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ubuntu/anaconda3/lib/python3.7/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ubuntu/anaconda3/lib/python3.7/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ubuntu/anaconda3/lib/python3.7/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ubuntu/anaconda3/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/home/ubuntu/anaconda3/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connection.py", line 166, in connect
conn = self._new_conn()
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fc2c1ecfe50>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "/home/ubuntu/.local/lib/python3.7/site-packages/urllib3/util/retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc2c1ecfe50>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/visdom/init.py", line 711, in _send
data=json.dumps(msg),
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/visdom/init.py", line 677, in _handle_post
r = self.session.post(url, data=data)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 578, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc2c1ecfe50>: Failed to establish a new connection: [Errno 111] Connection refused'))
[Errno 111] Connection refused

Could not connect to Visdom server.
Trying to start a server....
Command: /home/ubuntu/anaconda3/bin/python -m visdom.server -p 8097 &>/dev/null &
create web directory ./checkpoints/nhspos_CUT/web...
[W TensorIterator.cpp:924] Warning: Mixed memory format inputs detected while calling the operator. The operator will output channels_last tensor even if some of the inputs are not in channels_last format. (function operator())
---------- Networks initialized -------------
[Network G] Total number of parameters : 11.378 M
[Network F] Total number of parameters : 0.560 M
[Network D] Total number of parameters : 2.765 M

saving the latest model (epoch 1, total_iters 50)
nhspos_CUT
(epoch: 1, iters: 100, time: 0.124, data: 0.116) G_GAN: 0.405 D_real: 0.169 D_fake: 0.213 G: 4.913 NCE: 4.447 NCE_Y: 4.570
saving the latest model (epoch 1, total_iters 100)

In the above, I forced it to save repeatedly rather than waiting. Anyway, it creates the ./checkpoints dir, but neither a model nor (after letting it run a bit) any .html files or images are ever saved.
Is there a quick way to make Visdom optional for those who don't have permission to run it on remote servers?

Thanks!
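Until such an option exists upstream, one pragmatic stopgap (method names below are hypothetical, not the repo's actual Visualizer API) is a no-op stand-in that swallows all display calls, so checkpoint saving is never blocked by an unreachable Visdom server:

```python
class NullVisualizer:
    """Drop-in stand-in that silently ignores any display call, so training
    and checkpoint saving continue when no Visdom server is reachable."""
    def __getattr__(self, name):
        def _noop(*args, **kwargs):
            return None
        return _noop

# Any method call becomes a harmless no-op.
vis = NullVisualizer()
vis.display_current_results({}, epoch=1, save_result=False)
vis.plot_current_losses(1, 0.5, {})
```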

AttributeError: 'Namespace' object has no attribute 'G_n_downsampling' (with the netG stylegan2 option)

Thank you for your good research; I'm very interested in this work.
I have one question.

Your last commit added these base options:

parser.add_argument('--netD', type=str, default='basic', choices=['basic', 'n_layers', 'pixel', 'patch', 'tilestylegan2', 'stylegan2'], help='specify discriminator architecture. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator')
parser.add_argument('--netG', type=str, default='resnet_9blocks', choices=['resnet_9blocks', 'resnet_6blocks', 'unet_256', 'unet_128', 'stylegan2', 'smallstylegan2', 'resnet_cat'], help='specify generator architecture')

But this still hasn't resolved the missing netG/netD parameter. Can you tell me how to fix it?
Thanks.
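As a stopgap until the option is registered upstream, reading the attribute defensively avoids the crash; the attribute name comes from the traceback, and the default value of 1 here is only an assumption:

```python
from argparse import Namespace

def get_opt(opt, name, default):
    """Read an option that may be missing from an older/newer option set."""
    return getattr(opt, name, default)

# G_n_downsampling was never registered on this option namespace.
opt = Namespace(netG="stylegan2")
assert get_opt(opt, "G_n_downsampling", 1) == 1
```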

Minor issue : "entering finetuning phase" spammed

When finetuning phase is entered, "entering finetuning phase" gets printed multiple times. It only happens for the first epoch of the decay stage.

Input:

!python train.py --dataroot ./datasets/abr --name arrrbreee --CUT_mode FastCUT --n_epochs 5 --n_epochs_decay 5

Output (clipped to only the relevant parts):

saving the model at the end of epoch 5, iters 5830
End of epoch 5 / 10 	 Time Taken: 231 sec
learning rate = 0.0001667
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase

entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
(epoch: 6, iters: 70, time: 0.196, data: 0.002) G_GAN: 0.366 D_real: 0.809 D_fake: 0.097 G: 15.323 NCE: 14.957 
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
(epoch: 6, iters: 170, time: 0.196, data: 0.002) G_GAN: 0.396 D_real: 0.022 D_fake: 0.444 G: 17.018 NCE: 16.622 
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
(epoch: 6, iters: 270, time: 0.196, data: 0.002) G_GAN: 0.690 D_real: 0.287 D_fake: 0.027 G: 13.235 NCE: 12.545 
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase

entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
(epoch: 6, iters: 370, time: 0.196, data: 0.002) G_GAN: 0.389 D_real: 0.083 D_fake: 0.357 G: 11.859 NCE: 11.470 
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
entering finetuning phase
(epoch: 6, iters: 470, time: 0.196, data: 0.002) G_GAN: 0.655 D_real: 0.439 D_fake: 0.012 G: 13.035 NCE: 12.380 
(epoch: 6, iters: 570, time: 0.196, data: 0.002) G_GAN: 0.501 D_real: 0.234 D_fake: 0.148 G: 15.988 NCE: 15.488 
(epoch: 6, iters: 670, time: 0.196, data: 0.002) G_GAN: 0.357 D_real: 0.049 D_fake: 0.415 G: 11.180 NCE: 10.823 
(epoch: 6, iters: 770, time: 0.196, data: 0.002) G_GAN: 0.881 D_real: 0.020 D_fake: 0.023 G: 10.741 NCE: 9.860 
(epoch: 6, iters: 870, time: 0.196, data: 0.002) G_GAN: 0.672 D_real: 0.154 D_fake: 0.070 G: 10.256 NCE: 9.584 
(epoch: 6, iters: 970, time: 0.196, data: 0.002) G_GAN: 0.893 D_real: 0.086 D_fake: 0.046 G: 11.710 NCE: 10.817 
(epoch: 6, iters: 1070, time: 0.196, data: 0.002) G_GAN: 0.673 D_real: 0.018 D_fake: 0.179 G: 14.356 NCE: 13.683 
End of epoch 6 / 10 	 Time Taken: 231 sec
learning rate = 0.0001333
(epoch: 7, iters: 4, time: 0.196, data: 0.002) G_GAN: 1.352 D_real: 0.058 D_fake: 0.057 G: 14.939 NCE: 13.587 
(epoch: 7, iters: 104, time: 0.195, data: 0.003) G_GAN: 1.149 D_real: 0.028 D_fake: 0.025 G: 11.873 NCE: 10.724 
(epoch: 7, iters: 204, time: 0.195, data: 0.002) G_GAN: 0.418 D_real: 0.053 D_fake: 0.377 G: 23.477 NCE: 23.059 
(epoch: 7, iters: 304, time: 0.196, data: 0.002) G_GAN: 0.669 D_real: 0.013 D_fake: 0.164 G: 12.863 NCE: 12.194 
(epoch: 7, iters: 404, time: 0.195, data: 0.002) G_GAN: 0.977 D_real: 0.004 D_fake: 0.036 G: 11.621 NCE: 10.644 
(epoch: 7, iters: 504, time: 0.196, data: 0.002) G_GAN: 0.793 D_real: 0.011 D_fake: 0.032 G: 12.836 NCE: 12.042 
(epoch: 7, iters: 604, time: 0.196, data: 0.002) G_GAN: 0.879 D_real: 0.013 D_fake: 0.083 G: 11.280 NCE: 10.401 
(epoch: 7, iters: 704, time: 0.196, data: 0.002) G_GAN: 0.660 D_real: 0.231 D_fake: 0.066 G: 13.034 NCE: 12.374 
(epoch: 7, iters: 804, time: 0.195, data: 0.002) G_GAN: 0.370 D_real: 0.602 D_fake: 0.057 G: 12.710 NCE: 12.340 
(epoch: 7, iters: 904, time: 0.196, data: 0.002) G_GAN: 0.881 D_real: 0.047 D_fake: 0.015 G: 13.047 NCE: 12.166 
(epoch: 7, iters: 1004, time: 0.196, data: 0.002) G_GAN: 1.078 D_real: 0.018 D_fake: 0.090 G: 14.208 NCE: 13.131 
(epoch: 7, iters: 1104, time: 0.196, data: 0.002) G_GAN: 1.005 D_real: 0.008 D_fake: 0.014 G: 10.716 NCE: 9.711 
End of epoch 7 / 10 	 Time Taken: 231 sec
learning rate = 0.0001000
(epoch: 8, iters: 38, time: 0.195, data: 0.002) G_GAN: 0.890 D_real: 0.014 D_fake: 0.051 G: 17.602 NCE: 16.712 
(epoch: 8, iters: 138, time: 0.196, data: 0.002) G_GAN: 0.819 D_real: 0.087 D_fake: 0.039 G: 14.808 NCE: 13.988 
(epoch: 8, iters: 238, time: 0.196, data: 0.002) G_GAN: 0.719 D_real: 0.088 D_fake: 0.059 G: 12.959 NCE: 12.240 

What are your thoughts on CUT vs CycleGAN for medical images?

I'm thinking of trying out CUT for domain adaptation of medical images (e.g. MR-to-CT translation). I'm interested in any thoughts on how CUT would compare against CycleGAN and, of course, any tips. Thanks for your work!

How to use "detect_cat_face.py"?

Hi, I'm wondering how the script file CUT/datasets/detect_cat_face.py is used in the training or the test phase of the neural network model? From the repository, I couldn't find a part where detect_cat_face.py is called. Thanks!

About gan loss

Hello @taesungp @junyanz

self.loss = nn.BCEWithLogitsLoss()

I see you use both the vanilla GAN loss and the non-saturating loss with softplus.

When you use BCEWithLogitsLoss:

For real samples:

L_{r} = BCE(logits1, 1) + BCE(logits2, 0)
= -(1 * log(sigmoid(logits1)) + 0 * log(1 - sigmoid(logits1)) + 0 * log(sigmoid(logits2)) + 1 * log(1 - sigmoid(logits2)))
= -(log(sigmoid(logits1)) + log(1 - sigmoid(logits2)))

For fake samples:

L_{f} = BCE(logits2, 1)
= -log(sigmoid(logits2))

Obviously, this is also the non-saturating loss.

Thus, I don't understand your definition of the vanilla GAN loss. Why do you implement the non-saturating GAN loss with two different functions?
Maybe I misunderstand.
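As a quick numerical check (a hypothetical snippet, not code from this repository): `BCEWithLogitsLoss` with an all-ones target evaluates to `softplus(-x) = -log(sigmoid(x))`, i.e. exactly the non-saturating form, so the two definitions agree:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(8)

# "Real" loss in the vanilla formulation: BCE-with-logits against an
# all-ones target, which evaluates to -log(sigmoid(x)).
bce_real = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

# Non-saturating form written with softplus: softplus(-x) == -log(sigmoid(x)).
nonsat = F.softplus(-logits).mean()

print(torch.allclose(bce_real, nonsat, atol=1e-6))  # prints True
```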

Resume training from specific epoch?

Hello, Is it possible to resume training from a certain epoch? I'm not seeing any arguments for this in cut_model.py, but is there perhaps some other way? thanks!

Edit: sorry, just saw train_options.py.

Typo in Citation

Hello,

First, thanks for your great work :).
The only small error I have seen is a typo in your README.md file under the citation for your original cycleGAN paper.

You have one letter 's' at the end too much in your citation name. It should be "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks" and not "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networkss"

@inproceedings{CycleGAN2017, title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networkss},

Something wrong with the input image!

Hi! I'm so glad to meet your Paper and Code! But I have a question.
I trained using images of size 1024 x 1024, preprocessed with resize and crop.
But the HTML results page is showing some weird outputs. The input image is blank? My dataset doesn't have any problem.
How can I solve this?

image

Thanks again!

NCE layers selection

Hi all,
Thanks for your wonderful work and code.
I'm wondering about the selection of NCE layers. Why do you select '0,4,8,12,16' as the default layers? Is it for better feature extraction, or does it just work fine? Is this default setting designed for ResNet-9?
And if we change the generator, do we have to reconsider the selection of NCE layers? Is that correct?

Thanks in advance for any replies!

Question about some specific use case

@taesungp and @junyanz

Thanks for sharing this great project.

I don't have too much experience with this kind of project (just basic deep learning models).
But I am trying to use your project to accomplish a task and I am wondering if you could answer a few questions.

Basically I am trying to add skin issues to faces. So I get a picture of someone and want to add acne or pimples to it.
Now, I am training a model using the CelebA-HQ dataset (3k images in TrainA) and 17 images (TrainB) of people with severe acne to see the results (I have just a few examples).
The model is still training, but here are some outputs:

image

image

I am using the CUT model with default settings.

  1. Are there some parameters that I should look at for tuning the model?
  2. First, I tested with just 15 pictures in TrainA, but the results were terrible, so I know that the amount of data in TrainA is important. My question is: how much data should I put in TrainB? I am using only 17; I'm not sure if using 50 or 100 would make a difference.

regards,
Patrick

Continue Learning

I was wondering if I could give the train command a flag for an already trained network, after I had to cancel training? I haven't found this function, nor an indicator in the code in train.py.

Cheers and thanks for the awesome project!
Martin

Question about inference stability

Could you please comment on the inference stability? I mean, does the translation output for the same image vary significantly from epoch to epoch, or is it pretty much the same after a sufficient number of training epochs?

Hinge Gan loss

Hi, Taesung,
You used the hinge GAN loss in your previous paper (SPADE); could you please give me some hints on why you are not using the hinge GAN loss in CUT? I assume the hinge GAN loss could improve performance, but did you intentionally match everything to CycleGAN?

Thanks!

the patch nums for different input resolution

The default setting of num_patches is 256 for input images with resolution of 256x256.

parser.add_argument('--num_patches', type=int, default=256, help='number of patches per layer')

I am wondering about this hyperparameter's effect; have you ever tried other numbers?
Besides, if the input image resolution is changed (e.g. 400x400), should we change num_patches for better results?

Confusing typo in the paper

Hi!
I like your paper a lot.

There is a typo in Equation 3 (the definition of the PatchNCE loss):
the query patch is not in the translated image ($\hat y$) but in the original ($x$).
It contradicts Figure 2 and the implementation.
So you need to swap $z^s_l$ and $\hat z^s_l$.

The typo is confusing because maybe this definition of the loss would also work.

Query and keys in the code are different than what they are shown as being in the pseudo-code?

feat_q = self.netG(tgt, self.nce_layers, encode_only=True)

feat_k = self.netG(src, self.nce_layers, encode_only=True)

self.loss_NCE = self.calculate_NCE_loss(self.real_A, self.fake_B)

Awesome paper! I have been creating my own implementation in TensorFlow 2. My question is in regards to cut_model.py: the source and target for the NCE loss, and the query/key variables for it, seem to be the opposite of those in the pseudo-code. Is f_q supposed to be sampled from G_enc(x) and f_k from G_enc(G(x)), or is it the other way around? Thanks for your time and hard work; this new method looks highly promising as a successor to CycleGAN.
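For what it's worth, a toy sketch of the direction as it appears in cut_model.py (the stub encoder and shapes below are illustrative, not the repository's): calculate_NCE_loss is called with (src=real_A, tgt=fake_B), so the queries come from the encoder applied to the output G(x), and the keys from the encoder applied to the input x, which is the reverse of the pseudo-code's labeling. The positive pairs are the same either way; whether the direction matters in practice is exactly the question.

```python
import torch
import torch.nn as nn

# Toy stand-in for netG(img, nce_layers, encode_only=True) in the repository.
encoder = nn.Conv2d(3, 8, 3, padding=1)

real_A = torch.randn(1, 3, 16, 16)  # input image x
fake_B = torch.randn(1, 3, 16, 16)  # translated image G(x), precomputed here

# Mirrors calculate_NCE_loss(src=real_A, tgt=fake_B) in cut_model.py:
feat_q = encoder(fake_B)  # queries: features of G(x)
feat_k = encoder(real_A)  # keys: features of x
```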

Is a large batch size not needed?

Hi! I'm so glad to meet your Paper and Code! But I have a question.
I learned that contrastive learning needs a large batch size.
But when I'm training your code on my custom data, it runs on a 24GB GPU with batch size 8,
and the model produces good results!
How is this possible?

Thanks again!

Can not run with grayscale images

On line 67 in unaligned_dataset.py,
transform = get_transform(modified_opt)
the grayscale flag is missing, so the code cannot handle grayscale images.
I modified it to
transform = get_transform(modified_opt, grayscale=(self.opt.input_nc == 1))
and it works with my grayscale images now.

Is `gpu_ids -1` flag ignored?

System
Ubuntu 20.04
No CUDA GPU

Steps
bash ./datasets/download_cut_dataset.sh grumpifycat
python train.py --dataroot ./datasets/grumpifycat --name grumpycat_CUT --CUT_mode CUT --gpu_ids -1

Results

...
gpu_ids: -1 [default: 0]
...
Traceback (most recent call last):
  File "train.py", line 39
...
Found no NVIDIA driver on your system...

Expected
Should be consistent with the README, which states that CPU processing is possible.

data_dependent_initialization error when using cycle_gan model

I'm currently comparing how the various included models perform on my dataset. While the standard CUT and stylegan models both worked, when I try using the option --model cycle_gan I get the error:

TypeError: data_dependent_initialize() takes 1 positional argument but 2 were given

This seems to be due to the fact that in models/cycle_gan_model.py on line 193, data_dependent_initialize is defined as:

    def data_dependent_initialize(self):
        return

so that it doesn't have the arguments (self, data) as it seems to be intended to.

I tried modifying the function to be the same as the base model class (which simply passes/does nothing), assuming that the cycle_gan model did not need this type of data initialization but that did not help.

I understand I could just use the existing cyclegan repo, but was hoping I could use this one as a "one-stop shop".
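A minimal sketch of the change that would restore the expected signature (a hypothetical patch; only the one method is shown, standing in for models/cycle_gan_model.py): accept the data argument and ignore it, matching the calling convention train.py and test.py rely on:

```python
class CycleGANModel:
    # ... rest of the model class omitted ...

    def data_dependent_initialize(self, data):
        # CycleGAN needs no data-dependent initialization; the parameter is
        # accepted only so model.data_dependent_initialize(data) at the call
        # site does not raise a TypeError.
        return
```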

Multi-GPU Testing fails

I tried to use the flag "--gpu_ids 0,1" in the command "python test.py --dataroot ./datasets/mscoco17/ --name model-name --CUT_mode CUT --phase train --load_size 786 --crop_size 786 --num_test 20 --gpu_ids 0,1".
With the flag "--gpu_ids 0" or "--gpu_ids 1" it works properly, but with both GPUs I get the traceback appended below.
I downloaded the repository today, so it should include the multi-GPU changes that were pushed 4 days ago.
Traceback (most recent call last):
  File "test.py", line 56, in <module>
    model.data_dependent_initialize(data)
  File "/data/after-final-structure-217/cut-em2coco/cut-em2coco-mgpu/models/cut_model.py", line 105, in data_dependent_initialize
    self.forward()  # compute fake images: G(A)
  File "/model_folder/models/cut_model.py", line 154, in forward
    self.fake = self.netG(self.real)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/model_folder/models/networks.py", line 1006, in forward
    fake = self.model(input)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/padding.py", line 170, in forward
    return F.pad(input, self.padding, 'reflect')
  File "/home/user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 3569, in _pad
    return torch._C._nn.reflection_pad2d(input, pad)
RuntimeError: non-empty 3D or 4D (batch mode) tensor expected for input, but got: [torch.cuda.FloatTensor{0,3,784,784}]

Question about the crossentropy loss

I had a question regarding the softmax cross-entropy between the logits from f_q and f_k and the y_true value. The pseudo-code says we measure the cross-entropy on the concatenated logits (the softmax of the positive pair should approach 1, and the 255 negatives should be zeros?). If we are maximizing the positive logits' value (the similarity between corresponding spatial locations) while making the negative patches dissimilar, why does the pseudo-code minimize the cross-entropy between the logits and just zeros? If we were trying to get all values to be zero, wouldn't that just make the negatives and the positive locations dissimilar? Am I just missing something simple?

Thanks

Edit:
I should clarify: the part of the pseudo-code I am referring to is where we have logits (the positive and negative logits concatenated) and we flatten them. The target that is given is torch.zeros(B*S), i.e. [0...0]. From my understanding (or misunderstanding), shouldn't the target label be [1, 0...0], where the 1 is for the positive similarities and the zeros are for the cosine similarities of the negative pairs?
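A small standalone example (shapes and names are illustrative, not taken from the repository) of why a target of all zeros is correct here: PyTorch's cross_entropy takes class *indices* rather than one-hot vectors, and the positive logit is concatenated in column 0, so index 0 plays the role of the [1, 0, ..., 0] one-hot target described above:

```python
import torch
import torch.nn.functional as F

B, S, N = 2, 3, 5              # batch, spatial locations, negatives per query
l_pos = torch.randn(B * S, 1)  # query-positive similarities
l_neg = torch.randn(B * S, N)  # query-negative similarities
logits = torch.cat([l_pos, l_neg], dim=1)  # the positive sits in column 0

# cross_entropy expects class *indices*, so an all-zero target means
# "column 0 (the positive) is the correct class" for every query.
target = torch.zeros(B * S, dtype=torch.long)
loss = F.cross_entropy(logits, target)

# The same value computed by hand: -log softmax of the positive column.
manual = (-F.log_softmax(logits, dim=1)[:, 0]).mean()
```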

How to make sure $\hat z$ and $z$ in same location $s$ be positive pairs?

In Eq 3 of original paper, it seems that $\hat z^s$ and $z^s$ are positive pairs.

It is reasonable in the 'horse' and 'zebra' examples because the location of object is almost same.

However, if we consider a special case: the horse is in the upper-left corner of image; zebra is in bottom-right corner of image.

In this case, the same location $s$ may not give positive pairs.

So does such an implementation rely on a good dataset in which all objects are in the center of the image?

Possible bug in PatchNCELoss

# diagonal entries are similarity between same features, and hence meaningless.
# just fill the diagonal with very small number, which is exp(-10) and almost zero
diagonal = torch.eye(npatches, device=feat_q.device, dtype=self.mask_dtype)[None, :, :]
l_neg_curbatch.masked_fill_(diagonal, -10.0)
l_neg = l_neg_curbatch.view(-1, npatches)

Link to the Code

The comment says that you want to put a very small number, exp(-10), in the diagonal, but then you set -10.0 as the value.

So either the comment is wrong or the exponential function is missing in the code below.
Or did I miss something?
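A quick check (a standalone snippet, not the repository's code) suggesting the comment and the code are consistent: -10.0 is a *logit*, and the softmax later exponentiates logits, so a diagonal entry of -10 contributes exp(-10) of unnormalized weight, which is what the comment's "exp(-10) and almost zero" refers to:

```python
import torch

npatches = 4
l_neg = torch.randn(1, npatches, npatches)
diagonal = torch.eye(npatches, dtype=torch.bool)[None, :, :]
l_neg.masked_fill_(diagonal, -10.0)  # a logit of -10, not a probability

# The softmax exponentiates logits, so each diagonal entry's unnormalized
# weight becomes exp(-10) ~= 4.54e-5, effectively removing the meaningless
# self-similarity terms from the denominator.
weights = l_neg.exp()
```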

One-sided CUT test

Hi, how can I do a one-sided test where it's simply A to B, without an aligned image?

When I specify --dataset_mode single like CycleGAN, I get the following error:

dataset [SingleDataset] was created
model [CUTModel] was created
creating web directory ./results/helloworld_CUT/test_latest
Traceback (most recent call last):
  File "test.py", line 56, in <module>
    model.data_dependent_initialize(data)
  File "/home/miner/CUT/models/cut_model.py", line 101, in data_dependent_initialize
    self.set_input(data)
  File "/home/miner/CUT/models/cut_model.py", line 143, in set_input
    self.real_B = input['B' if AtoB else 'A'].to(self.device)
KeyError: 'B'
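Until single-sided test mode is supported, one hypothetical workaround is to make the B side optional where the batch is unpacked; the sketch below mirrors the names in the traceback but is not the repo's code:

```python
# Hypothetical workaround sketch: SingleDataset only provides the 'A' side,
# so a guarded lookup returns None instead of raising KeyError: 'B'.
def read_sides(batch, direction='AtoB'):
    a_key, b_key = ('A', 'B') if direction == 'AtoB' else ('B', 'A')
    real_A = batch[a_key]
    real_B = batch.get(b_key)  # None for single-sided datasets
    return real_A, real_B

a, b = read_sides({'A': 'tensor_A'})
# b is None rather than a KeyError, so test-time code can skip real_B.
```

Any downstream code that touches real_B (losses, visuals) would also need a None guard at test time.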

Metrics for evaluation

The paper describes using the Fréchet Inception Distance (FID) for evaluation, but I don't see any code for it in the repository. Where is it located, or will it be added later?
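For anyone wanting to evaluate in the meantime: FID is computed from Gaussian statistics (mean and covariance) of InceptionV3 features over the real and generated image sets, and third-party packages handle the full-matrix case. A dependency-free sketch of the formula itself, restricted to diagonal covariances for simplicity:

```python
import math

# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2}),
# written here for diagonal covariances so it stays dependency-free.
# In practice mu/C are estimated from InceptionV3 pool3 activations.
def fid_diagonal(mu1, var1, mu2, var2):
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical Gaussians have distance 0; shifting one mean by 1 adds 1.0.
same = fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1])
shifted = fid_diagonal([1, 0], [1, 1], [0, 0], [1, 1])
```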

GTA->Cityscapes

Thanks for releasing the code! Could you release the GTA→Cityscapes pretrained model? Have you measured the FID of these results?

How to use the stylegan2 option for netG?

Thank you for your good research and I'm very interested in this research.
I have one question.

"""
elif netG == 'stylegan2':
net = StyleGAN2Generator(input_nc, output_nc, ngf, norm_layer=norm_layer, use_dropout=use_dropout, opt=opt)
"""

I want to use the stylegan2 option for netG. But the help text says "specify generator architecture [resnet_9blocks | resnet_6blocks | unet_256 | unet_128]" — are those the only options?

Thanks.
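In case it helps the discussion: the mismatch is between the argparse help string and the actual dispatch — the if/elif chain quoted above accepts 'stylegan2' even though the help text lists only four names. A self-contained sketch of that pattern (stub classes, not the repo's implementations):

```python
class ResnetGenerator:      # stub standing in for the real generator
    pass

class StyleGAN2Generator:   # stub standing in for the real generator
    pass

def define_G(netG):
    # Dispatch on the --netG string; the help text can lag behind this chain.
    if netG == 'resnet_9blocks':
        return ResnetGenerator()
    elif netG == 'stylegan2':
        return StyleGAN2Generator()
    raise NotImplementedError(f'Generator model name [{netG}] is not recognized')

gen = define_G('stylegan2')
```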

Question about image pool

Thanks for presenting such awesome and solid work!
I am confused about the image pool setting in CUT. An image pool can make the discriminator more robust with unpaired training data, but I noticed that it has been dropped in CUT. Could you please tell me the reason?
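For readers unfamiliar with it, the image pool in question is CycleGAN's buffer of previously generated fakes, from which the discriminator's inputs are drawn 50/50 against the current batch. A simplified sketch (the real ImagePool operates on image tensors and batch dimensions):

```python
import random

class ImagePool:
    """Buffer of past generated images; queries return either the new image
    or a randomly swapped-out older one, smoothing discriminator updates."""

    def __init__(self, pool_size=50):
        self.pool_size = pool_size
        self.images = []

    def query(self, image):
        if self.pool_size == 0:          # pool disabled
            return image
        if len(self.images) < self.pool_size:
            self.images.append(image)    # fill phase: pass through
            return image
        if random.random() > 0.5:        # swap: return an old image
            idx = random.randrange(self.pool_size)
            old, self.images[idx] = self.images[idx], image
            return old
        return image                     # otherwise return the new one
```

As the question notes, CUT omits this buffer and trains the discriminator on the current fake only.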

FastCUT LR schedule.

I was looking through the default options and noticed that for FastCUT, the default learning rate schedule is 150 epochs at a constant LR followed by 50 epochs decaying the LR to zero. This differs from the CycleGAN schedule (100 epochs constant, 100 epochs decaying), which may contradict what the appendix of the paper says:

the fast variant FastCUT is trained up to 200 epochs, following CycleGAN.

Does this learning rate schedule show better results for FastCUT?
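For concreteness, both schedules are instances of the same constant-then-linear-decay rule, just with different split points. A dependency-free sketch of the multiplicative LR factor (in the repo this would be wrapped in torch.optim.lr_scheduler.LambdaLR; the parameter names here are illustrative):

```python
def lr_factor(epoch, n_const=150, n_decay=50):
    """1.0 for the first n_const epochs, then linear decay reaching 0
    at epoch n_const + n_decay. FastCUT sketched as 150+50;
    CycleGAN would be 100+100."""
    return 1.0 - max(0, epoch - n_const) / float(n_decay)

# e.g. lr_factor(0) == 1.0, lr_factor(175) == 0.5, lr_factor(200) == 0.0
```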

GTA to Cityscapes translation

Hi,

The results for GTA to Cityscapes translation are great!
Can you please explain what is the setting?
Is it the same as the other datasets?
I'm wondering because you seem to get the same 3D structure, with only the texture and lighting different, which is exactly what I want.

Can you clarify the settings (which hyperparameters, models, etc.)?

Semantic preservation via mutual information

Thanks for this work. It is definitely a milestone in image translation and I really love it.

I wonder why the semantics are preserved by constraining the mutual information. Why does maximizing the mutual information not also maintain the texture information? Thanks for your explanation.
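For background (a standard result from van den Oord et al.'s InfoNCE paper, not specific to this repo): PatchNCE is an InfoNCE-style loss, and InfoNCE lower-bounds mutual information. With one positive and $N-1$ negatives,

```latex
I(z, \hat{z}) \;\ge\; \log N - \mathcal{L}_{\mathrm{NCE}}
```

so minimizing the loss tightens a lower bound on the MI between corresponding patches; whether that bound inherently favors structure over texture is exactly the question above.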

Question about the data preprocessing

I noticed that in the training stage you resize and crop the images to 256. But when I don't resize the image, the results get worse; maybe the NCE loss cannot distinguish similar patches.
Did you try other crop sizes, or training on the original image without resizing, in your experiments? Thank you!
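For context, the default pipeline resizes to a load size and then takes a random square crop (286 → 256 are the usual CycleGAN-style defaults, assumed here). A dependency-free sketch of the crop-box selection:

```python
import random

def random_crop_box(load_size=286, crop_size=256):
    """Pick a crop_size square inside a load_size image,
    returned as a (left, top, right, bottom) box."""
    x = random.randint(0, load_size - crop_size)
    y = random.randint(0, load_size - crop_size)
    return (x, y, x + crop_size, y + crop_size)

left, top, right, bottom = random_crop_box()
# Skipping the resize step means each crop covers a smaller fraction of a
# large image, which could plausibly make the sampled NCE patches more
# self-similar, as the question suggests.
```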

Possible failure cases

Thanks for this great work. I have a question about possible failure cases that may be related to hyperparameter choices. What if the image contains patches with high mutual similarity? For example, consider an image of a synthetic hand that we want to translate to a real hand: different patches in the source image are very similar, so how do you choose the negative patches in this case? The same applies to the horse/zebra images: if we take a grass patch as the positive, how do you exclude the other grass patches? In other words, the internal image can contain false negatives.
