
cct's People

Contributors

rosikand, suzannalin, yassouali


cct's Issues

inference

Thank you so far for all the help!

I am now trying to run inference.py on VOC12 images with a semi-supervised trained model.
I filled in the arguments in the script:
[screenshot of the filled-in arguments]

I have two errors:
[screenshot of the first error]
It lists all the aux. decoders' weights and biases as unexpected keys.
and:
[screenshot of the second error]

Is the first exception an issue for the code?
How do I go about the second error?
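
One way the first error could be handled is to filter out the auxiliary-decoder weights before loading, since inference only builds the encoder and main decoder. A minimal sketch (the checkpoint path, the 'aux_decoder' key substring, and `model` are my assumptions based on the error message, not the repo's API):

import torch

# drop the auxiliary-decoder entries from the checkpoint before loading;
# 'best_model.pth' and `model` (the constructed inference model) are placeholders
checkpoint = torch.load('best_model.pth', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)
state_dict = {k: v for k, v in state_dict.items() if 'aux_decoder' not in k}
# strict=False tolerates any remaining missing/unexpected keys
model.load_state_dict(state_dict, strict=False)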

mIoU of unlabeled dataset

Hello! Thanks for sharing this work. The paper is also very well written!

I want to try this framework on my own dataset. So far, I have run the script using VOC2012, and I notice that during training, the mean IoU is also calculated for the unlabeled dataset.
Do I need to have the segmentation labels of the "unsupervised" images as well? Is there an easy way to get around this?
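
One workaround that comes to mind (an assumption, not the repo's intended usage): give each unlabeled image a dummy mask filled entirely with the ignore index (255 in the configs), so the dataloader runs and the reported unlabeled mIoU is simply not meaningful:

import numpy as np
from PIL import Image

# hypothetical helper: write an all-ignore mask next to each unlabeled image
def make_dummy_mask(path, height, width, ignore_index=255):
    mask = np.full((height, width), ignore_index, dtype=np.uint8)
    Image.fromarray(mask).save(path)

make_dummy_mask('dummy_label.png', 320, 320)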

Thanks :)

Test set images

Hi,
Your work is amazing. I trained the network on the VOC dataset. Unfortunately, I could not find the test image set. How can I obtain the test set?

ModuleNotFoundError: No module named 'tensorboard'

Hi,

After running the training command, I got the error below:

    from torch.utils import tensorboard
  File "/.conda/envs/semiseg/lib/python3.6/site-packages/torch/utils/tensorboard/__init__.py", line 1, in <module>
    import tensorboard
ModuleNotFoundError: No module named 'tensorboard'

low performance in semi-supervised mode when employing weakly_loss with 2 gpus

Thank you for your nice work!

I tried to train the model with 1464 labeled samples in semi-supervised mode, using 2 GPUs. I set the number of epochs to 80 and stopped after 50. But the performance is poor: e.g., the mIoU at epoch 5 is 34.70%, while at epoch 10 it drops to 11.40%.
[screenshot of the training log]

I set 'use_weak_labels' to true and 'drop_last' to false, and left the rest at the defaults.

Have you ever encountered this situation?

How to obtain figure 2 (c) and (d)

Hi

Congrats on the great work.
Just wondering, do you have the code to produce figures 2 (c) and (d) in your paper? We would like to generate similar figures for our project.

Thanks

Evaluation bug causing a performance drop

Sorry for the earlier post; I just checked and the base size is not set in evaluation. However, when I modified the evaluation function to be:

output = torch.nn.functional.interpolate(output, size=(H, W),
                                         mode='bilinear', align_corners=True)

instead of

output = output[:, :, :H, :W]

here, the result in my case (a custom dataset) increased by about 1.5%.

.json format custom dataset

Hello!

Great job! Thanks for sharing it.

I was wondering, is it possible to run it with .json-format annotations?

Questions in VATDecoder

Thanks for your excellent work and for kindly releasing the code. While delving into the code, I found that the implementation of the VATDecoder seems problematic. In the get_r_adv function, adv_distance is calculated as the kl_div between logp_hat and pred. However, logp_hat is a log_softmax result while pred is a softmax result, which are not compatible. In my understanding, the implementation should be like the following:
with torch.no_grad():
    logp = F.log_softmax(decoder(x_detached), dim=1)
for _ in range(it):
    logp_hat = F.log_softmax(decoder(x_detached + xi * d), dim=1)
    adv_distance = F.kl_div(logp_hat, logp, reduction='batchmean')
I wonder if my understanding is correct. Looking forward to your reply.
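
For what it's worth, PyTorch's F.kl_div expects the input in log-space and the target as probabilities by default, so kl_div(logp_hat, pred) does match its documented contract; if both arguments are log-probabilities, log_target=True (available since PyTorch 1.6) is needed. A self-contained illustration with random tensors:

import torch
import torch.nn.functional as F

logp_hat = F.log_softmax(torch.randn(2, 21, 8, 8), dim=1)
pred = F.softmax(torch.randn(2, 21, 8, 8), dim=1)

# default contract: input = log-probabilities, target = probabilities
d1 = F.kl_div(logp_hat, pred, reduction='batchmean')
# equivalent form when the target is also given in log-space
d2 = F.kl_div(logp_hat, pred.log(), reduction='batchmean', log_target=True)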

custom dataset

Hi @yassouali ,

  • Thanks for the code. The paper is very clearly written. Thanks for the overall work.
  • I was keen to try this on custom data. Can you tell me what changes are needed for that?

Thanks

Unsupervised data is only used to improve the performance of the Encoder part?

Hi Yassine,

I noticed your answer to issue #43 saying that the unsupervised loss is not back-propagated through the main decoder; you used .detach() to achieve this.

From my understanding, the main decoder is not affected by the unsupervised loss. In other words, does this mean that only the encoder benefits from the unsupervised training? At inference we only use the main decoder, so the aux decoders cannot influence the performance on test data.
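
A toy illustration of what the .detach() implies (stand-in modules, not the repo's code): the unsupervised loss still reaches the encoder through the aux-decoder path, but never the main decoder.

import torch
import torch.nn as nn

# stand-ins for the encoder, main decoder, and one aux decoder
enc, main_dec, aux_dec = nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 4)
z = enc(torch.randn(2, 4))
loss_unsup = ((aux_dec(z) - main_dec(z).detach()) ** 2).mean()
loss_unsup.backward()
print(main_dec.weight.grad)         # None: no unsupervised gradient to the main decoder
print(enc.weight.grad is not None)  # True: the encoder is still updated via the aux path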

Thanks and regards.

Unexpectedly high performance of semi-supervised learning

Hi, thank you for sharing the code for such a great paper.
When I tried to reproduce semi-supervised learning with 1.5K pixel labels from table 1, I got 72.0, which is considerably higher than the reported 69.4. I set use_weak_lables: false, supervised: false, semi: true and epochs: 50. If my understanding of your paper is correct, this is the right setting for CCT 1.5K in table 1. For your information, here is the whole config I used. Am I missing something?

{
    "name": "CCT",
    "experim_name": "CCT",
    "n_gpu": 1,
    "n_labeled_examples": 1464,
    "diff_lrs": true,
    "ramp_up": 0.1,
    "unsupervised_w": 30,
    "ignore_index": 255,
    "lr_scheduler": "Poly",
    "use_weak_lables": false,
    "weakly_loss_w": 0.4,
    "pretrained": true,

    "model":{
        "supervised": false,
        "semi": true,
        "supervised_w": 1,

        "sup_loss": "CE",
        "un_loss": "MSE",

        "softmax_temp": 1,
        "aux_constraint": false,
        "aux_constraint_w": 1,
        "confidence_masking": false,
        "confidence_th": 0.5,

        "drop": 6,
        "drop_rate": 0.5,
        "spatial": true,
    
        "cutout": 6,
        "erase": 0.4,
    
        "vat": 2,
        "xi": 1e-6,
        "eps": 2.0,

        "context_masking": 2,
        "object_masking": 2,
        "feature_drop": 6,

        "feature_noise": 6,
        "uniform_range": 0.3
    },


    "optimizer": {
        "type": "SGD",
        "args":{
            "lr": 1e-2,
            "weight_decay": 1e-4,
            "momentum": 0.9
        }
    },


    "train_supervised": {
        "data_dir": "VOCtrainval_11-May-2012",
        "batch_size": 10,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_supervised",
        "num_workers": 8
    },

    "train_unsupervised": {
        "data_dir": "VOCtrainval_11-May-2012",
        "weak_labels_output": "pseudo_labels/result/pseudo_labels",
        "batch_size": 10,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_unsupervised",
        "num_workers": 8
    },

    "val_loader": {
        "data_dir": "VOCtrainval_11-May-2012",
        "batch_size": 1,
        "val": true,
        "split": "val",
        "shuffle": false,
        "num_workers": 4
    },

    "trainer": {
        "epochs": 50,
        "save_dir": "/home/whbae/scratch/CCT.tmp/1464_semi",
        "save_period": 5,
  
        "monitor": "max Mean_IoU",
        "early_stop": 10,
        
        "tensorboardX": true,
        "log_dir": "saved/",
        "log_per_iter": 20,

        "val": true,
        "val_per_epochs": 5
    }
}

Failing to reproduce your paper's semi-supervised results

I used the default config file to conduct experiments, but I only got 68.9 mIoU without weak labels and 70.09 mIoU with weak labels, following your readme. These results are noticeably lower than yours. My environment is PyTorch 1.7.0 and Python 3.8.5. Could you provide some advice?

Weird behavior (model trained on a custom dataset)

Hello, Thank you so much for the great work.

I am trying to train it on a custom dataset which contains 7 classes; we assigned pixel values 1-7 to the classes and 0 to the background. We generated some labels (shown below); a label looks like a black image, but the pixel values for the object are not 0. I also built a custom dataloader for it.
[label image]

After training, here is a sample prediction, and I am not sure what causes this. The black part in the middle is where the object is located; the model is literally predicting background there for some reason.
[sample prediction]

Is the label image format correct?

Loss function

Thank you for your contribution. I want to know what |Dl| and |Du| represent in your cross-entropy loss formula and your semi-supervised loss formula. Thank you for your answer.
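
For reference, as I read the paper, |D_l| and |D_u| are the cardinalities (numbers of examples) of the labeled and unlabeled sets, used to average the per-example losses; schematically:

\mathcal{L} = \mathcal{L}_s + \omega_u \mathcal{L}_u, \qquad
\mathcal{L}_s = \frac{1}{|D_l|} \sum_{(\mathbf{x}_i,\, y_i) \in D_l} \mathbf{H}\big(y_i,\, f(\mathbf{x}_i)\big), \qquad
\mathcal{L}_u = \frac{1}{|D_u|} \sum_{\mathbf{x}_j \in D_u} \frac{1}{K} \sum_{k=1}^{K} d\big(g_k^{a}(\tilde{\mathbf{z}}_j),\, g(\mathbf{z}_j)\big)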

Questions about training my own datasets

Hi! Thank you for your contributions to this repo!

Here, I have a question about training a model on a self-collected dataset. When I used this code, I changed self.num_classes=21 to self.num_classes=5 and the other parts related to num_classes, leaving the rest unchanged. However, some classes' IoU turns out to be 0, even on the train set. I also turned off the "semi" setting and used "supervised" mode, and there was no improvement for those classes' IoU.

There are two interesting facts. First, I trained with DeepLab-V3+ before, and every class had a non-zero IoU. Second, when I trained the CCT model with self.num_classes=21 and the other hyper-parameters the same as in this repo, every class had a non-zero IoU too. That is, if I simply change self.num_classes (whatever the mode is), the performance of some classes collapses.

Can you comment on this or give some advice on how to train with a different dataset? Thanks!

Input for the unsupervised branch

Hi, thank you so much for the great work.
I am trying to use your implementation for another application. I was wondering if you could give me some info about the file format for the unsupervised branch.
When I run the code, I get ['/JPEGImages/2011_003274.jpg', '/SegmentationClassAug/2011_003274.png'] as one example of the 'file_list' generated by 'dataloaders/voc.py' when 'self.split' is set to 'train_unsupervised'.
Now my question is: should I expect /SegmentationClassAug/2011_003274.png to have no label or annotation, because it is unsupervised? That is not the case: although /SegmentationClassAug/2011_003274.png seems all black at first look, the pixels depicting the object in it are colored with the value of their class.

I appreciate your clarification.

Reproducing Cross-domain Experiments

@yassouali @SuzannaLin Firstly, thanks a lot for your great work! But I still have problems reproducing the cross-domain experiments.

I have seen the implementation at https://github.com/tarun005/USSS_ICCV19; however, I notice that there are still many possible training schemes.

During training, in each iteration (i.e. each execution of optimizer.step()), do you train the network by forwarding inputs from both datasets? Or do you forward inputs from only one dataset in the current iteration, execute optimizer.step(), then forward inputs from the other dataset in the next iteration, and so on?
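
To make the question concrete, here is a toy sketch of the two schemes (placeholder model and data, not the repo's code):

import itertools
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                            # stand-in network
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loader_a = [torch.randn(4, 8) for _ in range(3)]   # stand-in batches, dataset A
loader_b = [torch.randn(4, 8) for _ in range(3)]   # stand-in batches, dataset B

# Scheme 1: forward both datasets in each iteration, one step on the summed loss
for xa, xb in zip(loader_a, loader_b):
    loss = model(xa).pow(2).mean() + model(xb).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Scheme 2: alternate datasets between iterations, one optimizer step per batch
for x in itertools.chain.from_iterable(zip(loader_a, loader_b)):
    loss = model(x).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()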

Also, are there any tricks for dealing with the data-imbalance situation, e.g. the CamVid dataset only contains 367 images while the Cityscapes dataset has 2975 training images? (For instance, constructing each training batch with different ratios of the two datasets.)

Besides, can you give some hints on hyperparameters, e.g. the number of training iterations, batch size, learning rate, weight decay?

Looking forward to your reply! Thanks a lot!

Custom dataset with one class

How can I use this model for a custom dataset with one class? I get an error; I changed the cross-entropy loss to a binary loss but got another error. Can you help me?
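
A toy sketch of the two usual options (my assumption, not the repo's code): either keep cross-entropy with two classes, or switch to single-channel logits with a binary loss. Shapes here are illustrative only:

import torch
import torch.nn.functional as F

# Option A: keep cross-entropy with two classes (background + object)
logits = torch.randn(2, 2, 320, 320)         # N x num_classes x H x W
target = torch.randint(0, 2, (2, 320, 320))  # N x H x W with values {0, 1}
loss_ce = F.cross_entropy(logits, target)

# Option B: single-channel logits with a binary loss (float targets, same shape)
logits_bin = torch.randn(2, 1, 320, 320)
loss_bce = F.binary_cross_entropy_with_logits(logits_bin, target.unsqueeze(1).float())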

saving best model

Hi Yassine,
During training, is the latest model always saved, or only the best?
If the best model is saved, on which metric is that decision based?
For example, this is what I got after 76 epochs:
[screenshot of the epoch-76 results]

but at epoch 78 the model stopped, and its mIoU on the EVAL set is lower:
[screenshot of the epoch-78 results]

Error with Inference

Hello!

I am getting: RuntimeError: Error(s) in loading state_dict for DataParallel: size mismatch for module.main_decoder.upsample. What might cause this error? I am using images of the same size as the training dataset images.

Akmaral

Base_dataset problem

Hi,
Thanks so much for sharing the code! It really helps a lot!

In your base_dataset.py, you pad the image with the normalization values here. Then, in the dataloader's getitem, you normalize everything again. Does that mean the padded values are normalized twice? Could this cause a drop in accuracy?

Cheers

SegmentationClassAug for training

Hello!

What is the reason for using SegmentationClassAug instead of SegmentationClass?

I've just finished training the model with standard pixel-wise annotated images, and the result is not good. I was wondering whether it is related to my labels.

Thanks :)

hyperparameter tuning

Hi! Thanks for your great work.
I am also trying to run the experiment on my own dataset, but I find that semi-supervised training improves IoU by only 0.5% over fully supervised on my data. Could you give me some suggestions for hyperparameter tuning to improve the semi-supervised result?
Thanks.

image resize to 512×1024

Hello!

As I understand it, the image size is controlled by crop_size in config.json, which is 320; in this case images are cropped to 320x320. How can I assign two different values for the width and height?
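
For illustration, an explicit non-square resize would look like this (a sketch, not the repo's config API; the repo's crop_size is a single int):

import torch
import torch.nn.functional as F

img = torch.randn(1, 3, 600, 1200)  # toy N x C x H x W tensor
img_resized = F.interpolate(img, size=(512, 1024), mode='bilinear', align_corners=True)
print(img_resized.shape)            # torch.Size([1, 3, 512, 1024])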

Thank you!

Unsupervised loss not back-propagated through the main decoder

Hi Yassine,

Thanks for your great work! I noticed you mention in the paper that the unsupervised loss is not back-propagated through the main decoder. From my understanding, does this mean the main decoder's trainable parameters are optimized only through the supervised loss?

Can you please help me to figure out where the implementation is?

Many thanks,

Run on custom dataset

Hello,

I was wondering how I could run the code in this repo on a custom dataset. Would it be enough to create two PyTorch DataLoaders, one for labeled and one for unlabeled data?

Thanks

Difference in the loss computation between paper and code

I'm trying to re-implement your approach.

In the paper in figure 3 you say "The unsupervised loss is then computed between the outputs of the auxiliary decoders and that of the main decoder.".

[figure 3 from the paper]

But in model.py (master, lines 114-125) you actually do the following:

# Get main prediction
x_ul = self.encoder(x_ul)
output_ul = self.main_decoder(x_ul)

# Get auxiliary predictions
outputs_ul = [aux_decoder(x_ul, output_ul.detach()) for aux_decoder in self.aux_decoders] 
targets = F.softmax(output_ul.detach(), dim=1)

# Compute unsupervised loss
loss_unsup = sum([
    self.unsuper_loss(inputs=u, targets=targets, conf_mask=self.confidence_masking, threshold=self.confidence_th, use_softmax=False)
    for u in outputs_ul
])
  • compute the main decoder's output
  • compute the aux decoders' outputs (I cannot understand why you pass output_ul.detach(), as it is ignored in aux_decoder's forward)
  • compute the softmax of the aux decoders' outputs
  • compute the loss between each element of the aux decoders' output and the softmaxed aux decoders' output

The main decoder's output is not used in the unsupervised loss computation, as it is absent from output_ul. In this line it is redefined.

Am I wrong, or is your actual unsupervised loss computation slightly different from what is stated in the arXiv preprint?

How to obtain figure 2(d)

Hi, thank you for your nice work!

I want to know how to produce figure 2(d). The hidden representation has 2048 channels; how do you visualize it? Thanks for your help!
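
One common way to visualize such high-dimensional features (my guess at the method, not necessarily what the authors did) is to treat each pixel's 2048-d feature vector as a point and project the set of points to 2-D with t-SNE:

import numpy as np
from sklearn.manifold import TSNE

# hypothetical inputs: encoder features (2048, H, W) and a matching (H, W) label map
feats = np.random.randn(2048, 40, 40).astype(np.float32)
labels = np.random.randint(0, 21, (40, 40))

points = feats.reshape(2048, -1).T               # (H*W, 2048): one point per pixel
emb = TSNE(n_components=2, init='pca').fit_transform(points)
# scatter-plot emb[:, 0] vs emb[:, 1], colored by labels.reshape(-1)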

Slow backward() speed when using torch.nn.parallel.DistributedDataParallel

Thanks a lot for your great work and clean repo. @yassouali

I have run your code on two GPUs and got at best 70.9 mIoU under your default config. However, it took 28 hours to train on 2 Titan Xp GPUs, which is somewhat slow. Also, taking sync batchnorm into consideration, I wanted to equip your code with PyTorch's official DistributedDataParallel and its syncBN to accelerate training and increase accuracy. But I found that it is even slower than normal DataParallel, and the best performance is only 69.9. After checking, with DistributedDataParallel the loss.backward() operation consumes ~1.3 s per step, while with normal DataParallel it only consumes ~0.4 s. The gradients also differ slightly from normal DataParallel.

Did you make any optimization to your code around the backward() operation? Have you tried DistributedDataParallel? How can syncBN be applied to your code? Is there any way to accelerate training? Would you mind giving me some advice?
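
For completeness, the standard PyTorch recipe for this (generic, not specific to this repo) is roughly the following; note that find_unused_parameters=True itself adds an extra graph traversal and slows backward():

import torch
import torch.nn as nn

def wrap_for_ddp(model: nn.Module, local_rank: int) -> nn.Module:
    # convert all BatchNorm layers to SyncBatchNorm (requires an initialized
    # process group), then wrap the model with DistributedDataParallel
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model).cuda(local_rank)
    # find_unused_parameters=True is needed when some decoders are skipped in a
    # forward pass, but it is a known source of backward() overhead
    return nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank], find_unused_parameters=True)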

Looking forward to your reply. Thanks a lot.

Strange errors when training with 2x GPUs

Really impressive work and excellent results!

Here we have tried to reproduce your results with 2x 16G V100 GPUs as we have no access to 32G V100 GPUs (your experiment requires 32G V100, right?)
During the training stage, we also encounter the following errors:

[screenshot of the errors]

It would be great if you could give me some advice on possible reasons. My guess is that PSPNet's global pooling is quite special, and the BN statistics over the globally pooled features might cause these errors.

Labels for unsupervised

I want to use a custom dataset. But for the VOC dataset, I see that the file 1464_train_unsupervised lists both an image path and a label path. I don't understand: an unsupervised approach assumes we have no labels. Please help me understand this.

custom dataset with 4 classes

Thank you so far for all your great help.
I have an issue that I also found in the closed issues, but for me it isn't solved.
I have my own custom dataset with 4 classes (background and 3 objects, labeled 0-3), so I changed num_classes = 4 in voc.py.
The results when training fully supervised are shown in the image below; one class has an IoU of 0.0.
[screenshot of the per-class IoU]
I ran multiple tests using semi- and weakly-supervised settings; the results are unpredictable and often show 0.0 for the object classes. Only the background scores well.
Is there something I need to adjust in the code?

Error in training the model

Dear authors,

When I tried to train the model following your instructions, the following ValueError popped up. How can I solve it?

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])
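A common workaround for this error (my suggestion, not the repo's documented fix) is to drop the trailing incomplete batch, so a batch of size 1 never reaches BatchNorm during training:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(21, 3, 320, 320))  # 21 % 10 == 1 -> last batch has size 1
loader = DataLoader(dataset, batch_size=10, shuffle=True, drop_last=True)
# with drop_last=True the size-1 batch is skipped instead of crashing BatchNorm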

Best regards,
Jiawei

ohem loss

Hi, it's really nice work.

CCT does NOT work when I change the supervised loss function from cross-entropy to OHEM: the mean IoU on the eval set is almost the same for fully supervised and semi-supervised, and I don't know why this happens. By the way, have you ever tried OHEM loss?
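For reference, the OHEM variant meant here is roughly the following (a minimal sketch of my own, keeping only the hardest fraction of pixels; keep_ratio is an illustrative choice):

import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, target, keep_ratio=0.25, ignore_index=255):
    # per-pixel CE, averaged only over the hardest keep_ratio of pixels
    pixel_losses = F.cross_entropy(logits, target, ignore_index=ignore_index,
                                   reduction='none').flatten()
    k = max(1, int(pixel_losses.numel() * keep_ratio))
    return pixel_losses.topk(k).values.mean()

loss = ohem_cross_entropy(torch.randn(2, 21, 64, 64), torch.randint(0, 21, (2, 64, 64)))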

Training error!

I want to train on VOC2012, but get the error below:

Traceback (most recent call last):
  File "train.py", line 98, in <module>
    main(config, args.resume)
  File "train.py", line 82, in main
    trainer.train()
  File "/home/byronnar/bigfile/projects/CCT/base/base_trainer.py", line 91, in train
    results = self._train_epoch(epoch)
  File "/home/byronnar/bigfile/projects/CCT/trainer.py", line 76, in _train_epoch
    total_loss, cur_losses, outputs = self.model(x_l=input_l, target_l=target_l, x_ul=input_ul, curr_iter=batch_idx, target_ul=target_ul, epoch=epoch-1)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/byronnar/bigfile/projects/CCT/models/model.py", line 93, in forward
    output_l = self.main_decoder(self.encoder(x_l))
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/byronnar/bigfile/projects/CCT/models/encoder.py", line 61, in forward
    x = self.psp(x)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/byronnar/bigfile/projects/CCT/models/encoder.py", line 36, in forward
    align_corners=False) for stage in self.stages])
  File "/home/byronnar/bigfile/projects/CCT/models/encoder.py", line 36, in <listcomp>
    align_corners=False) for stage in self.stages])
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
    exponential_average_factor, self.eps)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1652, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])
  0%|                                                                                                         | 0/9118 [00:02<?, ?it/s]

What should I do? Thank you!

low performance for full supervised setting

I modified the config file to set the code to 'supervised' mode, but the result seems very low:
Epoch : 40 | Mean_IoU : 0.699999988079071 | PixelAcc : 0.933 | Val Loss : 0.26163
compared with 'semi' mode:

Epoch : 40 | Mean_IoU : 0.7120000123977661 | PixelAcc : 0.931 | Val Loss : 0.31637

Note that I have changed the supervised list to the 10k+ augmented list in the 'supervised' setting.
Did I miss something here?

adjusting the model for input images with 4 channels

Hi Yassine,
I want to use your model to train a segmentation model on satellite imagery with 4 channels: RGB and height. So far I've got it training on the RGB images only; now I want to change the model/encoder to ingest a 4th channel. I can't find where the number of input channels is specified. In _PSPModule it is set to 2048. What does this mean?
How do I adjust the code to train on 4-channel images?
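For what it's worth, the 2048 in _PSPModule should be the encoder's output channel count (the last ResNet stage), not the image's input channels; the input channels are fixed by the backbone's first convolution. A minimal sketch of the usual surgery, assuming a torchvision-style ResNet backbone (the names and the mean-init scheme for the 4th channel are my assumptions):

import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50()                # load pretrained weights here in practice
old = backbone.conv1                 # Conv2d(3, 64, kernel_size=7, stride=2, ...)
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding, bias=False)
with torch.no_grad():
    new.weight[:, :3] = old.weight                             # reuse the RGB filters
    new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)   # init the height channel
backbone.conv1 = new
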
Thanks for your help!

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])

Hello,

I tried to use my customized dataset with image size 256x256 and 12 classes. I'm not using your voc.py and dataloader code; I am using my own dataloaders, as follows, but I get this error after some iterations:

ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1]) T (1) | Ls 2.51 Lu 0.00 Lw 0.00 PW 0.00 m1 0.04 m2 0.04|: 1%|▍ | 20/1894 [00:11<18:32, 1.68it/s]

Dataloaders in train.py file:

import math
from torch.utils.data import DataLoader, random_split

num_classes = 12

# SLD_Labeled / SLD_Unlabeled are my custom dataset classes
supervised_data = SLD_Labeled()
other_data = SLD_Unlabeled()

labeled_percentage = 0.2
all_data_size = len(other_data)
labeled_size = math.ceil(labeled_percentage * all_data_size)
unlabeled_size = all_data_size - labeled_size
unsupervised_data, val_data = random_split(other_data, [unlabeled_size, labeled_size])

# note: a trailing batch of size 1 will crash BatchNorm during training;
# drop_last=True on the training loaders would avoid that
supervised_loader = DataLoader(dataset=supervised_data, batch_size=4, shuffle=True)
unsupervised_loader = DataLoader(dataset=unsupervised_data, batch_size=4, shuffle=True)
val_loader = DataLoader(dataset=val_data, batch_size=4, shuffle=False)

Error on custom dataset

I am having trouble training on medical images. The dataset has RGB images and mask images and contains only one class. I just replaced the paths and set num_classes = 2 in voc.py. However, I am getting this error:

RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [2, 320, 320, 3].

Can you help me? Also, what should I change in pallete.py?
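The target size [2, 320, 320, 3] suggests the masks are still RGB; cross-entropy wants an (N, H, W) map of class indices. A minimal conversion sketch (the palette colors here are hypothetical placeholders for your dataset's actual colors):

import numpy as np

PALETTE = {(0, 0, 0): 0, (255, 255, 255): 1}  # RGB -> class index (placeholder colors)

def rgb_mask_to_index(mask_rgb: np.ndarray) -> np.ndarray:
    # (H, W, 3) uint8 RGB mask -> (H, W) int64 class-index map
    out = np.zeros(mask_rgb.shape[:2], dtype=np.int64)
    for color, idx in PALETTE.items():
        out[(mask_rgb == color).all(axis=-1)] = idx
    return out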

Fix code indentation

  • /utils/losses.py has two lines with indentation errors:
    • alpha = 1/alpha # inverse
    • idx[idx==255]=0

Poor mIoU when training on 1464 images in a supervised manner

Hi,
I trained the model on the 1464 images in a supervised manner. The highest mIoU on Val is 67.00%, but the reported number in your paper is 69.4%. Here is my config.json file. Can you take a look and see which part is wrong?

 {  "name": "CCT",
    "experim_name": "CCT",
    "n_gpu": 1,
    "n_labeled_examples": 1464,
    "diff_lrs": true,
    "ramp_up": 0.1,
    "unsupervised_w": 30,
    "ignore_index": 255,
    "lr_scheduler": "Poly",
    "use_weak_lables":false,
    "weakly_loss_w": 0.4,
    "pretrained": true,
    "model":{
        "supervised": true,
        "semi": false,
        "supervised_w": 1,

        "sup_loss": "CE",
        "un_loss": "MSE",

        "softmax_temp": 1,
        "aux_constraint": false,
        "aux_constraint_w": 1,
        "confidence_masking": false,
        "confidence_th": 0.5,

        "drop": 6,
        "drop_rate": 0.5,
        "spatial": true,
    
        "cutout": 6,
        "erase": 0.4,
    
        "vat": 2,
        "xi": 1e-6,
        "eps": 2.0,

        "context_masking": 2,
        "object_masking": 2,
        "feature_drop": 6,

        "feature_noise": 6,
        "uniform_range": 0.3
    },


    "optimizer": {
        "type": "SGD",
        "args":{
            "lr": 1e-2,
            "weight_decay": 1e-4,
            "momentum": 0.9
        }
    },


    "train_supervised": {
        "data_dir": "../data/VOC2012",
        "batch_size": 10,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_supervised",
        "num_workers": 8
    },

    "train_unsupervised": {
        "data_dir": "VOCtrainval_11-May-2012",
        "weak_labels_output": "pseudo_labels/result/pseudo_labels",
        "batch_size": 10,
        "crop_size": 320,
        "shuffle": true,
        "base_size": 400,
        "scale": true,
        "augment": true,
        "flip": true,
        "rotate": false,
        "blur": false,
        "split": "train_unsupervised",
        "num_workers": 8
    },

    "val_loader": {
        "data_dir": "../data/VOC2012",
        "batch_size": 1,
        "val": true,
        "split": "val",
        "shuffle": false,
        "num_workers": 4
    },

    "trainer": {
        "epochs": 80,
        "save_dir": "saved/",
        "save_period": 5,
  
        "monitor": "max Mean_IoU",
        "early_stop": 10,
        
        "tensorboardX": true,
        "log_dir": "saved/",
        "log_per_iter": 20,

        "val": true,
        "val_per_epochs": 5
    }
}

It seems that there are some problems in the code?

It seems that there are some problems in the code. Could you please check it and tell me if I got it wrong? Thanks!

def _compute_metrics(self, outputs, target_l, target_ul, epoch):
    seg_metrics_l = eval_metrics(outputs['sup_pred'], target_l, self.num_classes, self.ignore_index)
    self._update_seg_metrics(*seg_metrics_l, True)
    seg_metrics_l = self._get_seg_metrics(True)
    self.mIoU_l, self.pixel_acc_l, self.class_iou_l = seg_metrics_l.values()

    if self.mode == 'semi':
        seg_metrics_ul = eval_metrics(outputs['unsup_pred'], target_ul, self.num_classes, self.ignore_index)
        self._update_seg_metrics(*seg_metrics_ul, False)
        seg_metrics_ul = self._get_seg_metrics(False)
        self.mIoU_ul, self.pixel_acc_ul, self.class_iou_ul = seg_metrics_ul.values()


def _update_seg_metrics(self, correct, labeled, inter, union, supervised=True):
    if supervised:
        self.total_correct_l += correct
        self.total_label_l += labeled
        self.total_inter_l += inter
        self.total_union_l += union
    else:
        self.total_correct_ul += correct
        self.total_label_ul += labeled
        self.total_inter_ul += inter
        self.total_union_ul += union


def _get_seg_metrics(self, supervised=True):
    if supervised:
        pixAcc = 1.0 * self.total_correct_l / (np.spacing(1) + self.total_label_l)
        IoU = 1.0 * self.total_inter_l / (np.spacing(1) + self.total_union_l)
    else:
        pixAcc = 1.0 * self.total_correct_ul / (np.spacing(1) + self.total_label_ul)
        IoU = 1.0 * self.total_inter_ul / (np.spacing(1) + self.total_union_ul)
    mIoU = IoU.mean()
    return {
        "Pixel_Accuracy": np.round(pixAcc, 3),
        "Mean_IoU": np.round(mIoU, 3),
        "Class_IoU": dict(zip(range(self.num_classes), np.round(IoU, 3)))
    }

The code unpacks

self.mIoU_ul, self.pixel_acc_ul, self.class_iou_ul = seg_metrics_ul.values()

but _get_seg_metrics builds the dict in the order

"Pixel_Accuracy": np.round(pixAcc, 3),
"Mean_IoU": np.round(mIoU, 3),
"Class_IoU": dict(zip(range(self.num_classes), np.round(IoU, 3)))

so the returned values do not correspond to the unpacking order.
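If this reading is right, a fix (my sketch) would be to unpack in the dict's insertion order, which Python 3.7+ dicts preserve:

# match the order in which _get_seg_metrics builds the dict
self.pixel_acc_ul, self.mIoU_ul, self.class_iou_ul = seg_metrics_ul.values()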

using weakly supervised dataset

Hello Yassine,

I have some questions about adding weakly labeled images.
First, I generated pseudo labels using IRN, and the 10,582 .png files were saved correctly. Then I set use_weak_labels: true in the config file. However, when I run the main train.py script, it gives me the following error (see image as well):

TypeError: forward() missing 2 required positional arguments: 'curr_iter' and 'epoch'
(with use_weak_labels: false I do not have this issue)
[screenshot of the error]

Second, how exactly are the pseudo-labeled images used? With the weakly-labeled method, does the model still train on the labeled and unlabeled datasets as well? Does each iteration use the same number of labeled, unlabeled, and pseudo-labeled images? Will some images appear in both the labeled and pseudo-labeled sets, or in both the unlabeled and pseudo-labeled sets?

I hope my questions are clear. I would appreciate your help.

PS: how many epochs did the model require to achieve the results published in the paper?

Cityscapes/CamVid config file

Hello,

I'm interested in running additional experiments using CCT on Cityscapes and possibly CamVid. Could you please provide a config.json for this setting similar to what was provided for VOC?

I expect to have to make some adjustments, but having the configuration you used would be much appreciated.

Thanks!
