deta's People

Contributors

jozhang97, sangbumchoi

deta's Issues

indices should be either on cpu or on the same device as the indexed tensor (cpu)

When I train the model, I run into the following error:

  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "main.py", line 345, in <module>
    main(args)
  File "main.py", line 295, in main
    model, criterion, data_loader_train, optimizer, device, epoch, args.clip_max_norm)
  File "/hdd/jy/code/DETA/engine.py", line 43, in train_one_epoch
    loss_dict = criterion(outputs, targets)
  File "/home/jinying/miniconda3/envs/deta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd/jy/code/DETA/models/deformable_detr.py", line 398, in forward
    indices = self.stg1_assigner(enc_outputs, bin_targets)
  File "/home/jinying/miniconda3/envs/deta/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd/jy/code/DETA/models/assigner.py", line 326, in forward
    pos_pr_inds = all_pr_inds[matched_labels == 1]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
Traceback (most recent call last):
  File "./tools/launch.py", line 192, in <module>
    main()
  File "./tools/launch.py", line 188, in main
    cmd=process.args)
subprocess.CalledProcessError: Command '['./configs/deta.sh', '--coco_path', '/hdd/jy/code/data/coco2017']' returned non-zero exit status 1.

Why does this happen? Is the problem with the code or with my training environment?
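
For context, a minimal sketch of the class of error being reported, and one possible workaround (an assumption, not a confirmed fix from the maintainers): the boolean mask used for indexing lives on a different device than the tensor being indexed.

import torch

# Reproduces the error class: a CUDA tensor indexed with a CPU bool mask.
t = torch.arange(10, device="cuda")
mask = torch.zeros(10, dtype=torch.bool)  # left on CPU
mask[3] = True
# t[mask] would raise: "indices should be either on cpu or on the same
# device as the indexed tensor". Moving the mask over avoids it:
out = t[mask.to(t.device)]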

assign_second_stage is performed only once

The second-stage matching is performed with the decoder's input 'init reference points', instead of after each decoder layer as in other DETRs.
Have you ever tried performing label assignment after each decoder layer?

Error in Table 6

[screenshot of Table 6]

@jozhang97
Thanks for sharing your work!

It seems that the APs value in Table 6 is suspiciously high. I think the order of the APL and APs values should be swapped.

Sometimes fails to meet pre_nms_topk with only two classes

I am running DETA on a dataset with only one real class (plus one N/A class; in particular, various tensors are n by 2). In some long runs, training fails with RuntimeError: selected index k out of range at the line below:

pre_nms_inds.append(torch.topk(prop_logits_b.sigmoid() * lvl_mask, pre_nms_topk)[1])

If I understand correctly, this should only fail if the number k requested from topk (here pre_nms_topk, which is 1000) exceeds the available length; specifically, I believe this can only happen if the length of lvl_mask is less than 1000. (Perhaps my data augmentation has produced an unreasonably tiny image? I thought they were all rescaled.) I don't fully understand where in the code this occurs, but would it be harmful to trim the k supplied to topk down to the available length, as sketched below?
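
Concretely, something like this (reusing the names from the line above; whether capping k is safe downstream is exactly my question):

scores = prop_logits_b.sigmoid() * lvl_mask
k = min(pre_nms_topk, scores.shape[-1])  # never request more elements than exist
pre_nms_inds.append(torch.topk(scores, k)[1])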

CUDA error: device-side assert triggered

I get the same error when I run either of the following commands on the COCO dataset: "GPUS_PER_NODE=4 ./tools/run_dist_launch.sh 4 ./configs/deta_swin_ft.sh --coco_path /mnt/home/dataset/coco --finetune /mnt/home/DETA/adet_swin_pt_o365.pth" or "./configs/deta.sh --eval --coco_path ./data/coco --resume ./adet_checkpoint0011.pth".
My environment is PyTorch 1.8.1 with CUDA 11.1, and Deformable-DETR trains fine in it without this error.
The details of the error are as follows:

Test: [ 0/2500] eta: 1:19:14 class_error: 0.00 loss: 14.3390 (14.3390) loss_ce: 0.6692 (0.6692) loss_bbox: 0.2385 (0.2385) loss_giou: 0.8719 (0.8719) loss_ce_0: 0.7682 (0.7682) loss_bbox_0: 0.2413 (0.2413) loss_giou_0: 0.8721 (0.8721) loss_ce_1: 0.7386 (0.7386) loss_bbox_1: 0.2372 (0.2372) loss_giou_1: 0.8720 (0.8720) loss_ce_2: 0.7082 (0.7082) loss_bbox_2: 0.2383 (0.2383) loss_giou_2: 0.8715 (0.8715) loss_ce_3: 0.6925 (0.6925) loss_bbox_3: 0.2384 (0.2384) loss_giou_3: 0.8715 (0.8715) loss_ce_4: 0.6827 (0.6827) loss_bbox_4: 0.2385 (0.2385) loss_giou_4: 0.8716 (0.8716) loss_ce_enc: 1.1769 (1.1769) loss_bbox_enc: 0.4718 (0.4718) loss_giou_enc: 1.7679 (1.7679) loss_ce_unscaled: 0.6692 (0.6692) class_error_unscaled: 0.0000 (0.0000) loss_bbox_unscaled: 0.0477 (0.0477) loss_giou_unscaled: 0.4359 (0.4359) cardinality_error_unscaled: 889.5000 (889.5000) loss_ce_0_unscaled: 0.7682 (0.7682) loss_bbox_0_unscaled: 0.0483 (0.0483) loss_giou_0_unscaled: 0.4361 (0.4361) cardinality_error_0_unscaled: 886.5000 (886.5000) loss_ce_1_unscaled: 0.7386 (0.7386) loss_bbox_1_unscaled: 0.0474 (0.0474) loss_giou_1_unscaled: 0.4360 (0.4360) cardinality_error_1_unscaled: 889.5000 (889.5000) loss_ce_2_unscaled: 0.7082 (0.7082) loss_bbox_2_unscaled: 0.0477 (0.0477) loss_giou_2_unscaled: 0.4358 (0.4358) cardinality_error_2_unscaled: 889.5000 (889.5000) loss_ce_3_unscaled: 0.6925 (0.6925) loss_bbox_3_unscaled: 0.0477 (0.0477) loss_giou_3_unscaled: 0.4358 (0.4358) cardinality_error_3_unscaled: 889.5000 (889.5000) loss_ce_4_unscaled: 0.6827 (0.6827) loss_bbox_4_unscaled: 0.0477 (0.0477) loss_giou_4_unscaled: 0.4358 (0.4358) cardinality_error_4_unscaled: 889.5000 (889.5000) loss_ce_enc_unscaled: 1.1769 (1.1769) loss_bbox_enc_unscaled: 0.0944 (0.0944) loss_giou_enc_unscaled: 0.8839 (0.8839) cardinality_error_enc_unscaled: 22179.5000 (22179.5000) time: 1.9019 data: 0.6984 max mem: 1327
/opt/conda/conda-bld/pytorch_1616554793803/work/aten/src/ATen/native/cuda/IndexKernel.cu:142: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
[the same assertion repeats for threads [1,0,0] through [127,0,0]]
Traceback (most recent call last):
  File "main.py", line 346, in <module>
    main(args)
  File "main.py", line 284, in main
    test_stats, coco_evaluator = evaluate(model, criterion, postprocessors,
  File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/home/DETA/engine.py", line 110, in evaluate
    loss_dict = criterion(outputs, targets)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/home/DETA/models/deformable_detr.py", line 398, in forward
    indices = self.stg1_assigner(enc_outputs, bin_targets)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/home/DETA/models/assigner.py", line 328, in forward
    pos_pr_inds = all_pr_inds[matched_labels == 1]
RuntimeError: CUDA error: device-side assert triggered
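
A standard first step for localizing device-side asserts like this (generic CUDA/PyTorch advice, not DETA-specific) is to force synchronous kernel launches so the Python traceback points at the op that actually failed; a minimal sketch:

import os

# Must be set before the first CUDA op; with asynchronous launches the assert
# otherwise surfaces at a later, unrelated synchronization point.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # import (and all CUDA work) happens after the variable is set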

Is it possible to use IoU assignment only in the final assignment?

Hi,

Many thanks for your novel and interesting work.

In the paper, all experiments follow the two-stage DETR framework (i.e., the input queries of the decoder are the first-stage proposals). Have you ever tried the vanilla DETR framework (where the input queries of the decoder are learnable) with IoU assignment?

To be more general, can we just relax the one-to-one matching constraint and instead allow one-to-many assignments in the final assignment?

mAP question

Thank you for sharing the code. I'd like to ask how you set up the 50-epoch training schedule. I verified that the 12- and 24-epoch schedules are fine, but the 50-epoch results for Deformable-DETR are almost the same as the 24-epoch results. Is adding more epochs not helpful?

Conflicting label assignment results

Thank you for your great work!
I'm confused by the label-assignment code in the function sample_topk_per_gt():

# Collapse duplicate GT indices and count how often each GT was matched.
gt_inds2, counts = gt_inds.unique(return_counts=True)
# For each unique GT, take the k proposals with the highest IoU.
scores, pr_inds2 = iou[gt_inds2].topk(k, dim=1)
# Broadcast each GT index across its k selected proposals.
gt_inds2 = gt_inds2[:,None].repeat(1, k)
# Keep only as many top-k proposals per GT as that GT originally had matches.
pr_inds3 = torch.cat([pr[:c] for c, pr in zip(counts, pr_inds2)])
gt_inds3 = torch.cat([gt[:c] for c, gt in zip(counts, gt_inds2)])

From the code above, I gather that one object query can be matched to multiple ground truths, resulting in conflicting label assignments.
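
A toy check of this concern (assuming iou is laid out as [num_gt, num_proposals], which is my reading of the code):

import torch

# Two GTs, three proposals: proposal 0 has the highest IoU with both GTs.
iou = torch.tensor([[0.90, 0.80, 0.10],
                    [0.85, 0.20, 0.30]])
scores, pr_inds2 = iou.topk(2, dim=1)
print(pr_inds2)
# tensor([[0, 1],
#         [0, 2]])  -> proposal 0 is selected by both ground truths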

LVIS checkpoint

Hi, I wonder whether you have a plan to provide the checkpoint file for LVIS.

Thanks

Adding DETA to 🤗 Transformers

Hi DETA authors,

As this work is very nice and builds upon DETR and Deformable DETR, both of which are available in 🤗 Transformers, it was relatively straightforward to implement DETA as well (the only differences being a tweak in the loss function and the postprocessing).

Here's a notebook that illustrates inference with DETA models: https://colab.research.google.com/drive/1epI4ejrD0dbrSR9vRRhEPE7duoALqIk9?usp=sharing.

Now I'd also like to make a fine-tuning tutorial, illustrating how to fine-tune DETA on a custom dataset. For that I'm taking my original DETR fine-tuning tutorial and tweaking it for DETA. However, I ran into a question: I'm fine-tuning on the "balloon" dataset, which consists of only 1 class (balloon), and during inference I get an error stating that "topk is out of range". This is because of the line that selects the top 10,000 scores; when fine-tuning on a single class, the number of queries * number of classes = 300 * 1 = 300, which is smaller than 10,000. So I was wondering what the recommendation is when fine-tuning on a dataset with only a single class (or, more generally, on any custom dataset).
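
To make the mismatch concrete, here is a minimal sketch of the workaround I have in mind (clamping the requested k; my own idea, not necessarily the authors' recommendation):

import torch

num_queries, num_classes = 300, 1          # single-class fine-tuning
logits = torch.randn(num_queries, num_classes)
scores = logits.sigmoid().flatten()        # only 300 scores, far fewer than 10,000
k = min(10_000, scores.numel())            # clamp instead of a hard-coded 10,000
topk_scores, topk_inds = scores.topk(k)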

Also, I'm currently hosting the DETA checkpoints on my personal username on HuggingFace:

It would be cool if you could create an organization on the 🤗 Hub and host the checkpoints there (or under your own personal username if you prefer). That way, you can also write model cards (READMEs) for those repositories, etc. It seems there's already an org for the UT-data-bootcamp, but I'm not sure we should host the checkpoints there.

Let me know what you think!

Open-sourcely yours,

Niels
ML Engineer @ HF

GPU memory cost?

I trained the Swin-L model with a larger input (1200×2000) and batch size set to 1, using DDP mode. The training logs report a max memory of 11256, but the actual GPU memory usage is nearly 26 GB. Is this normal?
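
One possible explanation (an assumption, not a confirmed diagnosis): the "max mem" figure in the training logs typically comes from torch.cuda.max_memory_allocated(), which counts only live tensor allocations, while nvidia-smi also sees the caching allocator's reserved-but-unused pool and the CUDA context itself. Comparing the two counters makes the gap visible:

import torch

alloc_mb = torch.cuda.max_memory_allocated() / 2**20    # peak memory in live tensors
reserved_mb = torch.cuda.max_memory_reserved() / 2**20  # peak held by the caching allocator
print(f"allocated: {alloc_mb:.0f} MB, reserved: {reserved_mb:.0f} MB")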

Swin-L config without Objects365 pretraining and Objects365 pretraining setting

Hello,

Thank you for sharing the great work! We really find it useful.

We have a question regarding how to train your model using Swin-L without Objects365 pretraining (i.e., only ImageNet-21K pretraining). Would you mind sharing the config or any settings for us to try?

Additionally, the script for pretraining Swin-L on Objects365 (deta_swin_pre.sh) is missing. Could you also share that script and the Objects365 settings (e.g., whether all images are used, and so on)?

Thanks,

Self-attn in decoder layers.

I noticed the paper has a section on "DETA does not need self-attention in the decoder." The results show that when self-attention in the decoder is replaced by an FFN, performance improves. I wonder whether the final version in the comparison-with-other-SOTAs table uses this setting, because I found that self-attention is hard-coded in the decoder layer:

self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
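
For reference, a hedged sketch of the kind of replacement the paper's ablation presumably uses (layer sizes here are illustrative, not DETA's actual config):

import torch.nn as nn

class DecoderFFNBlock(nn.Module):
    # Stand-in for decoder self-attention: a position-wise FFN that keeps
    # the query tensor's shape unchanged, so it can slot into the same place.
    def __init__(self, d_model=256, d_ffn=1024, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ffn),
            nn.ReLU(inplace=True),
            nn.Dropout(dropout),
            nn.Linear(d_ffn, d_model),
        )

    def forward(self, tgt):
        return self.net(tgt)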
