rsprompter's Issues

torch.cuda.OutOfMemoryError: CUDA out of memory.

Hello, I ran into a CUDA out-of-memory error during training. Do you have any suggestions for resolving it? Thank you.

for example:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.76 GiB total capacity; 9.88 GiB already allocated; 69.12 MiB free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
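
A couple of generic mitigations (general PyTorch practice, not repo-specific fixes): lower train_batch_size_per_gpu in the config, and/or set the allocator option the error message suggests before the first CUDA allocation, for example:

import os

# Must be set before CUDA is initialized (e.g. at the very top of tools/train.py);
# the 128 MiB split size is only an example value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")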

How to save checkpoints and visualize during training?

Wonderful job! I ran the code with the default config on the WHU dataset. However, after 2 epochs the evaluation results are still 0.0000, no visualization masks are produced, and there are no checkpoints that can be used for prediction. How can I save checkpoints and visualize results during training?
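
For reference, a minimal Lightning-style sketch of a checkpoint callback (an assumption about how callbacks could be configured here; the directory is hypothetical and the monitored metric name is taken from the logs shown in other issues below):

from lightning.pytorch.callbacks import ModelCheckpoint

# Keep the best checkpoints according to the validation segmentation mAP and always keep the last one.
checkpoint_cb = ModelCheckpoint(
    dirpath="results/whu_ins/checkpoints",  # hypothetical output directory
    monitor="valsegm_map_0",                # metric name as it appears in this repo's logs
    mode="max",
    save_top_k=3,
    save_last=True,
)
# The callback is then passed to the Trainer, e.g. Trainer(callbacks=[checkpoint_cb], ...).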

Has anyone encountered this KeyError: 'pytorch-lightning_version'? It seems to come from the provided checkpoint.

File "/remote_workspace/myprojects/RSPrompter-cky/tools/../mmpl/engine/runner/pl_runner.py", line 322, in run
return trainer_func(model=self.model, datamodule=self.datamodule, *args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 706, in test
return call._call_and_handle_interrupt(
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/call.py", line 42, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 92, in launch
return function(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 749, in _test_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/trainer.py", line 901, in _run
self._checkpoint_connector._restore_modules_and_callbacks(ckpt_path)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/connectors/checkpoint_connector.py", line 395, in _restore_modules_and_callbacks
self.resume_start(checkpoint_path)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/trainer/connectors/checkpoint_connector.py", line 83, in resume_start
self._loaded_checkpoint = _pl_migrate_checkpoint(loaded_checkpoint, checkpoint_path)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/utilities/migration/utils.py", line 113, in _pl_migrate_checkpoint
old_version = _get_version(checkpoint)
File "/opt/conda/lib/python3.8/site-packages/lightning/pytorch/utilities/migration/utils.py", line 136, in _get_version
return checkpoint["pytorch-lightning_version"]
KeyError: 'pytorch-lightning_version'
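
One possible workaround (an assumption drawn from the traceback above, not an official fix): inject the version key Lightning expects into the provided checkpoint before loading it, for example:

import torch
import lightning

ckpt_path = "path/to/provided_checkpoint.ckpt"  # hypothetical path to the downloaded checkpoint
ckpt = torch.load(ckpt_path, map_location="cpu")
# _pl_migrate_checkpoint looks up this key; add it if the checkpoint was saved without Lightning metadata.
ckpt.setdefault("pytorch-lightning_version", lightning.__version__)
torch.save(ckpt, ckpt_path)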

Query-based training does not converge

I have two questions about training with this code and would greatly appreciate an answer, thanks:
1) Why does the model not converge when trained with the WHU dataset and the query config?
[screenshot]

2) The WHU + anchor config does not reach the metrics reported in the paper. Do additional parameters need to be tuned?
[screenshot]

About the environment setup

Under Python 3.10, which version of pycocotools is compatible? The latest release seems to only support 3.8. After installing pycocotools, at runtime I can import pycocotools but not pycocotools._mask. Any advice would be appreciated.
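
(For context: a failing import of pycocotools._mask usually means the compiled C extension does not match the running interpreter. One common remedy, which is general pycocotools advice rather than anything specific to this repo, is to uninstall the package and reinstall it built from source for the current Python, e.g. pip install --no-binary pycocotools pycocotools.)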

Dataset label conversion

Hello, author. I noticed that the NWPU and SSDD datasets you used both ship with object detection labels, yet you train on them with JSON instance segmentation labels. How did you convert the detection labels into instance segmentation labels? Did you re-annotate the data, or generate the masks automatically with some model? Looking forward to your reply.
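
For context only, this is how a horizontal detection box can be written as a COCO-style rectangular segmentation polygon. It merely illustrates the annotation format; it is not the authors' actual conversion process, which is what the question asks about:

# A box [x, y, w, h] flattened into one clockwise polygon, as the COCO format expects.
def box_to_polygon(x, y, w, h):
    return [[x, y, x + w, y, x + w, y + h, x, y + h]]

annotation = dict(
    bbox=[10, 20, 50, 40],
    segmentation=box_to_polygon(10, 20, 50, 40),
    area=50 * 40,
    iscrowd=0,
)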

Question about the performance

Hi, I ran the code on the NWPU dataset, but it does not reach the performance reported in the paper. max_epoch is set to 1200, and the best performance is reached at epoch 654: 0.6180, whereas the paper reports 0.6689 (66.89). I ran the code on 8 GPUs with train_batch_size_per_gpu set to 3.


In addition, the offline test is inconsistent with the test results obtained during training. During training, the result on the val set is 0.6180, but after training ends, tools/test.py gives 0.54, which is much worse.

Can you give me some advice? Thank you very much!

about SAM-det and SAM-cls

Nice work. The results table does not mention the implementation details of SAM-det and SAM-cls; is there any reference?

SAM-cls

How is ResNet-18 (trained with labels) used for classification? I could not find the config file 'configs/ins_seg/samcls_{data_set_name}_config.py' in your project.

Where is the sam_vit_h_4b8939.pth file?

Thanks for your contribution. I was experimenting with python train.py and encountered the following error:
No such file or directory: 'pretrain/sam/sam_vit_h_4b8939.pth'
I could not find this file anywhere in the project files either. Where can I get it?
Good luck!
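
For reference, the ViT-H SAM checkpoint is distributed by the segment-anything project; a minimal download sketch, assuming the repo expects it under pretrain/sam/ as the error path above suggests:

import os
import urllib.request

url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
os.makedirs("pretrain/sam", exist_ok=True)
# Note: this is a large (multi-gigabyte) download.
urllib.request.urlretrieve(url, "pretrain/sam/sam_vit_h_4b8939.pth")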

Request for a demo

Hi, I am trying to use your work for droplet segmentation. Could you add a demo showing how to use it? I saw you mentioned:

"you need to configure the test_evaluator and test_loader in the configuration file, as well as the config and ckpt-path paths in tools/test.py, and then run"

If possible, could the demo be in Google Colab? For example, my folder is 'conten/drive/MyDrive/RSPrompter', my image set is saved as 'conten/drive/MyDrive/image.npz' or in the folder 'conten/drive/MyDrive/images', and the model is saved in the folder 'conten/drive/MyDrive/RSPrompter/models'. Could you provide more details about which files or lines I need to change?

What's the meaning of valsegm_map_0?

Hello,

What does valsegm_map_0 mean? It does not seem to be mAP@50:95, or is my understanding mistaken?
Looking at the log, its values and the ones output by the COCO API are not the same.

Error when running predict.py

vit-b and vit-h training both completed successfully. However, when running predictions, vit-b predicts normally while vit-h throws an error:

RuntimeError: Error(s) in loading state_dict for SegSAMAnchorPLer:
Missing key(s) in state_dict: "panoptic_head.neck.down_sample_layers.12.0.conv.weight", "panoptic_head.neck.down_sample_layers.12.0.bn.weight", "panoptic_head.neck.down_sample_layers.12.0.bn.bias", "panoptic_head.neck.down_sample_layers.12.0.bn.running_mean", "panoptic_head.neck.down_sample_layers.12.0.bn.running_var", "panoptic_head.neck.down_sample_layers.12.1.conv.weight", "panoptic_head.neck.down_sample_layers.12.1.bn.weight", "panoptic_head.neck.down_sample_layers.12.1.bn.bias", "panoptic_head.neck.down_sample_layers.12.1.bn.running_mean", "panoptic_head.neck.down_sample_layers.12.1.bn.running_var", "panoptic_head.neck.down_sample_layers.13.0.conv.weight", "panoptic_head.neck.down_sample_layers.13.0.bn.weight", "panoptic_head.neck.down_sample_layers.13.0.bn.bias", "panoptic_head.neck.down_sample_layers.13.0.bn.running_mean", "panoptic_head.neck.down_sample_layers.13.0.bn.running_var", "panoptic_head.neck.down_sample_layers.13.1.conv.weight", "panoptic_head.neck.down_sample_layers.13.1.bn.weight", "panoptic_head.neck.down_sample_layers.13.1.bn.bias", "panoptic_head.neck.down_sample_layers.13.1.bn.running_mean", "panoptic_head.neck.down_sample_layers.13.1.bn.running_var", "panoptic_head.neck.fusion_layers.12.conv.weight", "panoptic_head.neck.fusion_layers.12.bn.weight", "panoptic_head.neck.fusion_layers.12.bn.bias", "panoptic_head.neck.fusion_layers.12.bn.running_mean", "panoptic_head.neck.fusion_layers.12.bn.running_var", "panoptic_head.neck.fusion_layers.13.conv.weight", "panoptic_head.neck.fusion_layers.13.bn.weight", "panoptic_head.neck.fusion_layers.13.bn.bias", "panoptic_head.neck.fusion_layers.13.bn.running_mean", "panoptic_head.neck.fusion_layers.13.bn.running_var".

Is there something I missed when switching from vit-b to vit-h for predictions, with changes only in type, checkpoint, in_channels, and selected_channels, just like during training?
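
One way to read the error (an inference from the missing key indices, not a confirmed diagnosis): the checkpoint's neck contains down_sample_layers and fusion_layers up to index 13 (14 selected feature levels), while the model built from the predict config stops at index 11 (12 levels), so the selected_channels list in the predict config is likely shorter than the one used to train the ViT-H model. A quick length check along these lines (the values are hypothetical) would expose the mismatch:

# hypothetical selected_channels lists taken from the training and predict configs
train_selected_channels = list(range(4, 32, 2))  # 14 entries
pred_selected_channels = list(range(8, 32, 2))   # 12 entries
assert len(train_selected_channels) == len(pred_selected_channels), (
    "neck depth mismatch: the two configs select a different number of feature levels")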

predict.py error

TypeError: class SSDDInsSegDataset in mmpl/datasets/ssdd_ins_dataset.py: '<' not supported between instances of 'str' and 'int'

The result is not the same as in the paper

I used the pretrained weights you provided on Hugging Face and tested them on the NWPU val data, but I got a result different from the paper.
Can you tell me what I might be doing wrong? Thank you very much!

about Visualization

Hi, I ran python tools/predict.py and the program finished smoothly, but I could not find the visualization results. How do I obtain them? Thank you.

FileNotFoundError when running train.py

Hello! After running train.py I get an error saying the corresponding data source cannot be found (see the screenshots below). Should I fix this by directly modifying the data_prefix path in the config file? Also, why is the image being fetched named 2_211.tif when I cannot find any reference to this image source in the program files? data_parent is correct. Looking forward to your reply, thank you!
[screenshot]
[screenshot]

Unable to download datasets

I get "This site can’t be reached" when trying to download the three datasets using the links in the README file. Any alternative way to download the datasets? Thanks!

How to resume from a ckpt?

Thanks for your great work! Is there a way to resume training, like "--resume" in detectron2?
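
For reference, in plain PyTorch Lightning resuming is done by passing a checkpoint path to fit; a minimal sketch (an assumption about how this repo's pl_runner forwards arguments, with a hypothetical path):

import lightning.pytorch as pl

# model / datamodule: the LightningModule and LightningDataModule built from the config (not shown here).
trainer = pl.Trainer(max_epochs=1000, devices=1)
trainer.fit(model, datamodule=datamodule, ckpt_path="results/last.ckpt")
# Lightning restores the weights, optimizer state, and epoch counter from ckpt_path.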

Question about prompt gen

Very interesting work!!!

I have a question: when designing the network, why do you directly generate the 256-dimensional vector corresponding to the point prompt instead of predicting k coordinate points?

Training settings

Nice work, thanks for sharing. The max epochs and learning rate used in this repo differ from those stated in your arXiv paper. For instance, the total number of training epochs is 700/1000 (WHU) in the paper, while it is set to 5000 in "rsprompter_query_whu_config.py". The learning rate is also different (5e-4 vs. 2e-4).

Integrate this algorithm into MMDet?

Hello, this is very practical work. May I ask whether you have the time or plans to integrate this algorithm into mmdet? We can provide the necessary assistance. Thank you.

Experimental setup

Dear author,
Thank you very much for your great contribution to remote sensing image segmentation!
Could you share the hardware configuration used in your experiments, and roughly how long training takes?
Looking forward to your answer. Thank you!

Questions about prompt generation in the paper

The paper describes prompts generated from anchors and from queries.
1. The anchor-based approach is similar to SAM-det: prompts are generated from the box coordinates and classes produced by a supervised detector. For objects that are hard to segment, providing only a box may not lead to a good segmentation.
2. I do not fully understand the query-based prompt generation. Is an MLP prompter used to generate prompt points? In theory, the more point coordinates are generated, the more accurate the result, but the generated prompt points cannot be supervised with labels directly, so the masks ultimately produced from the prompt points are trained against gt_label. Is this understanding correct? It seems the prompt points cannot be trained with a regression loss, which would make convergence difficult.
The second method seems more accurate to me; when I use SAM, as long as there are enough prompt points, I can usually get a good segmentation.

How to add mIoU as an evaluation metric?

First of all, thank you for your nice work.

I have two questions that I would like to ask you for advice.

  1. I want to try SAM-seg on the Potsdam dataset (a semantic segmentation dataset). What are the general implementation steps?
  2. I would like to add mIoU and F1-score statistics to the evaluation. How should I modify the config file (taking samseg_mask2former_whu_config.py as an example) based on mmseg's IoUMetric? (See the sketch after this list.)
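
A sketch of what swapping in mmseg's metric might look like (an assumption; it requires mmseg to be installed and its metrics registered in this project, which the repo may not support out of the box):

# in a config such as samseg_mask2former_whu_config.py (illustrative only)
val_evaluator = dict(
    type='mmseg.IoUMetric',
    iou_metrics=['mIoU', 'mFscore'],  # mFscore adds per-class precision/recall/F-score
)
test_evaluator = val_evaluator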

On other datasets

Hi, I want to train the model on other datasets, but the code keeps getting stuck while running.

At the beginning the program runs normally, but after a while it stops (see the screenshots below).
[screenshot]
One graphics card shows zero GPU utilization. I guess the bottleneck is image loading, because the images in my dataset (about 15000 KB each) are much larger than those in the paper's datasets, such as SSDD (under 100 KB).
[screenshot]

After a while, CUDA runs out of memory and the program ends, but the video memory on some of the graphics cards is not released.

Can you give me some advice? Thank you very much.

How to train well on a single GPU?

I tried training the model from scratch on the NWPU dataset. I have just one GPU at this moment. So I modified rsprompter_query_nwpu_config.py: I set devices=1 and enabled accumulate_grad_batches=24 to simulate a batch size of 24. After 100 epochs, I got the following intermediate result:

Evaluate annotation type *bbox*
DONE (t=0.04s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.100
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.191
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.105
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.140
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.042
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.041
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.141
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.308
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.339
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.183
07/10 13:19:52 - mmengine - INFO - Evaluating segm...
segm_mAP_copypaste: 0.117 0.244 0.111 -1.000 0.145 0.069
Loading and preparing results...
DONE (t=0.13s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=0.04s).
Accumulating evaluation results...
DONE (t=0.01s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.117
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.244
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.111
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.145
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.069
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.050
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.185
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.299
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.319
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.217

Any tips on training with a single GPU to achieve the numbers reported in the paper, e.g. AP_box@50 = 77.53 and AP_mask@50 = 85.77 on the NWPU dataset? Before training for longer, I wanted to seek your advice. Thank you!
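
For reference, the override described above would look roughly like this in the config (key names follow PyTorch Lightning's Trainer arguments; whether accumulate_grad_batches=24 really matches the 8-GPU setting depends on the per-GPU batch size, so treat the values as illustrative):

trainer_cfg = dict(
    devices=1,                   # single GPU
    accumulate_grad_batches=24,  # accumulate gradients to emulate a larger effective batch
)
# effective batch size = devices * train_batch_size_per_gpu * accumulate_grad_batches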

size mismatch for panoptic_head.roi_head.mask_head.point_emb.8.weight:

Hi! Great work. I have trouble loading the state_dict for the model; details follow:

Traceback (most recent call last):
File "/code/prompt/RSPrompter-cky/./tools/predict.py", line 49, in
main()
File "/code/prompt/RSPrompter-cky/./tools/predict.py", line 45, in main
runner.run(args.status, ckpt_path=args.ckpt_path)
File "/code/prompt/RSPrompter-cky/tools/../mmpl/engine/runner/pl_runner.py", line 323, in run
return trainer_func(model=self.model, datamodule=self.datamodule, *args, **kwargs)
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 805, in predict
return call._call_and_handle_interrupt(
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 847, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 901, in _run
self._checkpoint_connector._restore_modules_and_callbacks(ckpt_path)
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/checkpoint_connector.py", line 396, in _restore_modules_and_callbacks
self.restore_model()
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/checkpoint_connector.py", line 278, in restore_model
trainer.strategy.load_model_state_dict(self._loaded_checkpoint)
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 351, in load_model_state_dict
self.lightning_module.load_state_dict(checkpoint["state_dict"])
File "/opt/conda/envs/RSPrompter/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SegSAMAnchorPLer:
size mismatch for panoptic_head.roi_head.mask_head.point_emb.8.weight: copying a param with shape torch.Size([2560, 256]) from checkpoint, the shape in current model is torch.Size([2048, 256]).
size mismatch for panoptic_head.roi_head.mask_head.point_emb.8.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([2048]).


The config file is predict_rsprompter_anchor_nwpu.py, and the mask_head config is:
mask_head=dict(
    type='SAMPromptMaskHead',
    per_query_point=prompt_shape[1],
    with_sincos=True,
    class_agnostic=True,
    loss_mask=dict(
        type='mmdet.CrossEntropyLoss', use_mask=True, loss_weight=1.0))),

prompt_shape = [60, 4], and the .pth file is NWPU_anchor.pth from Hugging Face. How can I fix this?
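
One way to read the shapes (an inference from the error message, not a confirmed diagnosis): with with_sincos=True, the output width of the last point_emb layer scales as 256 * per_query_point * 2, so the checkpoint and the current config appear to disagree on prompt_shape[1]:

embed_dim = 256
sincos_factor = 2                     # doubling introduced by with_sincos=True

print(embed_dim * 4 * sincos_factor)  # 2048 -> current model, prompt_shape = [60, 4]
print(embed_dim * 5 * sincos_factor)  # 2560 -> NWPU_anchor.pth, i.e. prompt_shape[1] = 5
# If this reading is right, setting prompt_shape = [60, 5] in the predict config should match the checkpoint.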

Thanks for your wonderful work. Question about PE in Anchor-based Prompter

Hello, thanks for your wonderful work.
I noticed that you mentioned "Since the Φroi-p operations may cause the subsequent prompt generation to lose positional information relative to the entire image, we incorporate positional encoding (PE) into the original fused features (Fagg)".
Would you mind pointing out which part of the code implements this?

Hello. Thanks for your wonderful work. Question about detection model

Hello. Thanks for your wonderful work.
I noticed that you trained an additional object detection model in SAM-det. Have you ever considered attaching a detection head after SAM's image encoder? That way, (1) the information from SAM's image encoder could be reused instead of training an image encoder from scratch, and (2) some computational resources would be saved.

Best regards,
Bizhe

TypeError: 'list' object cannot be interpreted as an integer when running predict.py

return trainer_func(model=self.model, datamodule=self.datamodule, *args, **kwargs)
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 805, in predict
return call._call_and_handle_interrupt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 847, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 918, in _run
_log_hyperparams(self)
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/loggers/utilities.py", line 96, in log_hyperparams
logger.save()
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning_utilities/core/rank_zero.py", line 27, in wrapped_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/loggers/tensorboard.py", line 217, in save
save_hparams_to_yaml(hparams_file, self.hparams)
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning/pytorch/core/saving.py", line 302, in save_hparams_to_yaml
hparams = apply_to_collection(hparams, DictConfig, OmegaConf.to_container, resolve=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning_utilities/core/apply_func.py", line 59, in apply_to_collection
v = apply_to_collection(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning_utilities/core/apply_func.py", line 59, in apply_to_collection
v = apply_to_collection(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning_utilities/core/apply_func.py", line 59, in apply_to_collection
v = apply_to_collection(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/anaconda3/envs/torch-2.0-py311/lib/python3.11/site-packages/lightning_utilities/core/apply_func.py", line 92, in apply_to_collection
return elem_type(*out) if is_namedtuple
else elem_type(out)
^^^^^^^^^^^^^^
TypeError: 'list' object cannot be interpreted as an integer

Could you provide your ckpt file?

Thank you very much for your project. I encountered an error when running test.py and learned that it is because I need to use a ckpt file that I trained myself. Could you provide your ckpt file?

When will it be available in mmdetection?

Hi Keyan!
I was very inspired by your work in the last MMLab talk, where I heard that it will soon be integrated into mmdetection with support for COCO training! May I ask roughly when it will be available in mmdetection?
Thanks again for your excellent work and selfless open source!
Best wishes!

Verify the result visualization

Hello, I would like to ask the following two questions:
(1) How can I visualize the results on the validation set after running python tools/predict.py? Where are the results saved? Does the code have an inference part? This is the output I get when I run it:
Loaded model weights from the checkpoint at /home/zhaopo/SAM/SAM_SSDD/RSPrompter-cky/results/ssdd_ins/7.7/checkpoints/epoch_epoch=9-map_valsegm_map_0=0.6500.ckpt
Predicting DataLoader 0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 232/232 [01:18<00:00, 2.96it/s]

(2) Are these metrics normal after training for 10 epochs?
wandb: valbbox_map_0 0.55
wandb: valbbox_map_50_0 1.0
wandb: valbbox_map_75_0 0.505
wandb: valbbox_map_l_0 -1.0
wandb: valbbox_map_m_0 0.55
wandb: valbbox_map_s_0 -1.0
wandb: valsegm_map_0 0.65
wandb: valsegm_map_50_0 1.0
wandb: valsegm_map_75_0 1.0
wandb: valsegm_map_l_0 -1.0
wandb: valsegm_map_m_0 0.65
wandb: valsegm_map_s_0 -1.0
