bcv-uniandes / dms
Dynamic Multimodal Instance Segmentation Guided by Natural Language Queries, ECCV 2018
Home Page: https://biomedicalcomputervision.uniandes.edu.co
License: MIT License
The download links to all the pretrained weights seem to be down, for example this one.
Would it be possible to upload the pretrained weights to another platform?
Traceback (most recent call last):
  File "E:/DMS-master/dmn_pytorch/train.py", line 179, in <module>
    max_query_len=args.time)
  File "E:\DMS-master\dmn_pytorch\referit_loader.py", line 91, in __init__
    self.corpus = torch.load(corpus_path)
  File "D:\Anaconda\lib\site-packages\torch\serialization.py", line 525, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "D:\Anaconda\lib\site-packages\torch\serialization.py", line 212, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "D:\Anaconda\lib\site-packages\torch\serialization.py", line 193, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'data\unc\corpus.pth'
Traceback (most recent call last):
  File "refer.py", line 39, in <module>
    from external import mask
  File "/home/wam/refer-master/external/mask.py", line 3, in <module>
    import external._mask as _mask
ImportError: No module named _mask
Could you give me some advice to solve the error?
What is the main question?
Identify possible points of failure of the original model.
I have tried to reproduce the code. It ran smoothly, but the results were far below those reported in the paper: using the weights you provide, the maximum mIoU was 0.11. When I visualize the predictions, they are totally messy. I was wondering what the reason might be.
I followed these commands from the README to install refer:
sudo pip install git+https://github.com/andfoy/refer.git
sudo pip install -U git+https://github.com/taolei87/sru.git@43c85ed --no-deps
But I still get:
Traceback (most recent call last):
  File "/home/lanxiao/DMS-master/dmn_pytorch/train.py", line 23, in <module>
    from dmn_pytorch.models import DMN
  File "/home/lanxiao/DMS-master/dmn_pytorch/__init__.py", line 12, in <module>
    from .referit_loader import ReferDataset
  File "/home/lanxiao/DMS-master/dmn_pytorch/referit_loader.py", line 21, in <module>
    from referit import REFER
ImportError: cannot import name 'REFER'
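A common cause of `cannot import name 'REFER'` is a local directory named `referit` shadowing the pip-installed package. A small diagnostic sketch (the helper name `diagnose_import` is hypothetical) that reports which module Python actually picked up:

```python
import importlib


def diagnose_import(module_name, attr):
    """Report where a module was loaded from and whether it exposes attr.

    If the returned path points inside your working directory rather
    than site-packages, a local folder is shadowing the installed
    package, which would explain the failed import.
    """
    mod = importlib.import_module(module_name)
    return getattr(mod, '__file__', '<built-in>'), hasattr(mod, attr)
```

Running `diagnose_import('referit', 'REFER')` from the project root would show whether the installed `refer` package, or a stray local folder, is being imported.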
Hi,
I am attempting to use the DMN model with the provided pre-trained weights. However, whenever I evaluate the model on the UNC dataset with the command below, using the pre-trained weights downloaded from the link in the README, I get a very low mIoU (close to 0.11).
Pre-trained weights (for the UNC dataset)
Command used for evaluating the model
python -u -m dmn_pytorch.train --data referit_data --dataset unc --val testA --backend dpn92 --num-filters 10 --lang-layers 3 --mix-we --snapshot ./snapshots/dmn_unc_weights.pth --epochs 0 --eval-first --high-res
Result of evaluation (using the above command)
Evaluation done. Elapsed time: 343.907 (s)
Maximum IoU: 0.1159133240581 - Threshold: 0.0000000000000
I have also tried other datasets, but I am unable to get anywhere close to the performance numbers mentioned in the README using the pre-trained weights. I am not sure what the issue is. Below are the versions of the packages/libraries I am using.
Dependency versions
Python 3.7
PyTorch 1.0
SRU 2.1.2 (latest version)
CUDA 9.0
Please note that I am getting low numbers for all datasets and have only provided the UNC dataset (and its numbers) as an example.
Hi,
I just read your paper and very much appreciate your well-organized code, but I cannot reproduce the results using the pre-trained model.
I am not quite sure where the problem is. Did I use the wrong command?
python -u -m dmn_pytorch.train \
    --data /home/xxx/dms_data \
    --dataset unc \
    --val testA \
    --backend dpn92 \
    --num-filters 10 \
    --lang-layers 3 \
    --mix-we \
    --save-folder ./checkpoints \
    --snapshot /home/xxx/DMS/checkpoints/dmn_unc_weights.pth \
    --epochs 0 \
    --eval-first
How much impact will adjusting the input size from 512 to 384 or 256 with --size have on the mIoU? I got a much lower result than yours: the mIoU on the ReferIt val set dropped to 0.47 and on UNC to 0.36 when I used --size. I am wondering whether this is a reasonable result.
I am thinking of using a bounding box rather than a mask to locate the object with language guidance. How could I use your code to train and test that? Are there any suggestions or instructions?
I appreciate your efforts in replying.
Hi, I tried to run this code but get the following error. Where can I download this missing 'data/unc/corpus.pth' file?
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/media/wangxiao/b8efbc67-7ea5-476d-9631-70da75f84e2d/reference_code/DMS/dmn_pytorch/train.py", line 177, in <module>
    max_query_len=args.time)
  File "/media/wangxiao/b8efbc67-7ea5-476d-9631-70da75f84e2d/reference_code/DMS/dmn_pytorch/referit_loader.py", line 89, in __init__
    self.corpus = torch.load(corpus_path)
  File "/usr/local/lib/python3.6/site-packages/torch/serialization.py", line 356, in load
    f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'data/unc/corpus.pth'
Can you guide me on testing on my own dataset? How does the data need to be created, and what should I keep in mind?
Hi, I notice that the network is set to train mode in the evaluate function. Is that a special setting? Does this line affect the results of evaluation?
Line 342 in 9fa3a3a
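The question above is a real concern: in PyTorch, `train()` and `eval()` change the behavior of layers such as dropout and batch normalization, so evaluating a network left in train mode can produce noisy, run-to-run-varying outputs. A minimal pure-Python sketch of the mechanism (the `TinyDropout` class is illustrative, not the repo's code):

```python
import random


class TinyDropout:
    """Toy dropout layer showing why train mode changes evaluation.

    In train mode each activation is zeroed with probability p and the
    survivors are scaled by 1/(1-p); in eval mode the layer is the
    identity. A network evaluated in train mode therefore gives
    stochastic outputs, unless the authors set train mode deliberately
    (e.g. to keep BatchNorm using batch statistics).
    """

    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # mirrors torch.nn.Module's default

    def eval(self):
        self.training = False

    def __call__(self, xs):
        if not self.training:
            return list(xs)  # identity in eval mode
        # zero each activation with probability p, rescale the survivors
        return [0.0 if random.random() < self.p else x / (1 - self.p)
                for x in xs]
```

In eval mode the output equals the input; in train mode each element is either zeroed or rescaled at random, which is exactly the kind of nondeterminism one would not want in an evaluation loop.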
Sorry, I cannot obtain your data. There is an error.
ModuleNotFoundError: No module named 'referit.external._mask'
Hi, I'd like to fine-tune model weights on my own dataset starting from your pretrained weights (e.g. dmn_referit_weights.pth), but I don't know how to do that because I have no optimizer weights with which to resume training. Any instructions or suggestions would be appreciated.
When I train the project on GPU 0 (8 GB) I get:
RuntimeError: CUDA out of memory. Tried to allocate 9.50 MiB (GPU 0; 7.93 GiB total capacity; 6.49 GiB already allocated; 14.81 MiB free; 30.49 MiB cached)
When I train the project on GPU 0 (12 GB) I get:
RuntimeError: CUDA out of memory. Tried to allocate 16.88 MiB (GPU 0; 11.90 GiB total capacity; 10.58 GiB already allocated; 18.44 MiB free; 62.29 MiB cached)
When I train the project on GPU 0 (12 GB) and GPU 1 (12 GB), I add:
if args.cuda:
    net = nn.DataParallel(net, device_ids=[0, 1])
    net.cuda()
and I get:
RuntimeError: CUDA out of memory. Tried to allocate 16.88 MiB (GPU 0; 11.90 GiB total capacity; 10.58 GiB already allocated; 18.44 MiB free; 62.29 MiB cached)
What can I do to solve this problem? Thanks.
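Two mitigations usually matter for errors like the ones above: running inference under `torch.no_grad()` (so activations are not cached for backprop) and moving results off the GPU promptly. Note that `DataParallel` splits the batch across GPUs, so it does not help when a single sample's activations already exceed one GPU; reducing `--size` is the more direct fix. A minimal sketch of a memory-conscious evaluation loop (not the repo's own loop; the function name and loop shape are illustrative):

```python
import torch


def evaluate_low_memory(net, loader, device='cuda'):
    """Evaluation loop that avoids the most common CUDA OOM causes.

    torch.no_grad() disables activation caching for backprop (the main
    memory cost of a forward pass), and predictions are moved to the
    CPU immediately so they do not accumulate on the GPU.
    """
    net.eval()
    outputs = []
    with torch.no_grad():
        for batch in loader:
            pred = net(batch.to(device))
            outputs.append(pred.cpu())  # free GPU memory each step
    return outputs
```

For training itself, where gradients are required, lowering the input resolution (`--size`) or batch size is normally the only real remedy once a single GPU is saturated.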
According to the link in download_data.sh, I can't download referit_splits.tar.bz2. Can you offer it to me? Thanks a lot. I hope you can link to a network disk or send it to my email. Thank you again. Email: [email protected]