seg_every_thing's People

Contributors

agrimgupta92, ashwinb, gadcam, ir413, katotetsuro, newstzpz, rbgirshick, ronghanghu, shenyunhang


seg_every_thing's Issues

How to run inference on bbox to mask model?

Hi,

Which config file corresponds to the bounding-box-to-mask model models/bbox2mask_coco/28594643_model_final.pkl?

I tried the following command:

cd /path/to/clone/seg_every_thing
python2 tools/infer_simple.py --cfg configs/bbox2mask_coco/fast_rcnn_R-50-FPN_1x.yaml --output-dir /tmp/output --wts lib/datasets/data/trained_models/28594643_model_final.pkl  demo

But I got the following error message:

Traceback (most recent call last):
  File "tools/infer_simple.py", line 162, in <module>
    main(args)
  File "tools/infer_simple.py", line 132, in main
    model, im, None, timers=timers
  File "/path/to/clone/detectron/lib/core/test.py", line 66, in im_detect_all
    model, im, cfg.TEST.SCALE, cfg.TEST.MAX_SIZE, boxes=box_proposals
  File "/path/to/clone/detectron/lib/core/test.py", line 145, in im_detect_bbox
    hashes = np.round(inputs['rois'] * cfg.DEDUP_BOXES).dot(v)
KeyError: u'rois'

System information

  • Operating system: Linux Ubuntu 16.04.2
  • Compiler version: 5.4.0
  • CUDA version: 9
  • cuDNN version: 7
  • NVIDIA driver version: 390.30
  • GPU model : TITAN X (Pascal) 11170 MB memory
  • PYTHONPATH environment variable: /usr/local/caffe2_build::/path/to/clone/detectron/lib
  • python --version output: Python 2.7.12
  • Anything else that seems relevant: installed via the Docker image following these instructions, plus git checkout 3c4c7f6, a rebuild, and adding detectron/lib to PYTHONPATH.

Thank you!
Adva
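
One plausible cause of the KeyError above (a guess, not a maintainer answer): configs/bbox2mask_coco/fast_rcnn_R-50-FPN_1x.yaml describes a Fast R-CNN style model without an RPN, and tools/infer_simple.py passes box_proposals=None, so im_detect_bbox never receives a 'rois' input. A small diagnostic sketch, assuming this repo's lib is on PYTHONPATH, to check whether a config can run from raw images:

# Sketch: check whether a config expects an RPN or precomputed proposals.
from core.config import cfg, merge_cfg_from_file

merge_cfg_from_file('configs/bbox2mask_coco/fast_rcnn_R-50-FPN_1x.yaml')
print('MODEL.FASTER_RCNN (has its own RPN):', cfg.MODEL.FASTER_RCNN)
print('TEST.PROPOSAL_FILES:', cfg.TEST.PROPOSAL_FILES)
# If FASTER_RCNN is False, infer_simple.py cannot feed this model directly:
# it supplies no proposals, which matches the missing 'rois' blob above.

If the flag prints False, the model presumably expects proposal-based evaluation (for example via tools/test_net.py with TEST.PROPOSAL_FILES) rather than infer_simple.py's raw-image path.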

COCO-aligned annotations for VisualGenome

Dear authors,

I am trying to download the COCO-aligned annotations for Visual Genome by running lib/datasets/data/vg3k_bbox2mask/fetch_vg3k_json.sh, but I'm getting a 404 error.

Could you please re-upload the annotation files or provide some other reference?

Can anyone help me to provide the VG dataset?

The download links for the VG dataset have expired. Can anyone provide the VG dataset? The fetch script in question is:
#!/bin/bash
wget -O ./lib/datasets/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_train.json \
  https://people.eecs.berkeley.edu/~ronghang/projects/seg_every_thing/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_train.json
wget -O ./lib/datasets/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_val.json \
  https://people.eecs.berkeley.edu/~ronghang/projects/seg_every_thing/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_val.json
wget -O ./lib/datasets/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_test.json \
  https://people.eecs.berkeley.edu/~ronghang/projects/seg_every_thing/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_test.json
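
If you do obtain the files from another source, a quick sanity check that a downloaded annotation file parses as COCO-style JSON (a sketch; the path matches the fetch script above, and the field names follow the standard COCO annotation format):

# Sketch: verify a downloaded vg3k annotation file parses and has the usual
# COCO-format sections.
import json

path = 'lib/datasets/data/vg3k_bbox2mask/instances_vg3k_cocoaligned_train.json'
with open(path) as f:
    data = json.load(f)

for key in ('images', 'annotations', 'categories'):
    print(key, len(data.get(key, [])))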

Mask attributes

Hi,
How can I extract mask attributes (such as color) along with the class label (e.g. a brown dog)?
Thanks

Does not run infer_simple.py

Expected results

What did you expect to see?
I expected some output from the first run of the demo.

Actual results

What did you observe instead?

Found Detectron ops lib: /usr/local/caffe2_build/lib/libcaffe2_detectron_ops_gpu.so
E0607 14:02:35.360527   260 init_intrinsics_check.cc:54] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0607 14:02:35.360548   260 init_intrinsics_check.cc:54] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0607 14:02:35.360565   260 init_intrinsics_check.cc:54] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
/detectron/lib/core/config.py:1090: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  yaml_cfg = AttrDict(yaml.load(f))
INFO io.py:  67: Downloading remote file https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl to /tmp/detectron-download-cache/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl
Traceback (most recent call last):
  File "tools/infer_simple.py", line 148, in <module>
    main(args)
  File "tools/infer_simple.py", line 98, in main
    args.weights = cache_url(args.weights, cfg.DOWNLOAD_CACHE)
  File "/detectron/lib/utils/io.py", line 68, in cache_url
    download_url(url, cache_file_path)
  File "/detectron/lib/utils/io.py", line 114, in download_url
    response = urllib2.urlopen(url)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 301: Moved Permanently

Detailed steps to reproduce

python2 tools/infer_simple.py \
>     --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
>     --output-dir /tmp/detectron-visualizations \
>     --image-ext jpg \
>     --wts https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl \
>     demo

My setup is based on the Docker image you provide, taking into account that I had to modify it to remove an error. Everything is the same except:

# Clone the Detectron repository
RUN git clone https://github.com/facebookresearch/detectron /detectron \
 && cd /detectron \
 && git reset --hard 3c4c7f6

System information

  • Operating system: Linux
  • Compiler version: ?
  • CUDA version: 9.0
  • cuDNN version: ?
  • NVIDIA driver version: ?
  • GPU models (for all devices if they are not all the same): ?
  • PYTHONPATH environment variable: ?
  • python --version output: 2.7.12
  • Anything else that seems relevant: ?
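
The HTTP 301 above suggests the old s3-us-west-2.amazonaws.com/detectron host now redirects; a later issue on this page swaps that base URL for https://dl.fbaipublicfiles.com/detectron. A hedged workaround (assuming the new host mirrors the old path layout) is to download the weights manually and point --wts at the local file:

# Sketch: fetch the weights from the relocated host and save them locally.
import shutil
import urllib2  # Python 2, matching the environment in this issue

OLD = ('https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/'
       'e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/'
       'coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl')
NEW = OLD.replace('https://s3-us-west-2.amazonaws.com/detectron',
                  'https://dl.fbaipublicfiles.com/detectron')

dst = '/tmp/e2e_mask_rcnn_R-101-FPN_2x_model_final.pkl'
resp = urllib2.urlopen(NEW)
with open(dst, 'wb') as f:
    shutil.copyfileobj(resp, f)

Then run infer_simple.py with --wts /tmp/e2e_mask_rcnn_R-101-FPN_2x_model_final.pkl instead of the S3 URL.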

KeyError: u'Non-existent config key: MRCNN.BBOX2MASK'

I was trying to test the inference section. My machine runs Ubuntu 16.04 with a 1080 Ti (CUDA 9.0 + cuDNN 7.1). The installation of Caffe2 and Detectron both seemed to work (no errors in the tests, except some warnings). Then I ran

python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw/runtest_clsbox_2_layer_mlp_nograd.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33241332_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

It aborted with the error KeyError: u'Non-existent config key: MRCNN.BBOX2MASK'. The full log follows.

/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/lil.py:19: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import _csparsetools
/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:165: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._shortest_path import shortest_path, floyd_warshall, dijkstra,\
/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/_validation.py:5: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._tools import csgraph_to_dense, csgraph_from_dense,\
/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:167: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._traversal import breadth_first_order, depth_first_order, \
/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:169: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._min_spanning_tree import minimum_spanning_tree
/home/meng/.local/lib/python2.7/site-packages/scipy/sparse/csgraph/__init__.py:170: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._reordering import reverse_cuthill_mckee, maximum_bipartite_matching, \
Found Detectron ops lib: /usr/local/lib/libcaffe2_detectron_ops_gpu.so
E0802 21:35:37.893579 20333 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0802 21:35:37.893594 20333 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0802 21:35:37.893596 20333 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Traceback (most recent call last):
  File "tools/infer_simple.py", line 162, in <module>
    main(args)
  File "tools/infer_simple.py", line 108, in main
    merge_cfg_from_file(args.cfg)
  File "/home/meng/lab-mill/Detectron/lib/core/config.py", line 1091, in merge_cfg_from_file
    _merge_a_into_b(yaml_cfg, __C)
  File "/home/meng/lab-mill/Detectron/lib/core/config.py", line 1149, in _merge_a_into_b
    _merge_a_into_b(v, b[k], stack=stack_push)
  File "/home/meng/lab-mill/Detectron/lib/core/config.py", line 1139, in _merge_a_into_b
    raise KeyError('Non-existent config key: {}'.format(full_key))
KeyError: u'Non-existent config key: MRCNN.BBOX2MASK'

I looked at similar errors online, which were attributed to YAML indentation mismatches, but that does not seem to be the case here. Does anyone know how to solve this, and do the warnings matter? Thanks.
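
A likely cause, going by the config.py path in the traceback (an upstream Detectron checkout under /home/meng/lab-mill/Detectron): the MRCNN.BBOX2MASK key appears to be defined only in the config module bundled with this repo's lib, so the upstream Detectron config rejects the YAML. A small check of which config module Python actually resolves (a sketch):

# Sketch: confirm which Detectron config module is on the path. If this prints
# a file inside an upstream Detectron checkout rather than seg_every_thing/lib,
# reorder PYTHONPATH so seg_every_thing/lib comes first.
import core.config as detectron_config

print(detectron_config.__file__)
print('MRCNN.BBOX2MASK defined:', 'BBOX2MASK' in detectron_config.cfg.MRCNN)

The scipy numpy.dtype warnings and the CPU intrinsics messages are almost certainly unrelated to this error.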

Dockerfile error:

I am getting the following issue when trying to run this command:
nvidia-docker run --rm -it 5de59a4a77bd python2 tools/train_net.py --multi-gpu-testing --cfg configs/getting_started/tutorial_2gpu_e2e_faster_rcnn_R-50-FPN.yaml OUTPUT_DIR /tmp/detectron-output

Error:
satish@vader:~/Revision/DETECTRON/seg_every_thing$ nvidia-docker run --rm -it 5de59a4a77bd python2 tools/train_net.py --multi-gpu-testing --cfg configs/getting_started/tutorial_2gpu_e2e_faster_rcnn_R-50-FPN.yaml OUTPUT_DIR /tmp/detectron-output
Found Detectron ops lib: /usr/local/caffe2_build/lib/libcaffe2_detectron_ops_gpu.so
E0929 00:50:01.726465 1 init_intrinsics_check.cc:54] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0929 00:50:01.726497 1 init_intrinsics_check.cc:54] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0929 00:50:01.726505 1 init_intrinsics_check.cc:54] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO train_net.py: 95: Called with args:
INFO train_net.py: 96: Namespace(cfg_file='configs/getting_started/tutorial_2gpu_e2e_faster_rcnn_R-50-FPN.yaml', multi_gpu_testing=True, opts=['OUTPUT_DIR', '/tmp/detectron-output'], skip_test=False)
/detectron/lib/core/config.py:1090: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = AttrDict(yaml.load(f))
INFO io.py: 67: Downloading remote file https://s3-us-west-2.amazonaws.com/detectron/ImageNetPretrained/MSRA/R-50.pkl to /tmp/detectron-download-cache/ImageNetPretrained/MSRA/R-50.pkl
Traceback (most recent call last):
  File "tools/train_net.py", line 128, in <module>
    main()
  File "tools/train_net.py", line 101, in main
    assert_and_infer_cfg()
  File "/detectron/lib/core/config.py", line 1054, in assert_and_infer_cfg
    cache_cfg_urls()
  File "/detectron/lib/core/config.py", line 1063, in cache_cfg_urls
    __C.TRAIN.WEIGHTS = cache_url(__C.TRAIN.WEIGHTS, __C.DOWNLOAD_CACHE)
  File "/detectron/lib/utils/io.py", line 68, in cache_url
    download_url(url, cache_file_path)
  File "/detectron/lib/utils/io.py", line 114, in download_url
    response = urllib2.urlopen(url)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 301: Moved Permanently

I even changed the URL in lib/utils/io.py
from: https://s3-us-west-2.amazonaws.com/detectron/ImageNetPretrained/MSRA/R-50.pkl
to: https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/MSRA/R-50.pkl

But Detectron is still somehow pulling the cached URL.

Then, to fix this, I pulled the latest Detectron GitHub repo with the updated URL. But after that, as mentioned in the other issue,
"RUN make ops" fails in the Dockerfile with the error below.
Can you please suggest a fix or a workaround?

Step 13/13 : RUN make ops
---> Running in cfe37fc1b57b
mkdir -p build && cd build && cmake .. && make -j28
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Cannot find gflags with config files. Using legacy find.
CMake Warning at /usr/local/caffe2_build/share/cmake/Caffe2/public/gflags.cmake:2 (find_package):
By not providing "Findgflags.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "gflags", but
CMake did not find one.

Could not find a package configuration file provided by "gflags" with any
of the following names:

gflagsConfig.cmake
gflags-config.cmake

Add the installation prefix of "gflags" to CMAKE_PREFIX_PATH or set
"gflags_DIR" to a directory containing one of the above files. If "gflags"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
/usr/local/caffe2_build/share/cmake/Caffe2/Caffe2Config.cmake:16 (include)
CMakeLists.txt:8 (find_package)

-- Found gflags: /usr/include
-- Found gflags (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Cannot find glog. Using legacy find.
CMake Warning at /usr/local/caffe2_build/share/cmake/Caffe2/public/glog.cmake:2 (find_package):
By not providing "Findglog.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "glog", but
CMake did not find one.

Could not find a package configuration file provided by "glog" with any of
the following names:

glogConfig.cmake
glog-config.cmake

Add the installation prefix of "glog" to CMAKE_PREFIX_PATH or set
"glog_DIR" to a directory containing one of the above files. If "glog"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
/usr/local/caffe2_build/share/cmake/Caffe2/Caffe2Config.cmake:30 (include)
CMakeLists.txt:8 (find_package)

-- Found glog: /usr/include
CMake Warning at CMakeLists.txt:13 (message):
You are using an older version of Caffe2 (version 0.8.1). Please consider
moving to a newer version.

-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- CUDA detected: 9.0
-- Added CUDA NVCC flags for: sm_30 sm_35 sm_50 sm_52 sm_60 sm_61 sm_70
-- Found libcuda: /usr/local/cuda/lib64/stubs/libcuda.so
-- Found libnvrtc: /usr/local/cuda/lib64/libnvrtc.so
-- Found CUDNN: /usr/include
-- Found cuDNN: v7.0.5 (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
-- Summary:
-- CMake version : 3.5.1
-- CMake command : /usr/bin/cmake
-- System name : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.0
-- CXX flags : -std=c++11 -O2 -fPIC -Wno-narrowing
-- Caffe2 version : 0.8.1
-- Caffe2 include path : /usr/local/caffe2_build/include
-- Have CUDA : TRUE
-- CUDA version : 9.0
-- CuDNN version : 7.0.5
-- Configuring done
-- Generating done
-- Build files have been written to: /detectron/build
make[1]: Entering directory '/detectron/build'
make[2]: Entering directory '/detectron/build'
make[3]: Entering directory '/detectron/build'
make[3]: Entering directory '/detectron/build'
[ 20%] Building NVCC (Device) object CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/detectron/ops/caffe2_detectron_custom_ops_gpu_generated_zero_even_op.cu.o
Scanning dependencies of target caffe2_detectron_custom_ops
make[3]: Leaving directory '/detectron/build'
make[3]: Entering directory '/detectron/build'
[ 40%] Building CXX object CMakeFiles/caffe2_detectron_custom_ops.dir/detectron/ops/zero_even_op.cc.o
In file included from /usr/local/caffe2_build/include/caffe2/core/allocator.h:22:0,
from /usr/local/caffe2_build/include/caffe2/core/context.h:25,
from /detectron/detectron/ops/zero_even_op.h:20,
from /detectron/detectron/ops/zero_even_op.cc:17:
/detectron/detectron/ops/zero_even_op.cc: In member function 'bool caffe2::ZeroEvenOp<T, Context>::RunOnDevice() [with T = float; Context = caffe2::CPUContext]':
/detectron/detectron/ops/zero_even_op.cc:25:23: error: no matching function for call to 'caffe2::Tensorcaffe2::CPUContext::dim() const'
CAFFE_ENFORCE(X.dim() == 1);
^
In file included from /usr/local/caffe2_build/include/caffe2/core/net.h:34:0,
from /usr/local/caffe2_build/include/caffe2/core/operator.h:29,
from /detectron/detectron/ops/zero_even_op.h:21,
from /detectron/detectron/ops/zero_even_op.cc:17:
/usr/local/caffe2_build/include/caffe2/core/tensor.h:687:17: note: candidate: caffe2::TIndex caffe2::Tensor::dim(int) const [with Context = caffe2::CPUContext; caffe2::TIndex = long int]
inline TIndex dim(const int i) const {
^
/usr/local/caffe2_build/include/caffe2/core/tensor.h:687:17: note: candidate expects 1 argument, 0 provided
/detectron/detectron/ops/zero_even_op.cc:33:27: error: 'class caffe2::Tensorcaffe2::CPUContext' has no member named 'numel'
for (auto i = 0; i < Y->numel(); i += 2) {
^
CMakeFiles/caffe2_detectron_custom_ops.dir/build.make:62: recipe for target 'CMakeFiles/caffe2_detectron_custom_ops.dir/detectron/ops/zero_even_op.cc.o' failed
make[3]: Leaving directory '/detectron/build'
make[3]: *** [CMakeFiles/caffe2_detectron_custom_ops.dir/detectron/ops/zero_even_op.cc.o] Error 1
make[2]: *** [CMakeFiles/caffe2_detectron_custom_ops.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/caffe2_detectron_custom_ops.dir/all' failed
Scanning dependencies of target caffe2_detectron_custom_ops_gpu
make[3]: Leaving directory '/detectron/build'
make[3]: Entering directory '/detectron/build'
[ 60%] Building CXX object CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/detectron/ops/zero_even_op.cc.o
In file included from /usr/local/caffe2_build/include/caffe2/core/allocator.h:22:0,
from /usr/local/caffe2_build/include/caffe2/core/context.h:25,
from /detectron/detectron/ops/zero_even_op.h:20,
from /detectron/detectron/ops/zero_even_op.cc:17:
/detectron/detectron/ops/zero_even_op.cc: In member function 'bool caffe2::ZeroEvenOp<T, Context>::RunOnDevice() [with T = float; Context = caffe2::CPUContext]':
/detectron/detectron/ops/zero_even_op.cc:25:23: error: no matching function for call to 'caffe2::Tensorcaffe2::CPUContext::dim() const'
CAFFE_ENFORCE(X.dim() == 1);
^
In file included from /usr/local/caffe2_build/include/caffe2/core/net.h:34:0,
from /usr/local/caffe2_build/include/caffe2/core/operator.h:29,
from /detectron/detectron/ops/zero_even_op.h:21,
from /detectron/detectron/ops/zero_even_op.cc:17:
/usr/local/caffe2_build/include/caffe2/core/tensor.h:687:17: note: candidate: caffe2::TIndex caffe2::Tensor::dim(int) const [with Context = caffe2::CPUContext; caffe2::TIndex = long int]
inline TIndex dim(const int i) const {
^
/usr/local/caffe2_build/include/caffe2/core/tensor.h:687:17: note: candidate expects 1 argument, 0 provided
/detectron/detectron/ops/zero_even_op.cc:33:27: error: 'class caffe2::Tensorcaffe2::CPUContext' has no member named 'numel'
for (auto i = 0; i < Y->numel(); i += 2) {
^
make[3]: *** [CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/detectron/ops/zero_even_op.cc.o] Error 1
CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/build.make:69: recipe for target 'CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/detectron/ops/zero_even_op.cc.o' failed
make[3]: Leaving directory '/detectron/build'
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/all' failed
make[2]: Leaving directory '/detectron/build'
make[2]: *** [CMakeFiles/caffe2_detectron_custom_ops_gpu.dir/all] Error 2
make[1]: *** [all] Error 2
Makefile:127: recipe for target 'all' failed
make[1]: Leaving directory '/detectron/build'
make: *** [ops] Error 2
Makefile:13: recipe for target 'ops' failed
The command '/bin/sh -c make ops' returned a non-zero code: 2

Parallel inference with visual genome categories.

I want to run inference on a big dataset of images. tools/infer_simple.py processes each image sequentially. All I want is to extract bounding boxes and their features. Can someone please guide me on how to do this more efficiently? Is there already support for running inference on batches rather than on each image individually? I am new to the Detectron framework; any help is really appreciated.
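
A sketch, not an existing repo feature, of one common way to speed this up without true batching: shard the image list across one worker process per GPU, where each worker runs the same per-image calls that tools/infer_simple.py uses. The config, weights path, image glob, GPU count, and the save step are placeholders to adapt.

# parallel_infer_sketch.py -- shard images across GPU-bound workers (Python 2).
import glob
import os
import multiprocessing as mp

def worker(gpu_id, image_paths):
    # Restrict this process to one GPU before Caffe2 initializes.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    import cv2
    from caffe2.python import workspace
    from core.config import cfg, merge_cfg_from_file, assert_and_infer_cfg
    import core.test_engine as infer_engine
    import utils.c2 as c2_utils

    workspace.GlobalInit(['caffe2', '--caffe2_log_level=0'])
    c2_utils.import_detectron_ops()
    merge_cfg_from_file(
        'configs/bbox2mask_vg/eval_sw/runtest_clsbox_2_layer_mlp_nograd.yaml')
    cfg.NUM_GPUS = 1
    assert_and_infer_cfg(cache_urls=False)
    model = infer_engine.initialize_model_from_cfg(
        'lib/datasets/data/trained_models/33241332_model_final_coco2vg3k_seg.pkl')

    for path in image_paths:
        im = cv2.imread(path)
        with c2_utils.NamedCudaScope(0):
            cls_boxes, cls_segms, cls_keyps = infer_engine.im_detect_all(
                model, im, None)
        # TODO: save cls_boxes (and any features you need) for `path` here.

if __name__ == '__main__':
    images = sorted(glob.glob('/path/to/images/*.jpg'))  # placeholder glob
    n_gpus = 2                                           # assumption: 2 GPUs
    shards = [images[i::n_gpus] for i in range(n_gpus)]
    procs = [mp.Process(target=worker, args=(i, shard))
             for i, shard in enumerate(shards)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

With CUDA_VISIBLE_DEVICES set per process, each worker sees a single device, so NamedCudaScope(0) always refers to that worker's own GPU.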

KeyError: u'Non-existent config key: BODY_UV_RCNN'

Hi guys,
I am having trouble debugging the following error. I tried a few fixes, such as checking the Detectron PYTHONPATH, but it still doesn't work for me. Can someone help me figure out what the problem is?

(caffe2_env) (/data1/wdyli/miniconda3) [wdyli@TENCENT64 ~/densepose]$ python tools/infer_simple.py     --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml     --output-dir /data1/wdyli/densepose/DensePoseData/sample_output/     --image-ext jpg     --wts https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl     /data1/wdyli/densepose/DensePoseData/demo_data/demo_im.jpg
Found Detectron ops lib: /data1/wdyli/miniconda2/lib/python2.7/site-packages/torch/lib/libcaffe2_detectron_ops_gpu.so
[E init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Traceback (most recent call last):
  File "tools/infer_simple.py", line 140, in <module>
    main(args)
  File "tools/infer_simple.py", line 87, in main
    merge_cfg_from_file(args.cfg)
  File "/data1/wdyli/densepose/detectron/detectron/core/config.py", line 1135, in merge_cfg_from_file
    _merge_a_into_b(yaml_cfg, __C)
  File "/data1/wdyli/densepose/detectron/detectron/core/config.py", line 1185, in _merge_a_into_b
    raise KeyError('Non-existent config key: {}'.format(full_key))
KeyError: u'Non-existent config key: BODY_UV_RCNN'

dockerfile error

Dockerfile error.
There is no /detectron/lib directory; it should just be /detectron.

This :
WORKDIR /detectron/lib
RUN make

Should be this:
WORKDIR /detectron
RUN make

train on own dataset

I converted my dataset to the COCO format. The dataset has no segmentation, so I set "segmentation": [] in the annotations and set cfg.TRAIN.MRCNN_LABELS_TO_KEEP = (). I also converted the pretrained COCO MXRCNN model to my dataset, following convert_coco_model_cityscapes.py.

Finally, I got this error:
Exception encountered running PythonOp function: ValueError: min() arg is an empty sequence

Does cfg.TRAIN.MRCNN_LABELS_TO_KEEP = () mean that segmentation will not be used?
How should I train the bbox branch on my dataset, which has no segmentation, starting from the COCO-pretrained MXRCNN model?
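
Not an official answer, but one workaround sometimes used when a COCO-format pipeline rejects empty segmentation lists (which is one plausible source of the min() over an empty sequence): fill each annotation's segmentation with a rectangular polygon derived from its bbox, so the mask code always sees a valid, if trivial, polygon. A sketch, with hypothetical annotation paths:

# Sketch: turn bbox-only COCO annotations into box-shaped polygons.
# bbox is [x, y, w, h] in the COCO convention.
import json

def box_to_polygon(bbox):
    x, y, w, h = bbox
    return [[x, y, x + w, y, x + w, y + h, x, y + h]]

with open('annotations/instances_train.json') as f:   # hypothetical path
    coco = json.load(f)

for ann in coco['annotations']:
    if not ann.get('segmentation'):
        ann['segmentation'] = box_to_polygon(ann['bbox'])

with open('annotations/instances_train_boxpoly.json', 'w') as f:
    json.dump(coco, f)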

GPU out of memory during inference.

Hi,
I have been trying to run the inference code for several days. However, when I run it with ResNet-101-FPN, it runs out of GPU memory after inferring several images. What I don't understand is that the number of images inferred before the failure changes with the total number of images in the input dataset: when I try to infer my whole dataset of 4K+ images, it gets through around 44 images, but when I then try with just those 44 images, it only gets through 18. I am still a beginner; help is much appreciated.
I also tried to run the code with ResNet-50-FPN. It shows me the following error:
self.append(rep.decode("string-escape"))
ValueError: invalid \x escape
I searched about this and learned that it is caused by a '\x' character. Is there any way I could remove it?

For 50 FPN:
python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw/runtest_clsbox_2_layer_mlp_nograd.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33241332_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

For 101 FPN
python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw_R101/runtest_clsbox_2_layer_mlp_nograd_R101.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k-R101 \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33219850_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

System information

  • Operating system: Ubuntu 18.04
  • CUDA version: 10
  • cuDNN version: 7.4.1
  • NVIDIA driver version: 410
  • GPU models (for all devices if they are not all the same): ? Nvidia GeForce GTX 1060
  • python --version output: 2.7
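
One hedged mitigation for the memory pressure (not verified on this exact setup): reduce the test-time resize, which is controlled by the TEST.SCALE / TEST.MAX_SIZE options visible in the im_detect_bbox traceback of the first issue above. A sketch of overriding them in a small driver script that otherwise follows tools/infer_simple.py:

# Sketch: lower the test-time image resize to cut per-image GPU memory.
# The values are illustrative; the FPN baseline configs typically use 800/1333.
from core.config import cfg, merge_cfg_from_file, assert_and_infer_cfg

merge_cfg_from_file(
    'configs/bbox2mask_vg/eval_sw_R101/runtest_clsbox_2_layer_mlp_nograd_R101.yaml')
cfg.TEST.SCALE = 400
cfg.TEST.MAX_SIZE = 667
cfg.NUM_GPUS = 1
assert_and_infer_cfg(cache_urls=False)
# ...then build the model and call im_detect_all per image, as infer_simple.py does.

Smaller inputs trade some detection and mask quality for memory, so this is more a diagnostic knob than a fix.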

Error while running inference for Part 2

I am trying to run inference for Part 2 of your code on Google Colab. I followed all the steps and installed Caffe2 and Detectron; both run fine. I ran:

python2 tools/infer_simple.py \
    --cfg configs/bbox2mask_vg/eval_sw/runtest_clsbox_2_layer_mlp_nograd.yaml \
    --output-dir /tmp/detectron-visualizations-vg3k \
    --image-ext jpg \
    --thresh 0.5 --use-vg3k \
    --wts lib/datasets/data/trained_models/33241332_model_final_coco2vg3k_seg.pkl \
    demo_vg3k

The code aborted with the error:
KeyError: u'Non-existent config key: MRCNN.BBOX2MASK'

The logging information is:

Found Detectron ops lib: /usr/local/lib/python2.7/dist-packages/torch/lib/libcaffe2_detectron_ops_gpu.so
[E init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Traceback (most recent call last):
  File "tools/infer_simple.py", line 162, in <module>
    main(args)
  File "tools/infer_simple.py", line 108, in main
    merge_cfg_from_file(args.cfg)
  File "/content/drive/My Drive/seg_every_thing/detectron/detectron/core/config.py", line 1152, in merge_cfg_from_file
    _merge_a_into_b(yaml_cfg, __C)
  File "/content/drive/My Drive/seg_every_thing/detectron/detectron/core/config.py", line 1212, in _merge_a_into_b
    _merge_a_into_b(v, b[k], stack=stack_push)
  File "/content/drive/My Drive/seg_every_thing/detectron/detectron/core/config.py", line 1202, in _merge_a_into_b
    raise KeyError('Non-existent config key: {}'.format(full_key))
KeyError: u'Non-existent config key: MRCNN.BBOX2MASK'

I searched for this error and came across the same issue on your page ("#2"), which has been closed, and read the solution there. However, I am guessing my problem is due to the new version of Detectron: the currently available Detectron repository no longer contains the lib folder. I have not gone through the code thoroughly, so please help me solve this.
Thanks in advance.

  • PYTHONPATH environment variable: /env/python:/content/drive/My Drive/seg_every_thing/detectron/detectron

  • python --version output: 2.7
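
This looks like the same root cause as the earlier MRCNN.BBOX2MASK issue: the PYTHONPATH above resolves core.config from an upstream Detectron checkout (seg_every_thing/detectron/detectron), which does not define that key, instead of from the lib directory shipped with seg_every_thing. A hedged Colab-style sketch (paths copied from this issue, adjust as needed) that puts the bundled lib first before launching the tool from a shell cell:

# Sketch for a Colab cell: make the seg_every_thing copy of the Detectron lib
# (which should define MRCNN.BBOX2MASK) take precedence over the upstream checkout.
import os

repo = '/content/drive/My Drive/seg_every_thing'
os.environ['PYTHONPATH'] = ':'.join(
    [os.path.join(repo, 'lib'), os.environ.get('PYTHONPATH', '')])
# A later "!python2 tools/infer_simple.py ..." cell inherits this environment
# and should now import the bundled core/config.py.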
