Comments (4)
Yeah, this one caught me out as well. The current dockerfile reads:
ENV FORCE_CUDA="1"
RUN pip install -e /detectron2_repo
The issue is that detectron2's setup.py
requires CUDA to be available, and as far as I can see there is no way to access CUDA during the build. So instead you have to build the image up to that point, start a container, perform the final step inside it, and then commit from outside the container.
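As an aside: nvcc itself only needs the CUDA toolkit at build time, not an actual GPU; what fails is PyTorch probing for a device to decide which architectures to compile for. A sketch of how the Dockerfile's final step could avoid the probe by naming target architectures explicitly (the architecture values here are my assumption, not tested; pick the ones matching your GPUs):

```dockerfile
ENV FORCE_CUDA="1"
# Compile for fixed SM architectures instead of probing an attached GPU.
# 6.1 covers GTX 1050 / 1080 Ti, 7.0 covers V100; +PTX adds forward compatibility.
ENV TORCH_CUDA_ARCH_LIST="6.1 7.0+PTX"
RUN pip install -e /detectron2_repo
```

With the probe sidestepped like this, the install could in principle happen inside `docker build` itself, with no commit step needed.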
Step 1
Dockerfile:
FROM nvidia/cuda:10.1-cudnn7-devel
# To use this Dockerfile:
# 1. `nvidia-docker build -t detectron2:v0 .`
# 2. `nvidia-docker run -it --name detectron2 detectron2:v0`
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
libpng-dev libjpeg-dev python3-opencv ca-certificates \
python3-dev build-essential pkg-config git curl wget automake libtool && \
rm -rf /var/lib/apt/lists/*
RUN curl -fSsL -O https://bootstrap.pypa.io/get-pip.py && \
python3 get-pip.py && \
rm get-pip.py
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install torch torchvision cython \
'git+https://github.com/facebookresearch/fvcore'
RUN pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 /detectron2_repo
ENV FORCE_CUDA="1"
docker build . -f Dockerfile.partial -t detectron2-partial
Step 2
docker run --runtime=nvidia -it detectron2-partial pip install -e /detectron2_repo
Step 3
Wait for that to finish, then do a docker ps -a
and look for the detectron2-partial
container where your pip install
ran (it will have exited once the install finished). It will have some hash like b1ab0d1e909b
, and you can then do:
docker commit b1ab0d1e909b detectron2
and you should have a docker image which has detectron2 installed.
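Putting the three steps together as one command sequence (the container name d2-build is just an illustration so you don't have to hunt for the hash, and this assumes the NVIDIA runtime is invoked via --runtime=nvidia):

```shell
# Step 1: build everything up to, but not including, the CUDA-dependent install
docker build . -f Dockerfile.partial -t detectron2-partial
# Step 2: run the final install with the GPU runtime so CUDA is visible
docker run --runtime=nvidia --name d2-build detectron2-partial \
  pip install -e /detectron2_repo
# Step 3: commit the (now stopped) container as the final image, then clean up
docker commit d2-build detectron2
docker rm d2-build
```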
But honestly, this isn't very nice. I'm sure there is a proper way to compile things that link against CUDA as part of a docker build, but I'm not sure what it is :)
from detectron2.
With #61, the build command nvidia-docker build -t detectron2:v0 .
works for me.
Thanks for the fix @ppwwyyxx
It now successfully compiles, but I get a similar bug to what would happen if I used my approach above to compile on a V100 (for example) and then tried to run on my laptop's GTX 1050 or on one of our local servers with GTX 1080 Ti cards. Namely:
> docker run --net=host --runtime=nvidia -u $(id -u):$(id -g) ... /detectron2/base:0.1 python3 /detectron2_repo/demo/demo.py --config-file /detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input ~/London1_input/frame_0003.jpg --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 --pid=53375 /var/lib/docker/overlay2/d74c2d66316c7b908576f4a954eafbc37be6ab3efa52c255ccc1758f7a5d4a36/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.
This error does not appear if I create the docker image before the final step and build detectron2 on each GPU in turn. Things I have tried include:
- Setting
TORCH_CUDA_ARCH_LIST="All"
- doesn't compile
Given that detectron2 does work on the 1080 Ti if I compile on it directly, I am led to believe there is some configuration of TORCH_CUDA_ARCH_LIST
that would make it work, but I've yet to find it.
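For anyone hitting the same wall: the GTX 1050 and GTX 1080 Ti are both compute capability 6.1 and the V100 is 7.0, so rather than "All", a hedged guess at a list covering both ends (space-separated numeric values are what PyTorch's extension builder accepts, as far as I know; untested):

```shell
# Sketch: compile kernels for both Pascal (6.1) and Volta (7.0),
# with PTX included so newer GPUs can JIT-compile at load time.
FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST="6.1 7.0+PTX" pip install -e /detectron2_repo
```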
Solved my problem using the build-then-commit method above, extremely helpful!
Ubuntu 18.04 and CUDA 10.1.