deepo

PLEASE NOTE, THE DEEP LEARNING FRAMEWORK WAR IS OVER, THIS PROJECT IS NO LONGER BEING MAINTAINED.


Deepo is an open framework to assemble specialized Docker images for deep learning research without pain. It provides a “lego set” of dozens of standard components for preparing deep learning tools, and a framework for assembling them into custom Docker images.

At the core of Deepo is a Dockerfile generator that

  • allows you to customize your deep learning environment with Lego-like modules
    • define your environment in a single command line,
    • then Deepo generates Dockerfiles with best practices
    • and does all the configuration for you
  • automatically resolves the dependencies for you
    • Deepo knows which combos (CUDA/cuDNN/Python/PyTorch/TensorFlow, ..., tons of dependencies) are compatible
    • and will pick the right versions for you
    • and arranges the installation sequence using topological sorting
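The topological sorting mentioned above can be sketched with Python's standard-library graphlib; the module names and the dependency map below are illustrative, not Deepo's actual module data:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Illustrative dependency map: each module lists the modules it needs first.
deps = {
    "python": [],
    "cuda": [],
    "cudnn": ["cuda"],
    "theano": ["python", "cudnn"],
    "pytorch": ["python", "cudnn"],
    "lasagne": ["python", "theano"],
}

# static_order() yields the modules in a valid installation sequence,
# with every dependency appearing before the module that needs it.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Any ordering that respects the edges is acceptable, which is why the generator can accept modules in any order on the command line.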

We also prepare a series of pre-built Docker images covering common framework combinations (see the tags listed below).




GPU Version

Step 1. Install Docker and nvidia-docker.

Step 2. Obtain the all-in-one image from Docker Hub

docker pull ufoym/deepo

Users in China who experience slow speeds when pulling the image from the public Docker registry can pull Deepo images from the China registry mirror by specifying the full path, including the registry, in the docker pull command, for example:

docker pull registry.docker-cn.com/ufoym/deepo

Now you can try this command:

docker run --gpus all --rm ufoym/deepo nvidia-smi

If this works, Deepo can use the GPU from inside the Docker container. If it does not, search the issues section of the nvidia-docker GitHub repository; many solutions are already documented. To get an interactive shell in a container that will not be automatically deleted after you exit, run:

docker run --gpus all -it ufoym/deepo bash

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g.

docker run --gpus all -it -v /host/data:/data -v /host/config:/config ufoym/deepo bash

This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using wrong data.

Please note that some frameworks (e.g. PyTorch) use shared memory to share data between processes. If multiprocessing is used, the default shared memory segment size the container runs with may not be enough; increase it with the --ipc=host or --shm-size option to docker run:

docker run --gpus all -it --ipc=host ufoym/deepo bash

CPU Version

Step 1. Install Docker.

Step 2. Obtain the all-in-one image from Docker Hub

docker pull ufoym/deepo:cpu

Now you can try this command:

docker run -it ufoym/deepo:cpu bash

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option, e.g.

docker run -it -v /host/data:/data -v /host/config:/config ufoym/deepo:cpu bash

This will make /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using wrong data.

Please note that some frameworks (e.g. PyTorch) use shared memory to share data between processes. If multiprocessing is used, the default shared memory segment size the container runs with may not be enough; increase it with the --ipc=host or --shm-size option to docker run:

docker run -it --ipc=host ufoym/deepo:cpu bash

You are now ready to begin your journey.

$ python

>>> import tensorflow
>>> import sonnet
>>> import torch
>>> import keras
>>> import mxnet
>>> import cntk
>>> import chainer
>>> import theano
>>> import lasagne
>>> import caffe
>>> import paddle

$ caffe --version

caffe version 1.0.0

$ darknet

usage: darknet <function>

Note that docker pull ufoym/deepo mentioned in Quick Start will give you a standard image containing all available deep learning frameworks. You can customize your own environment as well.

If you prefer a specific framework rather than an all-in-one image, just append a tag with the name of the framework. Take tensorflow for example:

docker pull ufoym/deepo:tensorflow

Step 1. Pull the all-in-one image

docker pull ufoym/deepo

Step 2. Run the image

docker run --gpus all -it -p 8888:8888 -v /home/u:/root --ipc=host ufoym/deepo jupyter lab --no-browser --ip=0.0.0.0 --allow-root --LabApp.allow_origin='*' --LabApp.root_dir='/root'

Step 1. Prepare the generator

git clone https://github.com/ufoym/deepo.git
cd deepo/generator

Step 2. generate your customized Dockerfile

For example, if you like pytorch and lasagne, then

python generate.py Dockerfile pytorch lasagne

or with CUDA 11.1 and CUDNN 8

python generate.py Dockerfile pytorch lasagne --cuda-ver 11.1 --cudnn-ver 8

This should generate a Dockerfile that contains everything needed to build pytorch and lasagne. The generator handles dependency resolution automatically and topologically sorts the module list, so you don't need to worry about missing dependencies or list order.

You can also specify the version of Python:

python generate.py Dockerfile pytorch lasagne python==3.6

Step 3. build your Dockerfile

docker build -t my/deepo .

This may take several minutes as it compiles a few libraries from scratch.

|              | modern-deep-learning | dl-docker | jupyter-deeplearning | Deepo         |
|--------------|----------------------|-----------|----------------------|---------------|
| ubuntu       | 16.04                | 14.04     | 14.04                | 18.04         |
| cuda         | X                    | 8.0       | 6.5-8.0              | 8.0-10.2/None |
| cudnn        | X                    | v5        | v2-5                 | v7            |
| onnx         | X                    | X         | X                    | O             |
| theano       | X                    | O         | O                    | O             |
| tensorflow   | O                    | O         | O                    | O             |
| sonnet       | X                    | X         | X                    | O             |
| pytorch      | X                    | X         | X                    | O             |
| keras        | O                    | O         | O                    | O             |
| lasagne      | X                    | O         | O                    | O             |
| mxnet        | X                    | X         | X                    | O             |
| cntk         | X                    | X         | X                    | O             |
| chainer      | X                    | X         | X                    | O             |
| caffe        | O                    | O         | O                    | O             |
| caffe2       | X                    | X         | X                    | O             |
| torch        | X                    | O         | O                    | O             |
| darknet      | X                    | X         | X                    | O             |
| paddlepaddle | X                    | X         | X                    | O             |
|              | CUDA 11.3 / Python 3.8                         | CPU-only / Python 3.8                 |
|--------------|------------------------------------------------|---------------------------------------|
| all-in-one   | latest, all, all-py38, py38-cu113, all-py38-cu113 | all-py38-cpu, all-cpu, py38-cpu, cpu |
| TensorFlow   | tensorflow-py38-cu113, tensorflow-py38, tensorflow | tensorflow-py38-cpu, tensorflow-cpu |
| PyTorch      | pytorch-py38-cu113, pytorch-py38, pytorch      | pytorch-py38-cpu, pytorch-cpu         |
| Keras        | keras-py38-cu113, keras-py38, keras            | keras-py38-cpu, keras-cpu             |
| MXNet        | mxnet-py38-cu113, mxnet-py38, mxnet            | mxnet-py38-cpu, mxnet-cpu             |
| Chainer      | chainer-py38-cu113, chainer-py38, chainer      | chainer-py38-cpu, chainer-cpu         |
| Darknet      | darknet-cu113, darknet                         | darknet-cpu                           |
| paddlepaddle | paddle-cu113, paddle                           | paddle-cpu                            |
|              | CUDA 11.3 / Python 3.6 | CUDA 11.1 / Python 3.6 | CUDA 10.1 / Python 3.6 | CUDA 10.0 / Python 3.6 | CUDA 9.0 / Python 3.6 | CUDA 9.0 / Python 2.7 | CPU-only / Python 3.6 | CPU-only / Python 2.7 |
|--------------|------------------------|------------------------|------------------------|------------------------|-----------------------|-----------------------|-----------------------|-----------------------|
| all-in-one   | py36-cu113, all-py36-cu113 | py36-cu111, all-py36-cu111 | py36-cu101, all-py36-cu101 | py36-cu100, all-py36-cu100 | py36-cu90, all-py36-cu90 | all-py27-cu90, all-py27, py27-cu90 | | all-py27-cpu, py27-cpu |
| all-in-one with jupyter | | | | | all-jupyter-py36-cu90 | all-py27-jupyter, py27-jupyter | | all-py27-jupyter-cpu, py27-jupyter-cpu |
| Theano       | theano-py36-cu113      | theano-py36-cu111      | theano-py36-cu101      | theano-py36-cu100      | theano-py36-cu90      | theano-py27-cu90, theano-py27 | | theano-py27-cpu |
| TensorFlow   | tensorflow-py36-cu113  | tensorflow-py36-cu111  | tensorflow-py36-cu101  | tensorflow-py36-cu100  | tensorflow-py36-cu90  | tensorflow-py27-cu90, tensorflow-py27 | | tensorflow-py27-cpu |
| Sonnet       | sonnet-py36-cu113      | sonnet-py36-cu111      | sonnet-py36-cu101      | sonnet-py36-cu100      | sonnet-py36-cu90      | sonnet-py27-cu90, sonnet-py27 | | sonnet-py27-cpu |
| PyTorch      | pytorch-py36-cu113     | pytorch-py36-cu111     | pytorch-py36-cu101     | pytorch-py36-cu100     | pytorch-py36-cu90     | pytorch-py27-cu90, pytorch-py27 | | pytorch-py27-cpu |
| Keras        | keras-py36-cu113       | keras-py36-cu111       | keras-py36-cu101       | keras-py36-cu100       | keras-py36-cu90       | keras-py27-cu90, keras-py27 | | keras-py27-cpu |
| Lasagne      | lasagne-py36-cu113     | lasagne-py36-cu111     | lasagne-py36-cu101     | lasagne-py36-cu100     | lasagne-py36-cu90     | lasagne-py27-cu90, lasagne-py27 | | lasagne-py27-cpu |
| MXNet        | mxnet-py36-cu113       | mxnet-py36-cu111       | mxnet-py36-cu101       | mxnet-py36-cu100       | mxnet-py36-cu90       | mxnet-py27-cu90, mxnet-py27 | | mxnet-py27-cpu |
| CNTK         | cntk-py36-cu113        | cntk-py36-cu111        | cntk-py36-cu101        | cntk-py36-cu100        | cntk-py36-cu90        | cntk-py27-cu90, cntk-py27 | | cntk-py27-cpu |
| Chainer      | chainer-py36-cu113     | chainer-py36-cu111     | chainer-py36-cu101     | chainer-py36-cu100     | chainer-py36-cu90     | chainer-py27-cu90, chainer-py27 | | chainer-py27-cpu |
| Caffe        | caffe-py36-cu113       | caffe-py36-cu111       | caffe-py36-cu101       | caffe-py36-cu100       | caffe-py36-cu90       | caffe-py27-cu90, caffe-py27 | | caffe-py27-cpu |
| Caffe2       | | | | | caffe2-py36-cu90, caffe2-py36, caffe2 | caffe2-py27-cu90, caffe2-py27 | caffe2-py36-cpu, caffe2-cpu | caffe2-py27-cpu |
| Torch        | torch-cu113            | torch-cu111            | torch-cu101            | torch-cu100            | torch-cu90            | torch-cu90, torch     | | torch-cpu |
| Darknet      | darknet-cu113          | darknet-cu111          | darknet-cu101          | darknet-cu100          | darknet-cu90          | darknet-cu90, darknet | | darknet-cpu |
@misc{ming2017deepo,
    author = {Ming Yang},
    title = {Deepo: set up deep learning environment in a single command line.},
    year = {2017},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/ufoym/deepo}}
}

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

Deepo is MIT licensed.

Contributors

aisensiy, aleksartamonov, arturgontijo, cuihaoleo, keithmattix, kklemon, ren477a, seiro-ogasawara, ufoym


Issues

Something wrong with MXNet

Wonderful Deepo! It made DL much more convenient, but I encountered an error while importing MXNet.
I can import tensorflow, caffe, pytorch, ..., but I can't import mxnet.
The Python interpreter told me: Illegal instruction (core dumped).
Has anybody solved this problem before?

Support for package managers

The deepo generator already supports plenty of common libraries and packages for deep learning and data science, but in many cases one needs additional libraries that are most probably not supported. After implementing some common NLP libraries for the generator myself (see #56, #57, #58), I realized that writing mostly redundant code for each pip or apt package doesn't really make sense. Instead I think it would be better to support common package managers and allow any such package to be added dynamically.

My idea is that with this approach, packages that are not explicitly implemented could still be added as follows:

python generator.py Dockerfile keras pip+pandas pip+nltk==3.3 apt+ping

Of course this wouldn't work out for ones that are difficult to install like OpenCV but at least it would give users an easy way to add any not explicitly supported packages.

I already started working on an implementation, but currently have to rethink everything again, as the current design of the generator and composer would probably not support such a feature.
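Hypothetically, specs in this proposed pip+/apt+ form could be parsed along these lines (the parse_spec helper below is illustrative, not part of the generator):

```python
def parse_spec(spec):
    """Split a spec like 'pip+nltk==3.3' into (manager, package, version).

    Returns None for plain names, which would fall through to the
    generator's built-in modules.
    """
    if "+" not in spec:
        return None  # a built-in module, not a package-manager spec
    manager, _, rest = spec.partition("+")
    if manager not in ("pip", "apt"):
        raise ValueError("unsupported package manager: %s" % manager)
    package, _, version = rest.partition("==")
    return manager, package, version or None

print(parse_spec("pip+nltk==3.3"))  # ('pip', 'nltk', '3.3')
print(parse_spec("apt+ping"))       # ('apt', 'ping', None)
```

Each parsed tuple could then be turned into a pip install or apt-get install line by the composer.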

Improve cachebility of generated Dockerfiles

Is there a reason why all the modules and other statements generated by the generator.py script are concatenated into a single RUN statement in the final Dockerfile?

Especially for large images with long and error-prone build processes, this means caching can't be used at all. I have had many cases where I had to try different combinations of packages/libraries or implement new ones myself, and it always costs a lot of time to rebuild the image all over again when a slight change near the end of the Dockerfile has to be made.

What about changing the generator so that each module does not generate the final string to be added to the Dockerfile directly, but instead returns a list of strings, each listed as a separate RUN statement in the final Dockerfile?
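The idea above can be sketched as follows; build_commands and compose are hypothetical names for illustration, not the actual generator/composer API:

```python
def build_commands():
    # Hypothetical: each module returns a list of shell commands
    # instead of one pre-concatenated string.
    return [
        "apt-get update",
        "pip install --no-cache-dir numpy",
        "pip install --no-cache-dir torch",
    ]

def compose(commands, split=True):
    """Emit one RUN per command (cache-friendly) or a single combined RUN."""
    if split:
        # Each command becomes its own layer, so Docker can reuse
        # cached layers when only a later command changes.
        return "\n".join("RUN %s" % c for c in commands)
    # Current behavior: everything in one RUN, one layer, no reuse.
    return "RUN %s" % " && \\\n    ".join(commands)

print(compose(build_commands()))
```

The trade-off is more image layers in exchange for partial rebuilds when only the tail of the Dockerfile changes.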

Python 3.6

Why is it Python 3.5 that is used and not 3.6?

Spyder (or any other IDE) support

Hello,

This Docker image makes developing models so much easier! Is there any way to access the ML libraries installed with this Docker image through an IDE such as Spyder?

The jupyter notebook access is nice, but being able to run code in Spyder would make debugging a little bit easier.

Error when import lasagne and theano

Thanks for the one-stop DL Dockerfiles. I'm using all-jupyter-py36-cu90 and found the following errors when importing theano and lasagne.

>>> import theano
/usr/local/lib/python3.6/dist-packages/theano/tests/main.py:6: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated, import from numpy.testing instead.
  from numpy.testing.nosetester import NoseTester
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/theano/__init__.py", line 156, in <module>
    import theano.gpuarray
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/__init__.py", line 33, in <module>
    from . import fft, dnn, opt, extra_ops, multinomial, reduction, sort, rng_mrg, ctc
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/fft.py", line 14, in <module>
    from .opt import register_opt, op_lifter, register_opt2
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/opt.py", line 2801, in <module>
    from .dnn import (local_abstractconv_cudnn,
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 339, in <module>
    handle_type = CUDNNDataType('cudnnHandle_t', 'cudnnDestroy')
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 259, in CUDNNDataType
    version=version(raises=False))
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 319, in version
    if not dnn_present():
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 209, in dnn_present
    dnn_present.avail, dnn_present.msg = _dnn_check_version()
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 180, in _dnn_check_version
    v = version()
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 334, in version
    "while the library is version %s." % v)
RuntimeError: Mixed dnn version. The header is version 7201 while the library is version 7102.

>>> import lasagne
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/lasagne/__init__.py", line 12, in <module>
    import theano
  File "/usr/local/lib/python3.6/dist-packages/theano/__init__.py", line 144, in <module>
    if hasattr(theano.tests, "TheanoNoseTester"):
AttributeError: module 'theano' has no attribute 'tests'

nvidia-docker run -v with all-py36-jupyter image

I'm running your all-py36 and all-py36-jupyter images on a remote, gpu-equipped machine. When I add -v ~/path/to/my/data:/data to the suggested nvidia-docker command for running the all-py36 image, I can access the data directory within the container. However, when I make the same addition to the suggested command for running the all-py36-jupyter image and then use the browser on my local machine to open the Jupyter Notebook at [SERVER ADDRESS]:[JUPYTER PORT], I don't see the data directory anywhere in the file menu. Am I doing something wrong?

unauthorized: authentication required

When I use the command 'docker pull ufoym/deepo', after downloading for a few minutes it fails with: unauthorized: authentication required. I'd like to know how to fix it, thanks!

Why can't I install net-tools in the container?

I want to run the commands ifconfig and ip in the container, but 'apt install' doesn't work. Is there something wrong with the package source?

I use the image ufoym/deepo:all-py27-jupyter

root@0767f14c50dc:/# apt install net-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package net-tools

root@0767f14c50dc:/# apt install iputils-ping
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package iputils-ping

Thank you in advance ^_^

Fail to build Docker image (`*-py27` related images e.g. `keras-py27`)

How to reproduce this issue?

git clone https://github.com/ufoym/deepo.git
cd deepo/docker
docker build -t ufoym/deepo:keras-py27 -f Dockerfile.keras-py27 .

Error log from docker build

  • while $PIP_INSTALLing setuptools


Collecting setuptools
  Downloading setuptools-39.0.1-py2.py3-none-any.whl (569kB)
Collecting pip
  Downloading pip-10.0.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: setuptools, pip
  Found existing installation: pip 8.1.1
    Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-10.0.0 setuptools-39.0.1
Traceback (most recent call last):
  File "/usr/bin/pip", line 9, in <module>
    from pip import main
ImportError: cannot import name main

Questions regarding features

Hi, I had a question regarding the integration of a few features and plans for the future of this distribution. This Docker image has made my life so much easier, so thank you for that. Do you ever plan to build in integration for Anaconda Python or JupyterLab? I was also wondering whether you have a set release schedule for updating the image with new CUDA distributions. Do you do that at certain intervals, or just for major releases, e.g. 8.0, 9.0, etc.? Thanks again, Deepo has been a life saver.

Best,
Steve

Add python libraries

Hi, I am new to Docker and there are still lots of things that I do not fully grasp. However, I managed to install the image with all the packages. I find that there are some Python libraries that could be helpful for me that are not installed. One way to overcome this is to install them manually each time I run the Docker image, but I was wondering whether there is a better way to do it so that I do not have to constantly reinstall the libraries.

Thanks in advance

OpenCV function not implemented

I got an unspecified error when trying to run OpenCV following this basic OpenCV getting-started tutorial. The error is:

OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /root/opencv/modules/highgui/src/window.cpp, line 636
Traceback (most recent call last):
  File "image_get_started.py", line 8, in <module>
    cv2.imshow("image", img)
cv2.error: OpenCV(3.4.1) /root/opencv/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

I was using keras-cpu-py36 with OpenCV added in its Dockerfile. Looking back at the OpenCV module, libgtk2.0-dev and pkg-config weren't yet included.

how to use?

Sir:
I want to know the location of caffe and the other software, so that I can run them with the samples.

Build for cuda 10 / RTX 20xx

Hey! I'm looking into what it would take to build deepo for CUDA 10. I'm creating this issue as somewhat of a placeholder; I'll put more details in the comments as I work through it. Is anyone already working on this?

Context: I'm one of the founders of vast.ai, a service that allows renting others' GPUs, and we use Docker; Deepo is one of our main suggested images. We have some folks with 2080 machines, but we're blocked on having image support for CUDA 10 builds of the frameworks; e.g., CUDA 9 TensorFlow seems to break on CUDA 10.

Thanks for the awesome images, btw :)

Error when import theano.

An error occurred when importing theano in Python:

>>> import theano
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/theano/__init__.py", line 156, in <module>
    import theano.gpuarray
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/__init__.py", line 33, in <module>
    from . import fft, dnn, opt, extra_ops, multinomial, reduction, sort, rng_mrg, ctc
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/fft.py", line 14, in <module>
    from .opt import register_opt, op_lifter, register_opt2
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/opt.py", line 2801, in <module>
    from .dnn import (local_abstractconv_cudnn,
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 339, in <module>
    handle_type = CUDNNDataType('cudnnHandle_t', 'cudnnDestroy')
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 259, in CUDNNDataType
    version=version(raises=False))
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 319, in version
    if not dnn_present():
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 209, in dnn_present
    dnn_present.avail, dnn_present.msg = _dnn_check_version()
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 180, in _dnn_check_version
    v = version()
  File "/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py", line 334, in version
    "while the library is version %s." % v)
RuntimeError: Mixed dnn version. The header is version 7104 while the library is version 7102.

python3-distutils missing therefore docker build fails.

Please add python3-distutils:

+++ b/generator/modules/python.py
@@ -34,6 +34,7 @@ class Python(Module):
         DEBIAN_FRONTEND=noninteractive $APT_INSTALL \
             python3.6 \
             python3.6-dev \
+            python3-distutils \
             && \

Otherwise the container rocks. Good job and thanks for the effort.

Cheerz

Using deepo remotely from macOS

This is more of a general question, if you don't know off the top of your head please feel free to close. Is there any way to use deepo / nvidia-docker with a macOS machine running Docker client commands remotely connecting to a Linux docker-machine with nvidia-docker configured?

Reproducible container builds via explicit version annotations

It is my understanding that, in its current form, the container builds are not really reproducible, as they use the most recent versions available. This is great in many cases but undesirable in others, e.g., if you want to build the exact same container that you built one year ago. Having explicit versions would mitigate this. Have you considered adding explicit version annotations in some form?
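A minimal sketch of what explicit version annotations might look like, assuming a hypothetical name-to-version map recorded at build time (the package versions below are illustrative):

```python
# Hypothetical: a version map captured when an image is first built could
# be replayed later to rebuild the same container.
pinned = {"tensorflow": "1.12.0", "keras": "2.2.4", "numpy": "1.15.4"}

def pin(pkgs):
    """Render explicit pip requirement strings from a name->version map."""
    return ["%s==%s" % (name, ver) for name, ver in sorted(pkgs.items())]

print(pin(pinned))
```

The generated Dockerfile would then install these pinned requirements instead of whatever is newest at build time.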

Tensorflow 1.4 error

I am trying to upgrade to TensorFlow 1.4 but I get this error:

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory


jupyter ERROR:

usage: ipykernel_launcher.py [-h] [--batch-size N] [--test-batch-size N]
[--epochs N] [--lr LR] [--momentum M] [--no-cuda]
[--seed S] [--log-interval N]
ipykernel_launcher.py: error: unrecognized arguments: -f /root/.local/share/jupyter/runtime/kernel-8d4d68c2-c1a5-4176-83d3-6d030e7d6e33.json

An exception has occurred, use %tb to see the full traceback.

SystemExit: 2

/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2918: UserWarning: To exit: use 'exit', 'quit', or Ctrl-D.
warn("To exit: use 'exit', 'quit', or Ctrl-D.", stacklevel=1)

Missing '\' in Dockerfile

Thank you for great Dockerfiles.

Small error found
Missing '\' (backslash) in Dockerfile.all-jupyter-py27-cpu at line 307

matplotlib.pyplot error when importing: ImportError: No module named '_tkinter', please install the python3-tk package

It looks like a package is missing (python3-tk?), which prevents normal usage of matplotlib.pyplot. Is there any workaround for this issue? Thanks!

Python 3.6.3 (default, Oct  6 2017, 08:44:35) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib.pyplot
Traceback (most recent call last):
  File "/usr/lib/python3.6/tkinter/__init__.py", line 37, in <module>
    import _tkinter
ModuleNotFoundError: No module named '_tkinter'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/matplotlib/pyplot.py", line 116, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "/usr/local/lib/python3.6/dist-packages/matplotlib/backends/__init__.py", line 60, in pylab_setup
    [backend_name], 0)
  File "/usr/local/lib/python3.6/dist-packages/matplotlib/backends/backend_tkagg.py", line 6, in <module>
    from six.moves import tkinter as Tk
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 92, in __get__
    result = self._resolve()
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 82, in _import_module
    __import__(name)
  File "/usr/lib/python3.6/tkinter/__init__.py", line 39, in <module>
    raise ImportError(str(msg) + ', please install the python3-tk package')
ImportError: No module named '_tkinter', please install the python3-tk package

Boost 1.65 requires Cmake 3.9.3 or newer

Ubuntu 16.04 supplies Cmake 3.5.1-1ubuntu3, but Boost 1.65.1, installed in the Deepo containers, requires 3.9.3 or newer:

https://stackoverflow.com/questions/42123509/cmake-finds-boost-but-the-imported-targets-not-available-for-boost-version

CMake cannot detect the dependencies between the different Boost libraries. They are explicitly implemented in FindBoost.

For every Boost release this information is added by the CMake maintainers and becomes part of the next CMake release. So you have to make sure that your CMake version was released after the Boost version you are trying to find.

Boost 1.63 requires CMake 3.7 or newer.
Boost 1.64 requires CMake 3.8 or newer.
Boost 1.65 and 1.65.1 require CMake 3.9.3 or newer.
Boost 1.66 requires CMake 3.11 or newer.
Boost 1.67 requires CMake 3.12 or newer.
Boost 1.68 and 1.69 require CMake 3.13 or newer.

Torch7 libjpeg missing?

When using torch and loading images, it reports that libjpeg was not found.

Reproduce

$ th

require('image')
image.lena() or image.load('xxxx.jpg')

Output

/usr/local/share/lua/5.1/trepl/init.lua:389: module 'libjpeg' not found:No LuaRocks module found for libjpeg
	no field package.preload['libjpeg']
	no file '/root/.luarocks/share/lua/5.1/libjpeg.lua'
	no file '/root/.luarocks/share/lua/5.1/libjpeg/init.lua'
	no file '/usr/local/share/lua/5.1/libjpeg.lua'
	no file '/usr/local/share/lua/5.1/libjpeg/init.lua'
	no file './libjpeg.lua'
	no file '/usr/local/share/luajit-2.1.0-beta1/libjpeg.lua'
	no file '/.luarocks/share/lua/5.1/libjpeg.lua'
	no file '/.luarocks/share/lua/5.1/libjpeg/init.lua'
	no file '/root/.luarocks/lib/lua/5.1/libjpeg.so'
	no file '/usr/local/lib/lua/5.1/libjpeg.so'
	no file './libjpeg.so'
	no file '/usr/local/lib/lua/5.1/loadall.so'
	no file '/.luarocks/lib/lua/5.1/libjpeg.so'	
warning: <libjpeg> could not be loaded (is it installed?)	
/usr/local/share/lua/5.1/trepl/init.lua:389: module 'liblua_png' not found:No LuaRocks module found for liblua_png
	no field package.preload['liblua_png']
	no file '/root/.luarocks/share/lua/5.1/liblua_png.lua'
	no file '/root/.luarocks/share/lua/5.1/liblua_png/init.lua'
	no file '/usr/local/share/lua/5.1/liblua_png.lua'
	no file '/usr/local/share/lua/5.1/liblua_png/init.lua'
	no file './liblua_png.lua'
	no file '/usr/local/share/luajit-2.1.0-beta1/liblua_png.lua'
	no file '/.luarocks/share/lua/5.1/liblua_png.lua'
	no file '/.luarocks/share/lua/5.1/liblua_png/init.lua'
	no file '/root/.luarocks/lib/lua/5.1/liblua_png.so'
	no file '/usr/local/lib/lua/5.1/liblua_png.so'
	no file './liblua_png.so'
	no file '/usr/local/lib/lua/5.1/loadall.so'
	no file '/.luarocks/lib/lua/5.1/liblua_png.so'	
warning: <liblua_png> could not be loaded (is it installed?)	
/usr/local/share/lua/5.1/dok/inline.lua:738: <image.lena> no bindings available to load images (libjpeg AND libpng missing)
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/dok/inline.lua:738: in function 'error'
	/usr/local/share/lua/5.1/image/init.lua:1690: in function 'lena'
	[string "_RESULT={image.lena()}"]:1: in main chunk
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/trepl/init.lua:661: in function 'repl'
	/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:204: in main chunk
	[C]: at 0x00405050	

Is it a Docker image bug or an OS-related missing library? How do I fix it?

Permission denied when creating new notebook

Hi, the issue I had is that the Jupyter Notebook is up and running, but I can't open any existing notebooks (blank) or create new ones (permission denied). The command I used was:

nvidia-docker run -d -p 6789:8888 \
-v /path/to/local/drive:/home/$USER \
--ipc=host \
--name Jupyter_$USER \
ufoym/deepo:all-jupyter-py36 jupyter notebook --no-browser --allow-root \
--ip=0.0.0.0 \
--NotebookApp.password='sha1:xxx'

Thanks in advance for your help!

Converting to a Singularity container

Thanks for your work! This looks great.

I was trying to convert the all-in-one Docker image to a Singularity image, but I'm having issues with the CUDA support. It converts OK (creates the image), but upon running it complains that it can't find CUDA.

See below for the command to convert.
sudo singularity build py27-gpu.img docker://ufoym/deepo:all-py27

Any tips? It would be nice if this worked with Singularity, as that is what is effectively used in HPC environments.

deepo:py27 caffe

When using the python2.7 version of deepo, I import caffe and there are no Net or version attributes. Is there a problem with this version?
Looking forward to your reply.
@ufoym

Tensorflow source build with latest cuda

Hi, I have a Dockerfile which builds TensorFlow from source with the latest CUDA 9.2 and cuDNN 7.1.4. Does this repo accept Dockerfiles with source builds? I can send a PR if you do! Thanks!

CuDnn library version does not match

Hi there, thanks for this great project.

I had an error running tensorflow with the GPU.
The error messages are like this:
E tensorflow/stream_executor/cuda/cuda_dnn.cc:396] Loaded runtime CuDNN library: 7102 (compatibility version 7100) but source was compiled with 7005 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
F tensorflow/core/kernels/conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)

I can fix the issue by downloading a lower version of cuDNN and running the following command:

$ dpkg -i libcudnn7_7.0.5.15-1+cuda9.0_amd64.deb

I just wonder can you fix this problem for the docker image?
Thanks.

About the caffe2.detectron

Hi @ufoym, does the deepo:caffe2 image include the Detectron module? I didn't find it after I pulled the image.

Dependency Issues

I keep having some random dependency issues when using Deepo. Are there any plans (or could there be) to move over to the Anaconda Python distribution from Continuum Analytics? I've never had dependency issues with their package management system.
