
clouddream's Introduction

Dockerized deepdream: Generate ConvNet Art in the Cloud

Google recently released the deepdream software package for generating images like

ConvNet Art

which uses the Caffe Deep Learning Library and a cool iPython notebook example.

Setting up Caffe, Python, and all of the required dependencies is not trivial if you haven't done it before! The good news is that a GPU isn't required, as long as you're willing to wait a couple of extra seconds for each image to be generated.

Let's make it brain-dead simple to launch your very own deepdreaming server (in the cloud, on an Ubuntu machine, Mac via Docker, and maybe even Windows if you try out Kitematic by Docker)!

Motivation

I decided to create a self-contained Caffe+GoogLeNet+Deepdream Docker image which has everything you need to generate your own deepdream art. In order to make the Docker image very portable, it uses the CPU version of Caffe and comes bundled with the GoogLeNet model.

The image was built on Docker Hub, and advanced users can pull the final image down directly via:

docker pull visionai/clouddream

The Docker image is 2.5GB, but it contains a precompiled version of Caffe, all of the Python dependencies, as well as the pretrained GoogLeNet model.

For those of you who are new to Docker, I hope you will pick up some valuable engineering skills and tips along the way. Docker makes it very easy to bundle complex software. If you're somebody like me who likes to keep a clean Mac OS X on a personal laptop and do the heavy lifting in the cloud, then read on.

Instructions

We will be monitoring the inputs directory for source images and dumping results into the outputs directory. Nginx (also inside a Docker container) serves the resulting files, and a simple AngularJS GUI renders the images in a webpage.

Prerequisite:

You've launched a cloud instance using a VPS provider like DigitalOcean, and this instance has Docker running. If you don't know about DigitalOcean, you should give them a try: you can launch a Docker-ready cloud instance in a few minutes. If you're going to set up a new DigitalOcean account, consider using my referral link: https://www.digitalocean.com/?refcode=64f90f652091.

You will need an instance with at least 1GB of RAM for processing small output images.

Let's say our cloud instance is at the address 1.2.3.4 and we set it up so that it contains our SSH key for passwordless log-in.

ssh root@1.2.3.4
git clone https://github.com/VISIONAI/clouddream.git
cd clouddream
./start.sh

To make sure everything is working properly, you can run:

docker ps

You should see three running containers: deepdream-json, deepdream-compute, and deepdream-files.

root@deepdream:~/clouddream# docker ps
CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                         NAMES
21d495211abf        ubuntu:14.04          "/bin/bash -c 'cd /o   7 minutes ago       Up 7 minutes                                      deepdream-json
7dda17dafa5a        visionai/clouddream   "/bin/bash -c 'cd /o   7 minutes ago       Up 7 minutes                                      deepdream-compute
010427d8c7c2        nginx                 "nginx -g 'daemon of   7 minutes ago       Up 7 minutes        0.0.0.0:80->80/tcp, 443/tcp   deepdream-files

If you want to stop the processing, just run:

./stop.sh

If you want to jump inside the container to debug something, just run:

./enter.sh
cd /opt/deepdream
python deepdream.py
#This will take input.jpg, run deepdream, and write output.jpg
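Behind the scenes, the container's processing loop keeps comparing the inputs directory against outputs and dreams whatever hasn't been processed yet. Here is a minimal Python sketch of that behavior (my own illustration, not the repo's process_images.sh; the transform argument stands in for the deepdream.py call):

```python
import shutil
from pathlib import Path

def process_pending(inputs_dir, outputs_dir, transform):
    """Run transform on every input image that has no output yet."""
    inputs, outputs = Path(inputs_dir), Path(outputs_dir)
    outputs.mkdir(parents=True, exist_ok=True)
    processed = []
    for src in sorted(inputs.glob("*.jpg")):
        dst = outputs / src.name
        if dst.exists():
            continue  # skip already-dreamed images, so repeated runs are cheap
        transform(src, dst)  # in the real container this step runs deepdream.py
        processed.append(src.name)
    return processed
```

Because already-processed images are skipped, clearing files out of the outputs directory is enough to trigger reprocessing.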

Feeding images into deepdream

From your local machine you can just scp images into the inputs directory inside deepdream as follows:

# From your local machine
scp images/*jpg root@1.2.3.4:~/clouddream/deepdream/inputs/

Instructions for Mac OS X and boot2docker

First, install boot2docker. Now start boot2docker.

boot2docker start

My boot2docker on Mac returns something like this:

Waiting for VM and Docker daemon to start...
.............o
Started.
Writing /Users/tomasz/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/tomasz/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/tomasz/.boot2docker/certs/boot2docker-vm/key.pem

To connect the Docker client to the Docker daemon, please set:
    export DOCKER_TLS_VERIFY=1
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/tomasz/.boot2docker/certs/boot2docker-vm

So I simply paste the last three lines (the ones starting with export) right into the terminal.

export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/tomasz/.boot2docker/certs/boot2docker-vm

Keep this IP address in mind. For me it is 192.168.59.103.

NOTE: if running a docker ps command fails at this point with an error about certificates, you can try:

boot2docker ssh sudo /etc/init.d/docker restart

Now proceed just like you're in a Linux environment.

cd ~/projects
git clone https://github.com/VISIONAI/clouddream.git
cd clouddream
./start.sh

You should now be able to visit http://192.168.59.103 in your browser.

Processing a YouTube video

If you don't have your own source of cool JPG images to process, or simply want to see what the output looks like on a YouTube video, I've included a short youtube.sh script which does all the work for you.

If you want to start processing the "Charlie Bit My Finger" video, simply run:

./youtube.sh https://www.youtube.com/watch?v=DDZQAJuB3rI

Then visit http://1.2.3.4:8000 to see the frames show up as they are processed one by one. The final result will be written to http://1.2.3.4/out.mp4
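Conceptually, youtube.sh wraps youtube-dl and two ffmpeg calls: split the video into numbered JPEG frames, let the monitoring loop dream over them, then stitch the dreamed frames back into a video. The exact flags in the script may differ; this sketch only shows the shape of the two ffmpeg invocations and the shared image-%05d.jpg frame-naming pattern that both sides must agree on (the 1 fps rate is an assumption):

```python
def extract_frames_cmd(video, frames_dir, fps=1):
    """Build an ffmpeg command that splits a video into numbered JPEGs."""
    return ["ffmpeg", "-i", video, "-r", str(fps),
            "%s/image-%%05d.jpg" % frames_dir]

def assemble_video_cmd(outputs_dir, out_file, fps=1):
    """Build an ffmpeg command that stitches the dreamed frames into an mp4;
    the input pattern must match the names produced by the extraction step."""
    return ["ffmpeg", "-r", str(fps),
            "-i", "%s/image-%%05d.jpg" % outputs_dir, out_file]
```

If the outputs directory is empty when the assembly step runs, ffmpeg fails with the "Could find no file with path" error that shows up in several of the issues below.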

Here are some frames from the Daft Punk - Pentatonix video:

deepdreaming Pentatonix

Navigating the Image Gallery

You should now be able to visit http://1.2.3.4 in your browser and see the resulting images appear in a nicely formatted mobile-ready grid.

You can also show only N images by changing the URL to something like this:

http://1.2.3.4/#/?N=20

And instead of showing N random images, you can view the latest images:

http://1.2.3.4/#/?latest

You can view the processing log here:

http://1.2.3.4/log.html

You can view the current image being processed:

http://1.2.3.4/input.jpg

You can view the current settings:

http://1.2.3.4/settings.json

Here is a screenshot of what things should look like when using the 'conv2/3x3' setting: deepdreaming Dali

And if you instead use the 'inception_4c/output' setting: deepdreaming Dali

Additionally, you can browse some more cool images on the deepdream.vision.ai server, which I've currently configured to run deepdream through some Dali art. When you go to the page, just hit refresh to see more goodies.

User contributed DeepDream images

Several people ran their own experiments on different images and different layers. For example, GitHub user ihaventkilledanybodysince1984 shows an example of different layer effects on a frog drawing.

deepdream frog

Check out the frog face effect gallery on imgur.

Changing image size and processing layer

Inside deepdream/settings.json you'll find a settings file that looks like this:

{
    "maxwidth" : 400,
    "layer" : "inception_4c/output"
}

You can change maxwidth to something larger like 1000 if you want big output images from big input images, but remember that you will need more RAM to process larger images. For testing, a maxwidth of 200 will give you results much faster. If you change the settings and want to regenerate outputs for your input images, simply remove the contents of the outputs directory:

rm deepdream/outputs/*
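As a rough guide to what maxwidth does, here is a sketch of the resizing rule (assuming simple proportional downscaling; the exact rounding in deepdream.py may differ):

```python
def output_size(width, height, maxwidth):
    """Shrink an image so its width is at most maxwidth, preserving aspect
    ratio; images already narrower than maxwidth are left untouched."""
    if width <= maxwidth:
        return width, height
    scale = float(maxwidth) / width
    return maxwidth, int(round(height * scale))
```

So a 1280x720 frame processed with the default maxwidth of 400 is dreamed at roughly 400x225, which is why smaller maxwidth values finish much faster and need less RAM.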

Possible values for layer are as follows. They come from the tmp.prototxt file which lists the layers of the GoogLeNet network used in this demo. Note that the ReLU and Dropout layers are not valid for deepdreaming.

"conv1/7x7_s2"
"pool1/3x3_s2"
"pool1/norm1"
"conv2/3x3_reduce"
"conv2/3x3"
"conv2/norm2"
"pool2/3x3_s2"
"inception_3a/1x1"
"inception_3a/3x3_reduce"
"inception_3a/3x3"
"inception_3a/5x5_reduce"
"inception_3a/5x5"
"inception_3a/pool"
"inception_3a/pool_proj"
"inception_3a/output"
"inception_3b/1x1"
"inception_3b/3x3_reduce"
"inception_3b/3x3"
"inception_3b/5x5_reduce"
"inception_3b/5x5"
"inception_3b/pool"
"inception_3b/pool_proj"
"inception_3b/output"
"pool3/3x3_s2"
"inception_4a/1x1"
"inception_4a/3x3_reduce"
"inception_4a/3x3"
"inception_4a/5x5_reduce"
"inception_4a/5x5"
"inception_4a/pool"
"inception_4a/pool_proj"
"inception_4a/output"
"inception_4b/1x1"
"inception_4b/3x3_reduce"
"inception_4b/3x3"
"inception_4b/5x5_reduce"
"inception_4b/5x5"
"inception_4b/pool"
"inception_4b/pool_proj"
"inception_4b/output"
"inception_4c/1x1"
"inception_4c/3x3_reduce"
"inception_4c/3x3"
"inception_4c/5x5_reduce"
"inception_4c/5x5"
"inception_4c/pool"
"inception_4c/pool_proj"
"inception_4c/output"
"inception_4d/1x1"
"inception_4d/3x3_reduce"
"inception_4d/3x3"
"inception_4d/5x5_reduce"
"inception_4d/5x5"
"inception_4d/pool"
"inception_4d/pool_proj"
"inception_4d/output"
"inception_4e/1x1"
"inception_4e/3x3_reduce"
"inception_4e/3x3"
"inception_4e/5x5_reduce"
"inception_4e/5x5"
"inception_4e/pool"
"inception_4e/pool_proj"
"inception_4e/output"
"pool4/3x3_s2"
"inception_5a/1x1"
"inception_5a/3x3_reduce"
"inception_5a/3x3"
"inception_5a/5x5_reduce"
"inception_5a/5x5"
"inception_5a/pool"
"inception_5a/pool_proj"
"inception_5a/output"
"inception_5b/1x1"
"inception_5b/3x3_reduce"
"inception_5b/3x3"
"inception_5b/5x5_reduce"
"inception_5b/5x5"
"inception_5b/pool"
"inception_5b/pool_proj"
"inception_5b/output"

The GUI

The final GUI is based on https://github.com/akoenig/angular-deckgrid.

Credits

The included Dockerfile is an extended version of https://github.com/taras-sereda/docker_ubuntu_caffe,

which is itself a modification of tleyden's original Caffe CPU master Dockerfile: https://github.com/tleyden/docker/tree/master/caffe/cpu/master

The Dockerfile uses the deepdream code from: https://github.com/google/deepdream

License

MIT License. Have fun. Never stop learning.

--Enjoy! The vision.ai team

clouddream's People

Contributors: adamstelmaszczyk, jannickfahlbusch, nicosuave, pentusha, quantombone, radames, trinitronx

clouddream's Issues

Followed digitalocean instructions, getting No such file or directory

I1124 07:35:09.549391 72 upgrade_proto.cpp:618] Attempting to upgrade input file specified using deprecated V1LayerParameter: ../caffe/models/bvlc_googlenet/bvlc_googlenet.caffemodel
I1124 07:35:09.681547 72 upgrade_proto.cpp:626] Successfully upgraded file specified using deprecated V1LayerParameter
/usr/lib/python2.7/dist-packages/scipy/ndimage/interpolation.py:532: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
"the returned array has changed.", UserWarning)
./process_images_once.sh: line 24: 72 Killed python deepdream.py
cp: cannot stat 'output.jpg': No such file or directory
rm: cannot remove 'output.jpg': No such file or directory
ffmpeg version git-2014-07-28-a776238 Copyright (c) 2000-2014 the FFmpeg developers
built on Jul 28 2014 03:08:49 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
configuration: --shlibdir=/usr/lib64 --prefix=/usr --mandir=/usr/share/man --libdir=/usr/lib64 --enable-static --extra-cflags='-fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -g -fPIC -I/usr/include/gsm' --enable-gpl --disable-x11grab --enable-version3 --enable-pthreads --enable-avfilter --enable-libpulse --enable-libvpx --enable-libopus --enable-libass --disable-libx265 --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libxvid --enable-libx264 --enable-libschroedinger --enable-libgsm --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-postproc --disable-libdc1394 --enable-librtmp --enable-libfreetype --enable-avresample --enable-libtwolame --enable-libvo-aacenc --enable-gnutls --enable-nonfree --enable-libfdk-aac --enable-libfaac --enable-libopenjpeg --enable-gray --enable-libwebp
libavutil 52. 92.101 / 52. 92.101
libavcodec 55. 70.100 / 55. 70.100
libavformat 55. 49.100 / 55. 49.100
libavdevice 55. 13.102 / 55. 13.102
libavfilter 4. 11.102 / 4. 11.102
libavresample 1. 3. 0 / 1. 3. 0
libswscale 2. 6.100 / 2. 6.100
libswresample 0. 19.100 / 0. 19.100
libpostproc 52. 3.100 / 52. 3.100
[image2 @ 0x35c00c0] Could find no file with path '/tmp/outputs/image-%05d.jpg' and index in the range 0-4
/tmp/outputs/image-%05d.jpg: No such file or directory
cp: cannot stat '/tmp/out.mp4': No such file or directory

Adding models to the docker container

I'm attempting to add a new model to the Docker container, but I'm missing something, as I can't seem to get it to save.

I use ./enter.sh to enter the container, then run something like:

sudo apt-get install vim

exit the container and run:

sudo docker commit 89028ee7020d visionai/clouddream

I then restart the containers, however my changes don't seem to be reflected in the now running container. Could you possibly let me know what step I'm missing here or where I'm going wrong?

Many thanks!

Conversion failed - boot2docker + youtube.sh

@quantombone Thanks for putting together the docs on getting boot2 running, you're awesome.

I tried running the youtube script and get the following:

Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[image2 @ 0x2cb4d20] Could not open file : /tmp/images/image-00001.jpg
av_interleaved_write_frame(): Input/output error
frame=    1 fps=0.0 q=5.2 Lsize=N/A time=00:00:00.03 bitrate=N/A    
video:12kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!

I checked my tmp and both folders exist.

Errors running youtube.sh

root@dreamzzz:~/clouddream# ./youtube.sh https://www.youtube.com/watch?v=nuHfVn_cfHU
URL is https://www.youtube.com/watch?v=nuHfVn_cfHU
[youtube] nuHfVn_cfHU: Downloading webpage
[youtube] nuHfVn_cfHU: Extracting video information
WARNING: [youtube] nuHfVn_cfHU: Skipping DASH manifest: ExtractorError(u'Cannot decrypt signature without player_url; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see  https://yt-dl.org/update  on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.',)
[download] Destination: /tmp/video.mp4
[download] 100% of 9.40MiB in 00:03
ffmpeg version git-2014-07-28-a776238 Copyright (c) 2000-2014 the FFmpeg developers
  built on Jul 28 2014 03:08:49 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
  configuration: --shlibdir=/usr/lib64 --prefix=/usr --mandir=/usr/share/man --libdir=/usr/lib64 --enable-static --extra-cflags='-fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -g -fPIC -I/usr/include/gsm' --enable-gpl --disable-x11grab --enable-version3 --enable-pthreads --enable-avfilter --enable-libpulse --enable-libvpx --enable-libopus --enable-libass --disable-libx265 --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libxvid --enable-libx264 --enable-libschroedinger --enable-libgsm --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-postproc --disable-libdc1394 --enable-librtmp --enable-libfreetype --enable-avresample --enable-libtwolame --enable-libvo-aacenc --enable-gnutls --enable-nonfree --enable-libfdk-aac --enable-libfaac --enable-libopenjpeg --enable-gray --enable-libwebp
  libavutil      52. 92.101 / 52. 92.101
  libavcodec     55. 70.100 / 55. 70.100
  libavformat    55. 49.100 / 55. 49.100
  libavdevice    55. 13.102 / 55. 13.102
  libavfilter     4. 11.102 /  4. 11.102
  libavresample   1.  3.  0 /  1.  3.  0
  libswscale      2.  6.100 /  2.  6.100
  libswresample   0. 19.100 /  0. 19.100
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/video.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2015-06-24 07:54:29
  Duration: 00:01:04.04, start: 0.000000, bitrate: 1231 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1036 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 192 kb/s (default)
    Metadata:
      creation_time   : 2015-06-24 07:54:29
      handler_name    : IsoMedia File Produced by Google, 5-11-2011
[swscaler @ 0x34bd4a0] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/tmp/images/image-%05d.jpg':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    encoder         : Lavf55.49.100
    Stream #0:0(und): Video: mjpeg, yuvj420p, 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 1 fps, 1 tbn, 1 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc55.70.100 mjpeg
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame=   67 fps=6.3 q=4.4 Lsize=N/A time=00:01:07.00 bitrate=N/A dup=0 drop=1468    
video:1818kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
d6f897d68eece5d5b721505795b45619f0b3beaac751bce42ce44f39849247fb
7fe0e84fb475e215c8fdd34eaaa8e80d309c071d17bb314740157eb29261b5ba
libdc1394 error: Failed to initialize libdc1394
WARNING: Logging before InitGoogleLogging() is written to STDERR

followed by a ton of this stuff:

}
layer {
  name: "inception_5a/relu_5x5_reduce"
  typ
I0708 01:25:35.832787    22 net.cpp:370] Input 0 -> data
I0708 01:25:35.833078    22 layer_factory.hpp:74] Creating layer conv1/7x7_s2
I0708 01:25:35.833125    22 net.cpp:90] Creating Layer conv1/7x7_s2
I0708 01:25:35.833139    22 net.cpp:410] conv1/7x7_s2 <- data
I0708 01:25:35.833160    22 net.cpp:368] conv1/7x7_s2 -> conv1/7x7_s2
I0708 01:25:35.833247    22 net.cpp:120] Setting up conv1/7x7_s2
I0708 01:25:35.837800    22 net.cpp:127] Top shape: 10 64 112 112 (8028160)
I0708 01:25:35.837831    22 layer_factory.hpp:74] Creating layer conv1/relu_7x7
I0708 01:25:35.837862    22 net.cpp:90] Creating Layer conv1/relu_7x7
I0708 01:25:35.837877    22 net.cpp:410] conv1/relu_7x7 <- conv1/7x7_s2
I0708 01:25:35.837889    22 net.cpp:357] conv1/relu_7x7 -> conv1/7x7_s2 (in-place)
I0708 01:25:35.837904    22 net.cpp:120] Setting up conv1/relu_7x7
I0708 01:25:35.837927    22 net.cpp:127] Top shape: 10 64 112 112 (8028160)
I0708 01:25:35.837939    22 layer_factory.hpp:74] Creating layer pool1/3x3_s2
I0708 01:25:35.837955    22 net.cpp:90] Creating Layer pool1/3x3_s2
I0708 01:25:35.837965    22 net.cpp:410] pool1/3x3_s2 <- conv1/7x7_s2
I0708 01:25:35.837978    22 net.cpp:368] pool1/3x3_s2 -> pool1/3x3_s2

Logs look like

Deepdream ./image-00016.jpg 0 0 inception_4c/output (103, 182, 3) �[2K 0 1 inception_4c/output (103, 182, 3) �[2K 0 2 inception_4c/output (103, 182, 3) �[2K 0 3 inception_4c/output (103, 182, 3) �[2K 0 4 inception_4c/output (103, 182, 3) �[2K 0 5 inception_4c/output (103, 182, 3) �[2K 0 6 inception_4c/output (103, 182, 3) �[2K 0 7 inception_4c/output (103, 182, 3) �[2K 0 8 inception_4c/output (103, 182, 3) �[2K 0 9 inception_4c/output (103, 182, 3) �[2K 1 0 inception_4c/output (144, 255, 3) �[2K 1 1 inception_4c/output (144, 255, 3) �[2K 1 2 inception_4c/output (144, 255, 3) �[2K 1 3 inception_4c/output (144, 255, 3) �[2K 1 4 inception_4c/output (144, 255, 3) �[2K 1 5 inception_4c/output (144, 255, 3) �[2K 1 6 inception_4c/output (144, 255, 3) �[2K 1 7 inception_4c/output (144, 255, 3) �[2K 1 8 inception_4c/output (144, 255, 3) �[2K 1 9 inception_4c/output (144, 255, 3) �[2K 2 0 inception_4c/output (201, 357, 3) �[2K 2 1 inception_4c/output (201, 357, 3) �[2K 2 2 inception_4c/output (201, 357, 3) �[2K 2 3 inception_4c/output (201, 357, 3) �[2K 2 4 inception_4c/output (201, 357, 3) �[2K 2 5 inception_4c/output (201, 357, 3) �[2K 2 6 inception_4c/output (201, 357, 3) �[2K 2 7 inception_4c/output (201, 357, 3) �[2K 2 8 inception_4c/output (201, 357, 3) �[2K 2 9 inception_4c/output (201, 357, 3) �[2K 3 0 inception_4c/output (281, 500, 3) �[2K 3 1 inception_4c/output (281, 500, 3) �[2K 3 2 inception_4c/output (281, 500, 3) �[2K 3 3 inception_4c/output (281, 500, 3) �[2K 3 4 inception_4c/output (281, 500, 3) �[2K 3 5 inception_4c/output (281, 500, 3) �[2K 3 6 inception_4c/output (281, 500, 3) �[2K 3 7 inception_4c/output (281, 500, 3) �[2K 3 8 inception_4c/output (281, 500, 3) �[2K 3 9 inception_4c/output (281, 500, 3) �[2K Error Code is 0 Just created outputs/./image-00016.jpg Deepdream ./image-00061.jpg

lastly, after canceling it

KeyboardInterrupt
^Cffmpeg version git-2014-07-28-a776238 Copyright (c) 2000-2014 the FFmpeg developers
  built on Jul 28 2014 03:08:49 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
  configuration: --shlibdir=/usr/lib64 --prefix=/usr --mandir=/usr/share/man --libdir=/usr/lib64 --enable-static --extra-cflags='-fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -g -fPIC -I/usr/include/gsm' --enable-gpl --disable-x11grab --enable-version3 --enable-pthreads --enable-avfilter --enable-libpulse --enable-libvpx --enable-libopus --enable-libass --disable-libx265 --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libxvid --enable-libx264 --enable-libschroedinger --enable-libgsm --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-postproc --disable-libdc1394 --enable-librtmp --enable-libfreetype --enable-avresample --enable-libtwolame --enable-libvo-aacenc --enable-gnutls --enable-nonfree --enable-libfdk-aac --enable-libfaac --enable-libopenjpeg --enable-gray --enable-libwebp
  libavutil      52. 92.101 / 52. 92.101
  libavcodec     55. 70.100 / 55. 70.100
  libavformat    55. 49.100 / 55. 49.100
  libavdevice    55. 13.102 / 55. 13.102
  libavfilter     4. 11.102 /  4. 11.102
  libavresample   1.  3.  0 /  1.  3.  0
  libswscale      2.  6.100 /  2.  6.100
  libswresample   0. 19.100 /  0. 19.100
  libpostproc    52.  3.100 / 52.  3.100
[image2 @ 0x33380c0] Could find no file with path '/tmp/outputs/image-%05d.jpg' and index in the range 0-4
/tmp/outputs/image-%05d.jpg: No such file or directory
cp: cannot stat ‘/tmp/out.mp4’: No such file or directory

Consider adding instructions on how to increase boot2docker memory limit

I'm familiar with docker, but not boot2docker on OSX, and it took me quite a while to figure out how to increase the VM's memory size. You reference that it might be necessary to up the RAM for processing of larger images, so it would be helpful to have a quick link on how to change the configuration to match.

I followed the instructions here: https://docs.docker.com/articles/b2d_volume_resize/

Specifically,

  1. Initialise a default profile file to customise, using the boot2docker config > ~/.boot2docker/profile command.
  2. Edit the file and change the memory from 2 GB to 6 GB:
    Memory = 2048
    to
    Memory = 6144
  3. Run the following commands:
$ boot2docker poweroff
$ boot2docker destroy
$ boot2docker init
$ boot2docker up

How to Continue Iterations

Hello,

I can't seem to figure out how to continue the iterations to create the zooming-in loop, even after un-commenting the frame counter and relevant code at the end of the deepdream.py script.

I would love to be able to specify a frame count in the JSON settings file. Is this possible?
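For anyone else wondering about the zoom loop: the commented-out code at the end of deepdream.py feeds each output frame back in as the next input after a small center crop and resize. A rough sketch of the crop step (my own helper with an assumed 5% zoom per frame, not the script's exact code):

```python
import numpy as np

def zoom_crop(frame, scale=0.05):
    """Crop the central (1 - scale) fraction of a frame. Resizing the crop
    back to the original size before the next iteration is what produces
    the continuous zooming-in effect."""
    h, w = frame.shape[:2]
    dh, dw = int(h * scale / 2), int(w * scale / 2)
    return frame[dh:h - dh, dw:w - dw]
```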

Images aren't automatically being processed

I followed your directions for DigitalOcean (Ubuntu with Docker installed). I scp'ed the image to my server but noticed that nothing was displaying, so I manually ran process_images.sh and got this error:

./process_images.sh: 6: cd: can't cd to /opt/deepdream
./process_images.sh: 10: cd: can't cd to /opt/deepdream/inputs
/root/clouddream/deepdream/inputs
File is input.jpg
Deepdream input.jpg
cp: cannot stat ‘inputs/input.jpg’: No such file or directory
pwd is /root/clouddream/deepdream/inputs
python: can't open file 'deepdream_single.py': [Errno 2] No such file or directory
cp: cannot stat ‘output.jpg’: No such file or directory
./process_images.sh: 10: cd: can't cd to /opt/deepdream/inputs
/root/clouddream/deepdream
File is 6031848_1547091962_98large.jpg
Deepdream 6031848_1547091962_98large.jpg
pwd is /root/clouddream/deepdream
Traceback (most recent call last):
  File "deepdream_single.py", line 3, in <module>
    import numpy as np
ImportError: No module named numpy
cp: cannot stat ‘output.jpg’: No such file or directory

Is there something I'm doing wrong here? Sorry, I'm new with Linux, Python and such.

My fork might be of interest

Hi,

Sent you an email a few weeks ago as I started working on my fork, but not sure you received it.
I've heavily changed your original code for my needs, and I thought it might be worth discussing how we could work together to make more people benefit from them.

My fork is at https://github.com/hamstah/clouddream

Some of the changes I introduced

  • There is a web based manager that allows the uploading of new images by url/file upload
  • You can browse the input/output of the file from there
  • You can process any of the input images with parameters specified by the user (maxwidth, n_inter, n_octave etc), so you can have multiple outputs from the same image with different parameters
  • Processing is ordered and done by workers, you can see the queue and jobs in progress (you can have multiple workers in parallel). I had issues with the original project where some image would fail and it would get stuck in a loop trying to process them.
  • Use of docker compose to make it easier to spin the containers and scale the number of workers

It's obviously work in progress and I'm no docker expert, but I think it provides a good foundation to extend.

I'm thinking of moving the project out of the fork and into its own repo, as it's almost a complete rewrite.

Would be great to hear your thoughts.

I have a version of it running on a digital ocean droplet, I can give you access if you want to check it out.

Youtube Video Processing Does Not Work?

Whenever I launch the ./youtube.sh .... this is the output I get:
Could find no file with path '/tmp/outputs/image-%05d.jpg' and index in the range 0-4
/tmp/outputs/image-%05d.jpg: No such file or directory.

Anyone else get this?

Add a better way of pushing images to the server.

Hello.
I'm finding it very difficult to push images to the server.
Could you add something like a web file browser or an FTP server (with a password)?
Also, is it possible to do something with YouTube URLs?
For example, we put YouTube URLs in a text file and the server processes them.
E.g.: youtube_url layertype

Is it possible to do guided dreams in this build ?

As an aside, the installation really was brain-dead simple! I got it set up in 10 minutes and was able to produce my images. Thanks a lot for putting together such a clean setup. I would also love to train with my own dataset, so it would be awesome if you could support that someday.

DigitalOcean API + Deepdream Example

There should be an example that starts with a digital ocean API token, and then does these steps:

  1. Scaffold a new cloud instance (an Ubuntu appliance with Docker) "the server"
  2. Checkout this repository on the instance on the server
  3. Copy a directory of images from the client to the server
  4. Perform all of the computation in the cloud and copy the results back to the client
  5. Clean up the Application so that no more money has to be spent on keeping the DigitalOcean instance running

Is it possible to enable multithreading in the deepdream function?

I've noticed that the deepdream function that does the actual work runs on a single thread. So no matter how powerful a multicore processor you have, it will only use a very small portion of it.
I am a newbie in this field; can someone tell me if it's possible to have a multithreaded version?

I executed htop inside the compute container and I got this... notice that only core 1 is used while the others are idle.

  1  [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||100.0%]     5  [                                                                     0.0%]
  2  [                                                                     0.0%]     6  [                                                                     0.0%]
  3  [                                                                     0.0%]     7  [                                                                     0.0%]
  4  [                                                                     0.0%]     8  [                                                                     0.0%]
  Mem[|||||||||||||||||||||||||||||||||||||||||||||||||             1140/2001MB]     Tasks: 7, 1 thr; 2 running
  Swp[|||||||||||||||||                                               103/460MB]     Load average: 1.17 1.06 0.87 
                                                                                     Uptime: 05:00:11

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
  528 root       20   0 1633M 1037M 42900 R 100. 51.8  0:42.06 python deepdream.py
  544 root       20   0 23020  3292  2632 R  0.7  0.2  0:00.01 htop
    1 root       20   0 17960     0     0 S  0.0  0.0  0:00.07 /bin/bash -c cd /opt/deepdream && ./process_images.sh 2>&1 > log.html
    6 root       20   0 17964     0     0 S  0.0  0.0  0:00.00 /bin/bash ./process_images.sh
    7 root       20   0 17980     0     0 S  0.0  0.0  0:00.00 /bin/bash ./process_images_once.sh
    9 root       20   0 17984  1916  1784 S  0.0  0.1  0:00.04 /bin/bash ./process_images_once.sh
  529 root       20   0 1633M 1037M 42900 S  0.0 51.8  0:00.00 python deepdream.py
  530 root       20   0 18168  3300  2844 S  0.0  0.2  0:00.09 bash





F1Help  F2Setup F3SearchF4FilterF5Tree  F6SortByF7Nice -F8Nice +F9Kill  F10Quit
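Not a full answer, but one knob worth checking: the Python loop itself is single-threaded, and Caffe's CPU path hands the heavy matrix math to a BLAS library. If the image was built against a multithreaded BLAS such as OpenBLAS (an assumption I have not verified for this image), the thread count is controlled by an environment variable that must be set before Caffe is loaded:

```python
import os

# Hypothetical tweak: this only has an effect if Caffe was linked against
# OpenBLAS; other BLAS implementations use different variables.
os.environ["OPENBLAS_NUM_THREADS"] = "4"

# ...then import caffe / run deepdream.py as usual.
```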

GPU support?

Any interest in adding it? Could open up some very cool realtime (or near-realtime) options.

Here's a preview of the video I'm working on. Each frame takes a few minutes to render on my DigitalOcean Droplet with 2GB Ram.

fluid3b

Only works on some input images

I can only get the program to run on certain images.

Does not work (error code 1):
filename: noise.jpg
noise

Works but produces output with incorrect file extension:
filename: test.png (output file is a jpg with .png extension)
test

get it running on Windows

Could anyone point me in the right direction? I don't seem to find the right approach to get it fully running.

Make the GUI not say "Nice Photo, eh" for every image

The "Nice Photo" line can either go away, be randomly generated from a list of statements like "Cool photo" "Check this out" "Interesting", or mention the ConvNet layer targeted for the visualization like "conv2/3x3" (but this will require the generated json files to include this information).

