Google's Deep Dream
Keras 2.1.2 implementation of Google's Deep Dream using VGG19, based on the Google blog post, whose original implementation uses Caffe with GoogLeNet (a.k.a. Inception Net). #deepdream
INPUT (IM1, IM2) : OUTPUT
(Output 2: with #geekodour at BITS Pilani, dreamed using the Python script from here, which uses Inception_V3.)
Tweaking the hyperparameters, we have:
step = 0.02
num_octave = 5
octave_scale = 1.4
iterations = 4
max_loss = 10.
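As a quick sanity check on num_octave and octave_scale, here is a small sketch of how these two values determine the successive image sizes at which gradient ascent runs, smallest first, as in the standard multi-scale Deep Dream loop (the helper name octave_shapes is mine, not from dream.py):

```python
def octave_shapes(original_shape, num_octave=5, octave_scale=1.4):
    """Successive (height, width) sizes for gradient ascent, smallest first.

    Each octave shrinks the previous size by octave_scale, mirroring the
    multi-scale loop used in Deep Dream implementations."""
    shapes = [tuple(int(dim / (octave_scale ** i)) for dim in original_shape)
              for i in range(num_octave)]
    return shapes[::-1]  # run gradient ascent from small to large

# e.g. a 600x600 input is dreamed at 5 scales:
print(octave_shapes((600, 600)))
```

With num_octave = 5 and octave_scale = 1.4, the image is dreamed at five progressively larger scales, with the result of each scale upsampled and re-dreamed at the next.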
Weights ( VGG19 )
Before running this .py script, download the weights for the VGG19 ( ImageNet trained ) model at:
You can also try it on ResNet50; download the weights for the ResNet50 model at:
and make sure the variable weights_path in this script matches the location of the file.
default_dir = /Users/User/.keras/models/
Also, download the Inception_V3 weights here : tf_top, tf_no_top
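Before loading the model, it can help to verify that weights_path actually points at the downloaded file, so the script fails early with a clear message instead of partway through. A minimal sketch (the helper find_weights is mine, not part of dream.py):

```python
import os

def find_weights(filename, models_dir=None):
    """Return the full path to a weights file, or None if it is missing.

    models_dir defaults to the standard Keras cache directory, matching
    the default_dir noted above."""
    models_dir = models_dir or os.path.expanduser("~/.keras/models/")
    path = os.path.join(models_dir, filename)
    return path if os.path.exists(path) else None
```

If find_weights returns None, download the weights and set weights_path in the script accordingly.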
Dependencies:
1. keras 2.1.2
2. scipy 0.19.1
3. tensorflow 1.4
4. skimage
5. numpy
6. matplotlib
7. CUDA 8 & cuDNN 6 (for GPU support; my system: GTX 960M)
Run .py (Start Dreaming):
> python dream.py
Why are there so many dog heads, chalices, Japanese-style buildings and eyes being imagined by these neural networks?
Nearly all of these images are being created by 'reading the mind' of neural networks that were trained on the ImageNet dataset. This dataset has lots of different types of images within it, but there happen to be a ton of dogs, chalices, etc.
If you were to train your own neural network with lots of images of hands then you could generate your own deepdream images from this net and see everything be created from hands.
Here is an implementation using MIT's Places dataset.
We must go deeper: Iterations
If we apply the algorithm iteratively to its own output and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. Here's my implementation; I have added the frames to this repository.
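The feed-the-output-back-in loop can be sketched as follows. This is an illustrative sketch, not the code from dream.py: dream_fn stands in for one full deep-dream pass, and the crop-after-zoom logic is one common way to keep the frame size fixed.

```python
import numpy as np
from scipy.ndimage import zoom

def dream_zoom(img, dream_fn, n_frames=10, zoom_factor=1.05):
    """Repeatedly dream an image, zooming toward the centre between frames.

    img      -- float array of shape (H, W, 3)
    dream_fn -- one full deep-dream pass (placeholder for the routine in dream.py)
    Returns the list of dreamed frames."""
    frames = []
    h, w = img.shape[:2]
    for _ in range(n_frames):
        img = dream_fn(img)
        frames.append(img)
        # Zoom in slightly, then crop back to the original size around the centre,
        # so the next iteration dreams a magnified view of the last frame.
        zoomed = zoom(img, (zoom_factor, zoom_factor, 1), order=1)
        zh, zw = zoomed.shape[:2]
        top, left = (zh - h) // 2, (zw - w) // 2
        img = zoomed[top:top + h, left:left + w]
    return frames
```

Stitching the returned frames into a video (e.g. with matplotlib or ffmpeg) gives the endless-zoom effect.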
Extension: Visualizing my face in the Golden Gate Bridge
I have downloaded this photo of the Golden Gate Bridge. Let's try to dream my face, with effects from different layers.