
neural-storyteller's Introduction

neural-storyteller

neural-storyteller is a recurrent neural network that generates little stories about images. This repository contains code for generating stories with your own images, as well as instructions for training new models.

*We were barely able to catch the breeze at the beach , and it felt as if someone stepped out of my mind . She was in love with him for the first time in months , so she had no intention of escaping . The sun had risen from the ocean , making her feel more alive than normal . She 's beautiful , but the truth is that I do n't know what to do . The sun was just starting to fade away , leaving people scattered around the Atlantic Ocean . I d seen the men in his life , who guided me at the beach once more .*

Samim has made an awesome blog post with lots of results here.

Some more results from an older model trained on Adventure books can be found here.

The whole approach contains 4 components:

  • skip-thought vectors
  • image-sentence embeddings
  • conditional neural language models
  • style shifting (described in this project)

The 'style-shifting' operation is what allows our model to transfer standard image captions to the style of stories from novels. The only source of supervision in our models is from Microsoft COCO captions. That is, we did not collect any new training data to directly predict stories given images.

Style shifting was inspired by A Neural Algorithm of Artistic Style, but the technical details are completely different.

How does it work?

We first train a recurrent neural network (RNN) decoder on romance novels. Each passage from a novel is mapped to a skip-thought vector. The RNN then conditions on the skip-thought vector and aims to generate the passage that it has encoded. We use romance novels collected from the BookCorpus dataset.

Parallel to this, we train a visual-semantic embedding between COCO images and captions. In this model, captions and images are mapped into a common vector space. After training, we can embed new images and retrieve captions.

Given these models, we need a way to bridge the gap between retrieved image captions and passages in novels. That is, if we had a function F that maps a collection of image caption vectors x to a book passage vector F(x), then we could feed F(x) to the decoder to get our story. There is no such parallel data, so we need to construct F another way.

It turns out that skip-thought vectors have some intriguing properties that allow us to construct F in a really simple way. Suppose we have 3 vectors: an image caption x, a "caption style" vector c and a "book style" vector b. Then we define F as

F(x) = x - c + b

which intuitively means: keep the "thought" of the caption, but replace the image caption style with that of a story. Then, we simply feed F(x) to the decoder.

How do we construct c and b? Here, c is the mean of the skip-thought vectors for Microsoft COCO training captions. We set b to be the mean of the skip-thought vectors for romance novel passages that are of length > 100.
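In code, the style shift is just vector arithmetic over skip-thought vectors. A minimal sketch (illustrative names, not the repo's actual functions):

import numpy

def shift_style(x, c, b):
    # F(x) = x - c + b: keep the "thought", swap caption style for book style.
    # x: skip-thought vector for an image's retrieved captions
    # c: "caption style" vector (mean over MS COCO training captions)
    # b: "book style" vector (mean over long romance-novel passages)
    return x - c + b

# toy usage with random 4800-dimensional vectors (the skip-thought dimensionality):
# x, c, b = [numpy.random.randn(4800) for _ in range(3)]
# passage_vector = shift_style(x, c, b)  # this is what gets fed to the decoder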

What kind of biases work?

Skip-thought vectors are sensitive to:

  • length (if you bias by really long passages, it will decode really long stories)
  • punctuation
  • vocabulary
  • syntactic style (loosely speaking)

For the last point, if you bias using text that is all written in the same way, the stories you get will also be written in the same way.

What can the decoder be trained on?

We use romance novels, but that is only because we have over 14 million passages to train on. Anything should work, provided you have a lot of text! If you want to train your own decoder, you can use the code available here. Any models trained there can be substituted here.

Dependencies

This code is written in Python. To use it you will need:

  • Python 2.7
  • A recent version of NumPy and SciPy
  • Lasagne
  • A version of Theano that Lasagne supports

For running on CPU, you will need to install Caffe and its Python interface.

Getting started

You will first need to download some pre-trained models and style vectors. Most of the materials are available in a single compressed file, which you can obtain by running

wget http://www.cs.toronto.edu/~rkiros/neural_storyteller.zip

Included is a pre-trained decoder on romance novels, the decoder dictionary, caption and romance style vectors, MS COCO training captions and a pre-trained image-sentence embedding model.

Next, you need to obtain the pre-trained skip-thoughts encoder. Go here and follow the instructions on the main page to obtain the pre-trained model.

Finally, we need the VGG-19 ConvNet parameters. You can obtain them by running

wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19.pkl

Note that this model is for non-commercial use only. Once you have all the materials, open config.py and specify the locations of all of the models and style vectors that you downloaded.

For running on CPU, you will need to download the VGG-19 prototxt and model by:

wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_19_layers.caffemodel
wget https://gist.githubusercontent.com/ksimonyan/3785162f95cd2d5fee77/raw/bb2b4fe0a9bb0669211cf3d0bc949dfdda173e9e/VGG_ILSVRC_19_layers_deploy.prototxt

You also need to set the pycaffe and model paths in config.py, and set the flag on line 8 as follows:

FLAG_CPU_MODE = True

Generating a story

The images directory contains some sample images that you can try the model on. In order to generate a story, open IPython and run the following:

import generate
z = generate.load_all()
generate.story(z, './images/ex1.jpg')

If everything works, it will first print out the nearest COCO captions to the image (predicted by the visual-semantic embedding model). Then it will print out a story.

Generation options

There are 2 knobs that can be tuned for generation: the number of retrieved captions to condition on as well as the beam search width. The defaults are

generate.story(z, './images/ex1.jpg', k=100, bw=50)

where k is the number of captions to condition on and bw is the beam width. These are reasonable defaults but playing around with these can give you very different outputs! The higher the beam width, the longer it takes to generate a story.
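For example, shrinking both knobs (illustrative values, not recommended settings):

generate.story(z, './images/ex1.jpg', k=20, bw=10)

will typically generate much faster, often at some cost to story quality.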

If you bias by song lyrics, you can turn on the lyric flag, which splits the output across multiple lines at commas. neural_storyteller.zip contains an additional bias vector called swift_style.npy, which is the mean of skip-thought vectors across Taylor Swift lyrics. If you point path_to_posbias to this vector in config.py, you can generate captions in the style of Taylor Swift lyrics. For example:

generate.story(z, './images/ex1.jpg', lyric=True)

should output

You re the only person on the beach right now
you know
I do n't think I will ever fall in love with you
and when the sea breeze hits me
I thought
Hey

Reference

This project does not have an associated paper. If you found this code useful, please consider citing:

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. "Skip-Thought Vectors." arXiv preprint arXiv:1506.06726 (2015).

@article{kiros2015skip,
  title={Skip-Thought Vectors},
  author={Kiros, Ryan and Zhu, Yukun and Salakhutdinov, Ruslan and Zemel, Richard S and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},
  journal={arXiv preprint arXiv:1506.06726},
  year={2015}
}

If you also use the BookCorpus data for training new models, please also consider citing:

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books." arXiv preprint arXiv:1506.06724 (2015).

@article{zhu2015aligning,
    title={Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books},
    author={Zhu, Yukun and Kiros, Ryan and Zemel, Richard and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    journal={arXiv preprint arXiv:1506.06724},
    year={2015}
}

neural-storyteller's People

Contributors

jlumpe, ryankiros, yknzhu


neural-storyteller's Issues

Precisely which dependencies does neural-storyteller have?

I spent a fair number of hours over the weekend trying to set up an ec2 instance to run this system, only to be completely overwhelmed by a series of progressively more obscure bugs and apparent incompatibilities. The readme vaguely says "get lasagne and a version of theano it supports", which is not precise enough to make it go.

All I can say is (╯=▃=)╯︵┻━┻

I would still like to get the software running. So, precisely which versions (ideally tags or git commits) of its dependencies are required? Exactly which version(s) of Linux can it run on? Is there an existing Docker image or AMI I could work from?

Many thanks in advance. I know how hard it is to maintain software and I am only asking for this as someone completely enamored by your work who wants to use it and do more cool things.
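For what it's worth, one combination reported further down in these issues (see "Unable to detect GPU") as at least importing is Lasagne 0.1 with Theano 0.7.0 on Python 2.7. An unofficial starting point, not a tested pin:

pip install numpy scipy
pip install Theano==0.7.0
pip install Lasagne==0.1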

Have no access to download VGG19 parameters

wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19.pkl
now shows an access denied error, so we cannot download this file.
Could you please help figure this out?

generate story error

/ais/gobi3/u/rkiros/storyteller/romance.npz
Loading skip-thoughts...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "generate.py", line 86, in load_all
    config.paths['sktables'])
  File "skipthoughts.py", line 29, in load_model
    with open('%s.pkl'%path_to_umodel, 'rb') as f:
IOError: [Errno 2] No such file or directory: '/u/rkiros/public_html/models/uni_skip.npz.pkl'

import generate
z = generate.load_all()
generate.story(z, './images/ex1.jpg')

What should I do?

Can neural-storyteller work without a GPU?

After installation:

In [1]: import generate
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-e2356e8a9459> in <module>()
----> 1 import generate

/home/enza/erikhasblueeyes/neural-storyteller/generate.py in <module>()
     14 import lasagne
     15 from lasagne.layers import InputLayer, DenseLayer, NonlinearityLayer, DropoutLayer
---> 16 from lasagne.layers.corrmm import Conv2DMMLayer as ConvLayer
     17 from lasagne.layers import MaxPool2DLayer as PoolLayer
     18 from lasagne.nonlinearities import softmax

/home/enza/.local/lib/python2.7/site-packages/lasagne/layers/corrmm.py in <module>()
     20 
     21 if not theano.config.device.startswith("gpu"):
---> 22     raise ImportError("requires a GPU to work")  # pragma: no cover
     23 
     24 

ImportError: requires a GPU to work

Is it possible to use this without a GPU?

Project License

Amazing work.

Could not find a license. Would you be so kind as to specify one?

Thanks.

"Killed" error while loading skip-thoughts in CPU only mode

While running this, I get the following error:

storyteller/romance.npz
Loading skip-thoughts...

Message from syslogd@pine64 at Nov 22 04:35:08 ...
 kernel:[10516.914822] Call trace:
Killed

Has anyone encountered this or know of a possible fix? I am running on CPU only with 2 GB of memory, and I have Caffe and all requirements installed.

Models adjust in config.py

In this repo and in the AI_Writer source, both config.py files have

# Skip-thoughts
paths['skmodels'] = '/u/rkiros/public_html/models/'
paths['sktables'] = '/u/rkiros/public_html/models/'

I've adjusted all the other entries in the file, as well as in skipthoughts.py.

But here I don't know what type of file or model to refer to, or where to download these.

I think if I have these I finally will be able to make it work.

ADDENDA

When I run the script example, this is the error received.

<ipython-input-8-5af402f414c9> in <module>()
      1 import generate
----> 2 z = generate.load_all()
      3 generate.story(z, './images/ex1.jpg')

generate.pyc in load_all()
     84     print 'Loading skip-thoughts...'
     85     stv = skipthoughts.load_model(config.paths['skmodels'],
---> 86                                   config.paths['sktables'])
     87 
     88     # Decoder

/home/quinten/Documents/AI_Writer/skipthoughts.py in load_model(path_to_models, path_to_tables)
     27 
     28     # Load model options
---> 29     with open('%s.pkl'%path_to_umodel, 'rb') as f:
     30         uoptions = pkl.load(f)
     31     with open('%s.pkl'%path_to_bmodel, 'rb') as f:

IOError: [Errno 2] No such file or directory: '/u/rkiros/public_html/models//home/quinten/Documents/AI_Writer/data/uni_skip.npz.pkl'

And this is the skipthoughts.py

def load_model(path_to_models, path_to_tables):
    """
    Load the model with saved tables
    """
    path_to_umodel =  '/home/quinten/Documents/AI_Writer/data/uni_skip.npz.pkl'
    path_to_bmodel =  '/home/quinten/Documents/AI_Writer/data/bi_skip.npz.pkl'

    # Load model options
    with open('%s.pkl'%path_to_umodel, 'rb') as f:
        uoptions = pkl.load(f)
    with open('%s.pkl'%path_to_bmodel, 'rb') as f:
        boptions = pkl.load(f)

    # Load parameters
    uparams = init_params(uoptions)
    uparams = load_params(path_to_umodel, uparams)
    utparams = init_tparams(uparams)
    bparams = init_params_bi(boptions)
    bparams = load_params(path_to_bmodel, bparams)
    btparams = init_tparams(bparams)

    # Extractor functions
    embedding, x_mask, ctxw2v = build_encoder(utparams, uoptions)
    f_w2v = theano.function([embedding, x_mask], ctxw2v, name='f_w2v')
    embedding, x_mask, ctxw2v = build_encoder_bi(btparams, boptions)
    f_w2v2 = theano.function([embedding, x_mask], ctxw2v, name='f_w2v2')

    # Tables
    utable, btable = load_tables(path_to_tables)

    # Store everything we need in a dictionary
    model = {}
    model['uoptions'] = uoptions
    model['boptions'] = boptions
    model['utable'] = utable
    model['btable'] = btable
    model['f_w2v'] = f_w2v
    model['f_w2v2'] = f_w2v2

    return model

def load_tables(path_to_tables):
    """
    Load the tables
    """
    words = []
    utable = numpy.load(path_to_tables + '/home/quinten/Documents/AI_Writer/data/utable.npy')
    btable = numpy.load(path_to_tables + '/home/quinten/Documents/AI_Writer/data/btable.npy')
    f = open(path_to_tables + '/home/quinten/Documents/AI_Writer/data/dictionary.txt', 'rb')
    for line in f:
        words.append(line.decode('utf-8').strip())
    f.close()
    utable = OrderedDict(zip(words, utable))
    btable = OrderedDict(zip(words, btable))
    return utable, btable

And this is the config.py


# Skip-thoughts
paths['skmodels'] = '/home/quinten/Documents/AI_Writer/data/uni_skip.npz.pkl'
paths['sktables'] = '/home/quinten/Documents/AI_Writer/data/bi_skip.npz.pkl'

# Decoder
paths['decmodel'] = '/home/quinten/Documents/AI_Writer/data/romance.npz'
paths['dictionary'] = '/home/quinten/Documents/AI_Writer/data/romance_dictionary.pkl'

# Image-sentence embedding
paths['vsemodel'] = '/home/quinten/Documents/AI_Writer/data/coco_embedding.npz'

# VGG-19 convnet
paths['vgg'] = '/home/quinten/Documents/AI_Writer/data/vgg19.pkl'
paths['pycaffe'] = '/u/yukun/Projects/caffe-run/python' => I also don't know what has to come here
paths['vgg_proto_caffe'] = '/home/quinten/Documents/AI_Writer/data/VGG_ILSVRC_19_layers_deploy.prototxt'
paths['vgg_model_caffe'] = '/home/quinten/Documents/AI_Writer/data/VGG_ILSVRC_19_layers.caffemodel'


# COCO training captions
paths['captions'] = '/home/quinten/Documents/AI_Writer/data/coco_train_caps.txt'

# Biases
paths['negbias'] = '/home/quinten/Documents/AI_Writer/data/caption_style.npy'
paths['posbias'] = '/home/quinten/Documents/AI_Writer/data/romance_style.npy'

I'm seeing an old entry and the one I put in.
I've deleted the .pyc files and retried; still the same output.

Unable to detect GPU

I'm having an issue with neural-storyteller or Lasagne seeing my GPU. When I try to import generate, I get the following error:

import generate
Traceback (most recent call last):
File "", line 1, in
File "generate.py", line 22, in
from lasagne.layers.corrmm import Conv2DMMLayer as ConvLayer
File "/home/julius/anaconda2/lib/python2.7/site-packages/lasagne/layers/corrmm.py", line 22, in
"requires GPU support -- see http://lasagne.readthedocs.org/en/"
ImportError: requires GPU support -- see http://lasagne.readthedocs.org/en/latest/user/installation.html#gpu-support

Naturally, I thought there was an issue between Lasagne and my GPU, but when I run
python -c "from theano.sandbox.cuda.dnn import dnn_available as d; print(d() or d.msg)"

I get the following:

Using gpu device 0: GeForce GTX 960 (CNMeM is disabled)
True

I got this both when I ran lasagne version 0.2.dev1 with theano 0.8.0 and lasagne version 0.1 with theano 0.7.0. Any idea what is going wrong and how to fix it?

How to create new posbias for custom encoder/decoder?

I used the skip-thoughts project to create new encoder and decoders for some custom corpus data. I was able to run it through neural-storyteller, and output was generated. (Quality was low, but it did function.)

Does anyone know about how I would create a new "posbias" for my custom data?

I see where it's set in the config.py file:
paths['negbias'] = './caption_style.npy'
paths['posbias'] = './romance_style.npy'

The ReadMe gives this description of what the posbias represents:
We set b to be the mean of the skip-thought vectors for romance novel passages that are of length > 100.

That's all I've found so far. Any suggestions on what to try next or where to look would be great. Thank you.

No module named theano

I'm giving this project a third try. I even installed Ubuntu on my iMac for this.

After installing all dependencies and adjusting the config.py file (except for two lines I don't know what to put in):

# Skip-thoughts
paths['skmodels'] = '/u/rkiros/public_html/models/'
paths['sktables'] = '/u/rkiros/public_html/models/'

But that's not the problem for now: I get an import theano error, even though I've installed it via conda.

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-e2356e8a9459> in <module>()
----> 1 import generate

/home/quinten/Documents/neuralstory/neural-storyteller-master/generate.py in <module>()
      8 import skimage.transform
      9 
---> 10 import skipthoughts
     11 import decoder
     12 import embedding

/home/quinten/Documents/neuralstory/neural-storyteller-master/skipthoughts.py in <module>()
      4 import os
      5 
----> 6 import theano
      7 import theano.tensor as tensor
      8 

ImportError: No module named theano

conda list theano gives: theano 0.9.0 py27_0

Any help would be appreciated!

Gpu Support

What are the changes I need to make to run on GPU? Please explain in detail.

NameError when running: z = generate.load_all()

I was following the instructions under the header of "Generating a Story" and I received the following error message when I ran generate.load_all():

NameError: ('The following error happened while compiling the node', forall_inplace,cpu,encoder__layers}(Elemwise{Composite{minimum(((i0 + i1) - i1), i2)}}.0, InplaceDimShuffle{0,1,x}.0, Elemwise{sub,no_inplace}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, Alloc.0, encoder_U, encoder_Ux, ScalarFromTensor.0, ScalarFromTensor.0), '\n', "name 'get_version' is not defined")

I think I have all the dependencies installed, as this error does not seem to be one of those regarding dependencies. Does anyone have a hint at what's wrong? Any ideas are appreciated!

load_model() takes 0 positional arguments but 2 were given anyone help please

import generate
...: z = generate.load_all()
...: generate.story(z, './images/ex1.jpg')
...:
/ais/gobi3/u/rkiros/storyteller/romance.npz
Loading skip-thoughts...

TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
      1 import generate
----> 2 z = generate.load_all()
      3 generate.story(z, './images/ex1.jpg')

~/neural-storyteller/generate.py in load_all()
     84     # Skip-thoughts
     85     print ('Loading skip-thoughts...')
---> 86     stv = skipthoughts.load_model(config.paths['skmodels'],
     87                                   config.paths['sktables'])
     88

TypeError: load_model() takes 0 positional arguments but 2 were given
Can anyone help, please?

Error in unzipping npz files

Code :
import generate
z = generate.load_all()
generate.story(z, './6736732223.jpg')

Output Error:
BadZipFile: File is not a zip file


/content/drive/My Drive/ImgCap/romance.npz
Loading skip-thoughts...

BadZipFile Traceback (most recent call last)
<ipython-input-...> in <module>()
1 import generate
----> 2 z = generate.load_all()
3 generate.story(z, './6736732223.jpg')

7 frames
/content/drive/My Drive/ImgCap/generate.py in load_all()
84 print('Loading skip-thoughts...')
85 stv = skipthoughts.load_model(config.paths['skmodels'],
---> 86 config.paths['sktables'])
87
88 # Decoder

/content/drive/My Drive/ImgCap/skipthoughts.py in load_model(path_to_models, path_to_tables)
34 # Load parameters
35 uparams = init_params(uoptions)
---> 36 uparams = load_params(path_to_umodel, uparams)
37 utparams = init_tparams(uparams)
38 bparams = init_params_bi(boptions)

/content/drive/My Drive/ImgCap/skipthoughts.py in load_params(path, params)
172 load parameters
173 """
--> 174 pp = numpy.load(path)
175 for kk, vv in params.iteritems():
176 if kk not in pp:

/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
442 # Transfer file ownership to NpzFile
443 ret = NpzFile(fid, own_fid=own_fid, allow_pickle=allow_pickle,
--> 444 pickle_kwargs=pickle_kwargs)
445 own_fid = False
446 return ret

/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in __init__(self, fid, own_fid, allow_pickle, pickle_kwargs)
191 # Import is postponed to here since zipfile depends on gzip, an
192 # optional component of the so-called standard library.
--> 193 _zip = zipfile_factory(fid)
194 self._files = _zip.namelist()
195 self.files = []

/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in zipfile_factory(file, *args, **kwargs)
117 import zipfile
118 kwargs['allowZip64'] = True
--> 119 return zipfile.ZipFile(file, *args, **kwargs)
120
121

/usr/lib/python3.6/zipfile.py in __init__(self, file, mode, compression, allowZip64)
1129 try:
1130 if mode == 'r':
-> 1131 self._RealGetContents()
1132 elif mode in ('w', 'x'):
1133 # set the modified flag so central directory gets written

/usr/lib/python3.6/zipfile.py in _RealGetContents(self)
1196 raise BadZipFile("File is not a zip file")
1197 if not endrec:
-> 1198 raise BadZipFile("File is not a zip file")
1199 if self.debug > 1:
1200 print(endrec)

BadZipFile: File is not a zip file

How can we solve this?

config.py


Hello, I downloaded neural_storyteller.zip and vgg19.pkl, and I installed skipthoughts:

wget http://www.cs.toronto.edu/~rkiros/models/dictionary.txt
wget http://www.cs.toronto.edu/~rkiros/models/utable.npy
wget http://www.cs.toronto.edu/~rkiros/models/btable.npy
wget http://www.cs.toronto.edu/~rkiros/models/uni_skip.npz
wget http://www.cs.toronto.edu/~rkiros/models/uni_skip.npz.pkl
wget http://www.cs.toronto.edu/~rkiros/models/bi_skip.npz
wget http://www.cs.toronto.edu/~rkiros/models/bi_skip.npz.pkl

but I can't find 'decmodel, vsemodel, pycaffe, vgg_proto_caffe, captions, neg/posbias'....

What should I do?
Do you have any bash file for the install? Please upload it to GitHub.
Please write the installation process accurately. Thank you!

Embedding captions...

Embedding the captions is too slow: almost 10 minutes. Please help!
Specification: Intel Core i5, 16 GB RAM

Generating biases

There are two files used for generation referenced in config.py:
caption_style.npy
romance_style.npy

these style vectors are used by generate.py
bneg = numpy.load(config.paths['negbias'])
bpos = numpy.load(config.paths['posbias'])

While supposedly you can create your own decoder using the skip-thoughts code, there does not seem to be a step where the style vectors are generated. If you have trained on your own corpus, and have a dictionary and model, how do you then generate the style vectors and biases?
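Not an official answer, but per the README the style vectors are just corpus means of skip-thought vectors, so a sketch like the following should produce one. It assumes the standalone skip-thoughts repo (whose load_model() takes no arguments) and a hypothetical my_corpus.txt with one passage per line:

import numpy
import skipthoughts

# Sketch, not the authors' exact recipe: a "posbias" is the mean
# skip-thought vector over (long) passages from the target corpus.
model = skipthoughts.load_model()
passages = [line.strip() for line in open('my_corpus.txt') if len(line.strip()) > 100]
vectors = skipthoughts.encode(model, passages)
numpy.save('custom_style.npy', vectors.mean(axis=0))

Then point paths['posbias'] in config.py at the saved file.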

Gallery with more stories in README

I think just one picture and story is not enough. There should be more samples showing how the mood of the picture affects the mood of the text passage.

cannot allocate memory error,and i do not know how much RAM it needs

This is fun work, but it has been terrible for me and others: I used 10.5 GB of RAM and ran in CPU mode, but it still failed at "Loading decoder..." after loading skip-thoughts.
So, please tell us its hardware requirements; we do not know how much RAM it needs.

And, is there a better one with a smaller RAM requirement?

Unexpected keyword 'preserve_range'

Hi, I'm getting this error when running on a GPU on AWS:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    generate.story(z, './images/ex1.jpg')
  File "/home/ubuntu/neural-storyteller/generate.py", line 39, in story
    rawim, im = load_image(image_loc)
  File "/home/ubuntu/neural-storyteller/generate.py", line 153, in load_image
    im = skimage.transform.resize(im, (256, w*256/h), preserve_range=True)
TypeError: resize() got an unexpected keyword argument 'preserve_range'
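The cleanest fix is probably upgrading scikit-image to a release whose resize() accepts preserve_range. As a stopgap, a sketch of a roughly equivalent call for older versions (assuming a uint8 input image, since without preserve_range resize() rescales to [0, 1] floats):

im = skimage.transform.resize(im, (256, w*256/h)) * 255.0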

How to modify bias to generate longer stories?

Hello,

I'm trying to figure out which values to change to generate longer stories (more than a single paragraph), as per the Readme.md description below:

What kind of biases work?

Skip-thought vectors are sensitive to:

length (if you bias by really long passages, it will decode really long stories)

Everything works but the output limitation is frustrating. Any suggestions are much appreciated.

Thanks,

Fine tuning COCO

Is the classification performed through VGG-19 and then parsed through COCO, or is it 100% COCO?
I'm running into classification issues and would love to fine-tune my results; I'm just not sure which classifier I should be fine-tuning, or where I would modify the code to incorporate the new training data.

AttributeError: 'collections.OrderedDict' object has no attribute 'iteritems'

Hi, I am running into a dead end with the storyteller already at one of the first steps. After having downloaded all the necessary files and set up config.py, even the first

import generate

gives the following error:

ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: libcublas.so.8.0: cannot open shared object file: No such file or directory

but it is still able to proceed (or at least that is how I understand it)

and then

z = generate.load_all()

runs into this error (pasting only the end)

Loading skip-thoughts...

AttributeError Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 z = generate.load_all()

/home/zoza/neural-storyteller/generate.py in load_all()
84 print('Loading skip-thoughts...')
85 stv = skipthoughts.load_model(config.paths['skmodels'],
---> 86 config.paths['sktables'])
87
88 # Decoder

/home/zoza/neural-storyteller/skipthoughts.py in load_model(path_to_models, path_to_tables)
34 # Load parameters
35 uparams = init_params(uoptions)
---> 36 uparams = load_params(path_to_umodel, uparams)
37 utparams = init_tparams(uparams)
38 bparams = init_params_bi(boptions)

/home/zoza/neural-storyteller/skipthoughts.py in load_params(path, params)
173 """
174 pp = numpy.load(path)
--> 175 for kk, vv in params.iteritems():
176 if kk not in pp:
177 warnings.warn('%s is not in the archive'%kk)

AttributeError: 'collections.OrderedDict' object has no attribute 'iteritems'

I am running Python 3.5.2 |Anaconda custom (64-bit)|, and I have adapted the code (print statements with parentheses, cPickle module changed to pickle) to be able to run it on version 3.

Any help on how to fix this is appreciated!
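For the final AttributeError: dict.iteritems() was removed in Python 3. A minimal sketch of a ported load_params (my own adaptation of the function shown in the traceback, not upstream code):

import warnings
import numpy

def load_params(path, params):
    # Python 3 port: iteritems() -> items()
    pp = numpy.load(path)
    for kk, vv in params.items():
        if kk not in pp:
            warnings.warn('%s is not in the archive' % kk)
            continue
        params[kk] = pp[kk]
    return params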

ImportError: cannot import name hough_ellipse & ValueError: numpy.dtype has the wrong size, try recompiling??

envy@ub1404:/media/envy/data1t/os_prj/github/neural-storyteller$ pip show numpy

Name: numpy
Version: 1.8.2
Location: /usr/lib/python2.7/dist-packages
Requires:

envy@ub1404:/media/envy/data1t/os_prj/github/neural-storyteller$ pip show scipy

Name: scipy
Version: 0.17.0
Location: /usr/local/lib/python2.7/dist-packages
Requires:
envy@ub1404:/media/envy/data1t/os_prj/github/neural-storyteller$

envy@ub1404:/media/envy/data1t/os_prj/github/neural-storyteller$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import generate
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "generate.py", line 8, in <module>
    import skimage.transform
  File "/home/envy/.local/lib/python2.7/site-packages/skimage/transform/__init__.py", line 1, in <module>
    from ._hough_transform import (hough_ellipse, hough_line,
  File "__init__.pxd", line 155, in init skimage.transform._hough_transform (skimage/transform/_hough_transform.c:22288)
ValueError: numpy.dtype has the wrong size, try recompiling

import generate
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "generate.py", line 8, in <module>
    import skimage.transform
  File "/home/envy/.local/lib/python2.7/site-packages/skimage/transform/__init__.py", line 1, in <module>
    from ._hough_transform import (hough_ellipse, hough_line,
ImportError: cannot import name hough_ellipse


No module named 'generate'

After so many tries I keep on getting this error. Is there somebody who can help me fix this...

iMacvanQuinten:AI_Writer2 quintendewilde$ ipython
Python 3.6.0 |Anaconda custom (x86_64)| (default, Dec 23 2016, 13:19:00) 
Type 'copyright', 'credits' or 'license' for more information
IPython 6.0.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import generate
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-e2356e8a9459> in <module>()
----> 1 import generate

ModuleNotFoundError: No module named 'generate'
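For context (not project-specific): this usually just means IPython was not started from the repository directory, so generate.py is not on sys.path. A sketch, with a hypothetical path:

import sys
sys.path.insert(0, '/path/to/neural-storyteller')  # hypothetical location of the cloned repo
import generate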

Killed (also skip-thoughts)

Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import generate
z = generate.load_all()
/home/OOOO/story/files/romance.npz
Loading skip-thoughts...
Killed

Loading skip-thoughts always ends with 'Killed'.
When I execute the skipthoughts example command 'model = skipthoughts.load_model()', that command gets killed too.

PLEASE HELP.....

Skipthought.py

I am getting this error. Can anyone help me with this?
File "/root/anaconda3/lib/python3.5/site-packages/numpy/lib/format.py", line 640, in read_array
    array = pickle.load(fp, **pickle_kwargs)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xb2 in position 1: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/neural-storyteller/skip-thoughts/skipthoughts.py", line 59, in load_model
    utable, btable = load_tables()
  File "/root/neural-storyteller/skip-thoughts/skipthoughts.py", line 79, in load_tables
    utable = numpy.load(path_to_tables + 'utable.npy')
  File "/root/anaconda3/lib/python3.5/site-packages/numpy/lib/npyio.py", line 419, in load
    pickle_kwargs=pickle_kwargs)
  File "/root/anaconda3/lib/python3.5/site-packages/numpy/lib/format.py", line 646, in read_array
    "to numpy.load" % (err,))
UnicodeError: Unpickling a python object failed: UnicodeDecodeError('ascii', b'z\xb2J=...', 1, 2, 'ordinal not in range(128)')
You may need to pass the encoding= option to numpy.load
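As the error message itself suggests, on Python 3 the Python 2 pickles inside utable.npy need an explicit encoding. A sketch (allow_pickle is also required on newer NumPy; substitute your own tables path):

import numpy
utable = numpy.load('utable.npy', encoding='latin1', allow_pickle=True)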

CPU error

I changed line 16 in generate.py to
from lasagne.layers import Conv2DLayer as ConvLayer

But then this error happens:

storyteller/romance.npz
Loading skip-thoughts...
/usr/local/lib/python2.7/site-packages/theano/scan_module/scan.py:1017: Warning: In the strict mode, all neccessary shared variables must be passed as a part of non_sequences
'must be passed as a part of non_sequences', Warning)
Loading decoder...
Loading image-sentence embedding...
Loading and initializing ConvNet...
Loading parameters...
Loading captions...
Embedding captions...
Loading biases...
Traceback (most recent call last):
  File "index.py", line 3, in <module>
    generate.story(z, './images/ex1.jpg')
  File "/Users/jonathan/Desktop/Story/generate.py", line 41, in story
    feats = compute_features(z['net'], im).flatten()
  File "/Users/jonathan/Desktop/Story/generate.py", line 164, in compute_features
    fc7 = numpy.array(lasagne.layers.get_output(net['fc7'], im, deterministic=True).eval())
  File "/usr/local/lib/python2.7/site-packages/lasagne/layers/helper.py", line 185, in get_output
    all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/lasagne/layers/pool.py", line 240, in get_output_for
    mode=self.mode,
TypeError: max_pool_2d() got an unexpected keyword argument 'mode'

Is there any good solution?

MemoryError

I seem to be getting a MemoryError when trying to generate a story, and I'm not quite sure how to fix it.

Traceback (most recent call last):
  File "D:/Users/teknogeek/Documents/Code/NeuralStoryteller/test.py", line 3, in <module>
    z = generate.load_all()
  File "D:\Users\teknogeek\Documents\Code\NeuralStoryteller\generate.py", line 105, in load_all
    stv = skipthoughts.load_model(path_to_skmodels, path_to_sktables)
  File "D:\Users\teknogeek\Documents\Code\NeuralStoryteller\skipthoughts.py", line 63, in load_model
    utable, btable = load_tables(path_to_tables)
  File "D:\Users\teknogeek\Documents\Code\NeuralStoryteller\skipthoughts.py", line 81, in load_tables
    utable = numpy.load(path_to_tables + 'utable.npy')
  File "D:\Python27\lib\site-packages\numpy\lib\npyio.py", line 406, in load
    pickle_kwargs=pickle_kwargs)
  File "D:\Python27\lib\site-packages\numpy\lib\format.py", line 638, in read_array
    array = pickle.load(fp, **pickle_kwargs)
MemoryError

The code that I am using to get this is:

import generate, os

z = generate.load_all()
generate.story(z, os.path.join("images", "ex1.jpg"))

Any help?

Is the inference time for this extremely high?

I tested this on a random image with this code:
z = generate.load_all()
s = generate.story(z, args.input)
print(s)

I'm running with 2 NVIDIA K80 cards and a Xeon with 32 cores, and it takes 70 minutes to process one image. It seems the majority of the time is spent in load_all. Is this normal? What kind of inference performance are others getting?
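Partly expected, I believe: load_all() reads multi-gigabyte skip-thought tables and compiles Theano functions, so it dominates the runtime, though 70 minutes still sounds high. A sketch that pays that cost once per process and reuses z across images (filenames are placeholders):

import generate

z = generate.load_all()            # slow: do this once
for image in ['a.jpg', 'b.jpg']:   # placeholder filenames
    generate.story(z, image)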

Which skipthought.py values should I change to write longer stories?

Hi,

Thanks for this great project. I have everything working fine and I would like to write longer stories.
I've changed the number of output captions from 5 to 25, which works, but the number of story lines remains the same at 5.

You mention that skip-thought vectors are sensitive to length (if you bias by really long passages, it will decode really long stories).
Are there values I can change in 'skipthoughts.py' to increase the number of output story lines?

Cheers,

I got a IOError: [Errno 2] No such file or directory: '/ais/gobi3/u/rkiros/storyteller/romance_dictionary.pkl'

I got an IOError: [Errno 2] No such file or directory: '/ais/gobi3/u/rkiros/storyteller/romance_dictionary.pkl'.
I know it's because this dataset or model file is not provided. Where should I download this file, along with these other files:
[
romance.npz,
romance_dictionary.pkl,
coco_embedding.npz,
coco_train_caps.txt,
caption_style.npy,
romance_style.npy
].
Help me if you can. Thank you very much.

How much time did "import generate" take?

I am new to this area. I installed all the dependencies and am trying to generate some stories for my images, using an NVIDIA GTX 960M. I tried several times to import generate, but it took so long that I stopped the kernel every time. Could anyone tell me how patient I should be, or whether there is anything I am missing?

Generator.py transform.resize error

Hi, on running "generate.story(z, r'.\images\ex1.jpg')", I got a huge error, as below. I believe it is due to GPU memory allocation or something similar, but I am unable to pinpoint what is wrong with it. Can someone help me with this error?

generate.story(z, r'.\images\ex1.jpg')
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\lasagne\layers\conv.py:489: UserWarning: The image_shape keyword argument to tensor.nnet.conv2d is deprecated, it has been renamed to input_shape.
  border_mode=border_mode)
[The rest of the paste is Theano's generated cuDNN convolution C source (several hundred numbered lines covering tensor/filter descriptor setup, convolution algorithm selection, and the cudnnConvolutionForward call); the dump is truncated before any actual error message appears.]
696 #undef ITEMSIZE_OUTPUT_0
697 #undef APPLY_SPECIFIC
698 #undef CONV_ALGO
699 #undef CHOOSE_ALGO
700 #undef CHOOSE_ALGO_ONCE
701 #undef CHOOSE_ALGO_TIME
702 #undef CONV_INPLACE
703 double __DUMMY_16;
704
705 Py_XDECREF(this->storage_V3);
706 Py_XDECREF(this->storage_V5);
707 Py_XDECREF(this->storage_V7);
708 Py_XDECREF(this->storage_V9);
709 Py_XDECREF(this->storage_V11);
710 Py_XDECREF(this->storage_V13);
711 Py_XDECREF(this->storage_V1);
712 }
713 int run(void) {
714 int __failure = 0;
715
716     PyObject* py_V1;
717 CudaNdarray * V1;
718     PyObject* py_V3;
719 CudaNdarray * V3;
720     PyObject* py_V5;
721 CudaNdarray * V5;
722     PyObject* py_V7;
723 CudaNdarray * V7;
724     PyObject* py_V9;
725
726 cudnnConvolutionDescriptor_t V9;
727
728     PyObject* py_V11;
729
730 typedef npy_float32 dtype_V11;
731
732 npy_float32 V11;
733
734     PyObject* py_V13;
735
736 typedef npy_float32 dtype_V13;
737
738 npy_float32 V13;
739
740 {
741
742 py_V1 = PyList_GET_ITEM(storage_V1, 0);
743 {Py_XINCREF(py_V1);}
744
745 if (py_V1 == Py_None)
746 {
747 V1 = NULL;
748 }
749 else
750 {
751
752 assert(py_V1->ob_refcnt >= 2); // There should be at least one ref from the container object,
753 // and one ref from the local scope.
754
755 if (CudaNdarray_Check(py_V1))
756 {
757 //fprintf(stderr, "c_extract CNDA object w refcnt %p %i\n", py_V1, (py_V1->ob_refcnt));
758         V1 = (CudaNdarray*)py_V1;
759 //std::cerr << "c_extract " << V1 << '\n';
760
761
762 if (V1->nd != 4)
763 {
764 PyErr_Format(PyExc_RuntimeError,
765 "c_extract: Some CudaNdarray has rank %i, it was supposed to have rank 4",
766 V1->nd);
767 V1 = NULL;
768 {
769 __failure = 2;
770 if (!PyErr_Occurred()) {
771 PyErr_SetString(PyExc_RuntimeError,
772 "Unexpected error in an Op's C code. "
773 "No Python exception was set.");
774 }
775 goto __label_2;};
776 }
777 //std::cerr << "c_extract " << V1 << " nd check passed\n";
778
779
780 assert(V1);
781 Py_INCREF(py_V1);
782 }
783 else if (py_V1 == Py_None)
784 {
785 PyErr_SetString(PyExc_TypeError,
786 "expected a CudaNdarray, not None");
787 V1 = NULL;
788 {
789 __failure = 2;
790 if (!PyErr_Occurred()) {
791 PyErr_SetString(PyExc_RuntimeError,
792 "Unexpected error in an Op's C code. "
793 "No Python exception was set.");
794 }
795 goto __label_2;};
796 }
797 else
798 {
799 //fprintf(stderr, "FAILING c_extract CNDA object w refcnt %p %i\n", py_V1, (py_V1->ob_refcnt));
800 PyErr_SetString(PyExc_TypeError, "Argument not a CudaNdarray");
801 V1 = NULL;
802 {
803 __failure = 2;
804 if (!PyErr_Occurred()) {
805 PyErr_SetString(PyExc_RuntimeError,
806 "Unexpected error in an Op's C code. "
807 "No Python exception was set.");
808 }
809 goto __label_2;};
810 }
811 //std::cerr << "c_extract done " << V1 << '\n';
812
813
814 }
815
816 {
817
818 py_V3 = PyList_GET_ITEM(storage_V3, 0);
819 {Py_XINCREF(py_V3);}
820
821 assert(py_V3->ob_refcnt >= 2); // There should be at least one ref from the container object,
822 // and one ref from the local scope.
823
824 if (CudaNdarray_Check(py_V3))
825 {
826 //fprintf(stderr, "c_extract CNDA object w refcnt %p %i\n", py_V3, (py_V3->ob_refcnt));
827         V3 = (CudaNdarray*)py_V3;
828 //std::cerr << "c_extract " << V3 << '\n';
829
830
831 if (V3->nd != 4)
832 {
833 PyErr_Format(PyExc_RuntimeError,
834 "c_extract: Some CudaNdarray has rank %i, it was supposed to have rank 4",
835 V3->nd);
836 V3 = NULL;
837 {
838 __failure = 4;
839 if (!PyErr_Occurred()) {
840 PyErr_SetString(PyExc_RuntimeError,
841 "Unexpected error in an Op's C code. "
842 "No Python exception was set.");
843 }
844 goto __label_4;};
845 }
846 //std::cerr << "c_extract " << V3 << " nd check passed\n";
847
848
849 assert(V3);
850 Py_INCREF(py_V3);
851 }
852 else if (py_V3 == Py_None)
853 {
854 PyErr_SetString(PyExc_TypeError,
855 "expected a CudaNdarray, not None");
856 V3 = NULL;
857 {
858 __failure = 4;
859 if (!PyErr_Occurred()) {
860 PyErr_SetString(PyExc_RuntimeError,
861 "Unexpected error in an Op's C code. "
862 "No Python exception was set.");
863 }
864 goto __label_4;};
865 }
866 else
867 {
868 //fprintf(stderr, "FAILING c_extract CNDA object w refcnt %p %i\n", py_V3, (py_V3->ob_refcnt));
869 PyErr_SetString(PyExc_TypeError, "Argument not a CudaNdarray");
870 V3 = NULL;
871 {
872 __failure = 4;
873 if (!PyErr_Occurred()) {
874 PyErr_SetString(PyExc_RuntimeError,
875 "Unexpected error in an Op's C code. "
876 "No Python exception was set.");
877 }
878 goto __label_4;};
879 }
880 //std::cerr << "c_extract done " << V3 << '\n';
881
882
883 {
884
885 py_V5 = PyList_GET_ITEM(storage_V5, 0);
886 {Py_XINCREF(py_V5);}
887
888 assert(py_V5->ob_refcnt >= 2); // There should be at least one ref from the container object,
889 // and one ref from the local scope.
890
891 if (CudaNdarray_Check(py_V5))
892 {
893 //fprintf(stderr, "c_extract CNDA object w refcnt %p %i\n", py_V5, (py_V5->ob_refcnt));
894         V5 = (CudaNdarray*)py_V5;
895 //std::cerr << "c_extract " << V5 << '\n';
896
897
898 if (V5->nd != 4)
899 {
900 PyErr_Format(PyExc_RuntimeError,
901 "c_extract: Some CudaNdarray has rank %i, it was supposed to have rank 4",
902 V5->nd);
903 V5 = NULL;
904 {
905 __failure = 6;
906 if (!PyErr_Occurred()) {
907 PyErr_SetString(PyExc_RuntimeError,
908 "Unexpected error in an Op's C code. "
909 "No Python exception was set.");
910 }
911 goto __label_6;};
912 }
913 //std::cerr << "c_extract " << V5 << " nd check passed\n";
914
915
916 assert(V5);
917 Py_INCREF(py_V5);
918 }
919 else if (py_V5 == Py_None)
920 {
921 PyErr_SetString(PyExc_TypeError,
922 "expected a CudaNdarray, not None");
923 V5 = NULL;
924 {
925 __failure = 6;
926 if (!PyErr_Occurred()) {
927 PyErr_SetString(PyExc_RuntimeError,
928 "Unexpected error in an Op's C code. "
929 "No Python exception was set.");
930 }
931 goto __label_6;};
932 }
933 else
934 {
935 //fprintf(stderr, "FAILING c_extract CNDA object w refcnt %p %i\n", py_V5, (py_V5->ob_refcnt));
936 PyErr_SetString(PyExc_TypeError, "Argument not a CudaNdarray");
937 V5 = NULL;
938 {
939 __failure = 6;
940 if (!PyErr_Occurred()) {
941 PyErr_SetString(PyExc_RuntimeError,
942 "Unexpected error in an Op's C code. "
943 "No Python exception was set.");
944 }
945 goto __label_6;};
946 }
947 //std::cerr << "c_extract done " << V5 << '\n';
948
949
950 {
951
952 py_V7 = PyList_GET_ITEM(storage_V7, 0);
953 {Py_XINCREF(py_V7);}
954
955 assert(py_V7->ob_refcnt >= 2); // There should be at least one ref from the container object,
956 // and one ref from the local scope.
957
958 if (CudaNdarray_Check(py_V7))
959 {
960 //fprintf(stderr, "c_extract CNDA object w refcnt %p %i\n", py_V7, (py_V7->ob_refcnt));
961         V7 = (CudaNdarray*)py_V7;
962 //std::cerr << "c_extract " << V7 << '\n';
963
964
965 if (V7->nd != 4)
966 {
967 PyErr_Format(PyExc_RuntimeError,
968 "c_extract: Some CudaNdarray has rank %i, it was supposed to have rank 4",
969 V7->nd);
970 V7 = NULL;
971 {
972 __failure = 8;
973 if (!PyErr_Occurred()) {
974 PyErr_SetString(PyExc_RuntimeError,
975 "Unexpected error in an Op's C code. "
976 "No Python exception was set.");
977 }
978 goto __label_8;};
979 }
980 //std::cerr << "c_extract " << V7 << " nd check passed\n";
981
982
983 assert(V7);
984 Py_INCREF(py_V7);
985 }
986 else if (py_V7 == Py_None)
987 {
988 PyErr_SetString(PyExc_TypeError,
989 "expected a CudaNdarray, not None");
990 V7 = NULL;
991 {
992 __failure = 8;
993 if (!PyErr_Occurred()) {
994 PyErr_SetString(PyExc_RuntimeError,
995 "Unexpected error in an Op's C code. "
996 "No Python exception was set.");
997 }
998 goto __label_8;};
999 }
1000 else
1001 {
1002 //fprintf(stderr, "FAILING c_extract CNDA object w refcnt %p %i\n", py_V7, (py_V7->ob_refcnt));
1003 PyErr_SetString(PyExc_TypeError, "Argument not a CudaNdarray");
1004 V7 = NULL;
1005 {
1006 __failure = 8;
1007 if (!PyErr_Occurred()) {
1008 PyErr_SetString(PyExc_RuntimeError,
1009 "Unexpected error in an Op's C code. "
1010 "No Python exception was set.");
1011 }
1012 goto __label_8;};
1013 }
1014 //std::cerr << "c_extract done " << V7 << '\n';
1015
1016
1017 {
1018
1019 py_V9 = PyList_GET_ITEM(storage_V9, 0);
1020 {Py_XINCREF(py_V9);}
1021
1022 V9 = (cudnnConvolutionDescriptor_t)PyCapsule_GetPointer(py_V9, NULL);
1023 if (V9 == NULL) {
1024 __failure = 10;
1025 if (!PyErr_Occurred()) {
1026 PyErr_SetString(PyExc_RuntimeError,
1027 "Unexpected error in an Op's C code. "
1028 "No Python exception was set.");
1029 }
1030 goto __label_10;}
1031
1032 {
1033
1034 py_V11 = PyList_GET_ITEM(storage_V11, 0);
1035 {Py_XINCREF(py_V11);}
1036
1037 if (!PyObject_TypeCheck(py_V11, &PyFloat32ArrType_Type))
1038 {
1039 PyErr_Format(PyExc_ValueError,
1040 "Scalar check failed (npy_float32)");
1041 {
1042 __failure = 12;
1043 if (!PyErr_Occurred()) {
1044 PyErr_SetString(PyExc_RuntimeError,
1045 "Unexpected error in an Op's C code. "
1046 "No Python exception was set.");
1047 }
1048 goto __label_12;}
1049 }
1050
1051 PyArray_ScalarAsCtype(py_V11, &V11);
1052
1053 {
1054
1055 py_V13 = PyList_GET_ITEM(storage_V13, 0);
1056 {Py_XINCREF(py_V13);}
1057
1058 if (!PyObject_TypeCheck(py_V13, &PyFloat32ArrType_Type))
1059 {
1060 PyErr_Format(PyExc_ValueError,
1061 "Scalar check failed (npy_float32)");
1062 {
1063 __failure = 14;
1064 if (!PyErr_Occurred()) {
1065 PyErr_SetString(PyExc_RuntimeError,
1066 "Unexpected error in an Op's C code. "
1067 "No Python exception was set.");
1068 }
1069 goto __label_14;}
1070 }
1071
1072 PyArray_ScalarAsCtype(py_V13, &V13);
1073
1074 {
1075 // Op class GpuDnnConv
1076
1077 #define APPLY_SPECIFIC(str) str##_node_md48cd7c806151b0105e1fa2b573cc03b_0
1078 #define CONV_ALGO CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM
1079 #define CHOOSE_ALGO 0
1080 #define CHOOSE_ALGO_ONCE 0
1081 #define CHOOSE_ALGO_TIME 0
1082 #define CONV_INPLACE 1
1083 {
1084 if (APPLY_SPECIFIC(conv_fwd)(V3, V5, V7, V9, V11, V13, &V1) != 0) {
1085 {
1086 __failure = 15;
1087 if (!PyErr_Occurred()) {
1088 PyErr_SetString(PyExc_RuntimeError,
1089 "Unexpected error in an Op's C code. "
1090 "No Python exception was set.");
1091 }
1092 goto __label_15;}
1093 }
1094 }
1095 #undef APPLY_SPECIFIC
1096 #undef CONV_ALGO
1097 #undef CHOOSE_ALGO
1098 #undef CHOOSE_ALGO_ONCE
1099 #undef CHOOSE_ALGO_TIME
1100 #undef CONV_INPLACE
1101 __label_15:
1102
1103 double __DUMMY_15;
1104
1105 }
1106 __label_14:
1107
1108 {Py_XDECREF(py_V13);}
1109
1110 double __DUMMY_14;
1111
1112 }
1113 __label_12:
1114
1115 {Py_XDECREF(py_V11);}
1116
1117 double __DUMMY_12;
1118
1119 }
1120 __label_10:
1121
1122 {Py_XDECREF(py_V9);}
1123
1124 double __DUMMY_10;
1125
1126 }
1127 __label_8:
1128
1129 //std::cerr << "cleanup " << py_V7 << " " << V7 << "\n";
1130 //fprintf(stderr, "c_cleanup CNDA py_object w refcnt %p %i\n", py_V7, (py_V7->ob_refcnt));
1131 if (V7)
1132 {
1133 //fprintf(stderr, "c_cleanup CNDA cn_object w refcnt %p %i\n", V7, (V7->ob_refcnt));
1134 Py_XDECREF(V7);
1135 }
1136 //std::cerr << "cleanup done" << py_V7 << "\n";
1137
1138 {Py_XDECREF(py_V7);}
1139
1140 double __DUMMY_8;
1141
1142 }
1143 __label_6:
1144
1145 //std::cerr << "cleanup " << py_V5 << " " << V5 << "\n";
1146 //fprintf(stderr, "c_cleanup CNDA py_object w refcnt %p %i\n", py_V5, (py_V5->ob_refcnt));
1147 if (V5)
1148 {
1149 //fprintf(stderr, "c_cleanup CNDA cn_object w refcnt %p %i\n", V5, (V5->ob_refcnt));
1150 Py_XDECREF(V5);
1151 }
1152 //std::cerr << "cleanup done" << py_V5 << "\n";
1153
1154 {Py_XDECREF(py_V5);}
1155
1156 double __DUMMY_6;
1157
1158 }
1159 __label_4:
1160
1161 //std::cerr << "cleanup " << py_V3 << " " << V3 << "\n";
1162 //fprintf(stderr, "c_cleanup CNDA py_object w refcnt %p %i\n", py_V3, (py_V3->ob_refcnt));
1163 if (V3)
1164 {
1165 //fprintf(stderr, "c_cleanup CNDA cn_object w refcnt %p %i\n", V3, (V3->ob_refcnt));
1166 Py_XDECREF(V3);
1167 }
1168 //std::cerr << "cleanup done" << py_V3 << "\n";
1169
1170 {Py_XDECREF(py_V3);}
1171
1172 double __DUMMY_4;
1173
1174 }
1175 __label_2:
1176
1177 if (!__failure) {
1178
1179 //std::cerr << "sync\n";
1180 if (NULL == V1) {
1181 // failure: sync None to storage
1182 Py_XDECREF(py_V1);
1183 py_V1 = Py_None;
1184 Py_INCREF(py_V1);
1185 }
1186 else
1187 {
1188 if (py_V1 != (PyObject*)V1)
1189 {
1190 Py_XDECREF(py_V1);
1191 py_V1 = (PyObject*)V1;
1192 Py_INCREF(py_V1);
1193 }
1194 assert(py_V1->ob_refcnt);
1195 }
1196
1197 PyObject* old = PyList_GET_ITEM(storage_V1, 0);
1198 {Py_XINCREF(py_V1);}
1199 PyList_SET_ITEM(storage_V1, 0, py_V1);
1200 {Py_XDECREF(old);}
1201 }
1202
1203 //std::cerr << "cleanup " << py_V1 << " " << V1 << "\n";
1204 //fprintf(stderr, "c_cleanup CNDA py_object w refcnt %p %i\n", py_V1, (py_V1->ob_refcnt));
1205 if (V1)
1206 {
1207 //fprintf(stderr, "c_cleanup CNDA cn_object w refcnt %p %i\n", V1, (V1->ob_refcnt));
1208 Py_XDECREF(V1);
1209 }
1210 //std::cerr << "cleanup done" << py_V1 << "\n";
1211
1212 {Py_XDECREF(py_V1);}
1213
1214 double __DUMMY_2;
1215
1216 }
1217
1218
1219 if (__failure) {
1220 // When there is a failure, this code puts the exception
1221 // in __ERROR.
1222 PyObject* err_type = NULL;
1223 PyObject* err_msg = NULL;
1224 PyObject* err_traceback = NULL;
1225 PyErr_Fetch(&err_type, &err_msg, &err_traceback);
1226 if (!err_type) {err_type = Py_None;Py_INCREF(Py_None);}
1227 if (!err_msg) {err_msg = Py_None; Py_INCREF(Py_None);}
1228 if (!err_traceback) {err_traceback = Py_None; Py_INCREF(Py_None);}
1229 PyObject* old_err_type = PyList_GET_ITEM(__ERROR, 0);
1230 PyObject* old_err_msg = PyList_GET_ITEM(__ERROR, 1);
1231 PyObject* old_err_traceback = PyList_GET_ITEM(__ERROR, 2);
1232 PyList_SET_ITEM(__ERROR, 0, err_type);
1233 PyList_SET_ITEM(__ERROR, 1, err_msg);
1234 PyList_SET_ITEM(__ERROR, 2, err_traceback);
1235 {Py_XDECREF(old_err_type);}
1236 {Py_XDECREF(old_err_msg);}
1237 {Py_XDECREF(old_err_traceback);}
1238 }
1239 // The failure code is returned to index what code block failed.
1240 return __failure;
1241
1242 }
1243 };
1244 }
1245
1246
1247 static int __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b_executor(__struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b *self) {
1248 return self->run();
1249 }
1250
1251 static void __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b_destructor(PyObject *capsule) {
1252    __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b *self = (__struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b *)PyCapsule_GetContext(capsule);
1253 delete self;
1254 }
1255
1256 //////////////////////
1257 //// Functions
1258 //////////////////////
1259 static PyObject * instantiate(PyObject * self, PyObject *argtuple) {
1260 assert(PyTuple_Check(argtuple));
1261 if (8 != PyTuple_Size(argtuple)){
1262 PyErr_Format(PyExc_TypeError, "Wrong number of arguments, expected 8, got %i", (int)PyTuple_Size(argtuple));
1263 return NULL;
1264 }
1265    __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b *struct_ptr = new __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b();
1266 if (struct_ptr->init( PyTuple_GET_ITEM(argtuple, 0),PyTuple_GET_ITEM(argtuple, 1),PyTuple_GET_ITEM(argtuple, 2),PyTuple_GET_ITEM(argtuple, 3),PyTuple_GET_ITEM(argtuple, 4),PyTuple_GET_ITEM(argtuple, 5),PyTuple_GET_ITEM(argtuple, 6),PyTuple_GET_ITEM(argtuple, 7) ) != 0) {
1267 delete struct_ptr;
1268 return NULL;
1269 }
1270    PyObject* thunk = PyCapsule_New((void*)(&__struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b_executor), NULL, __struct_compiled_op_md48cd7c806151b0105e1fa2b573cc03b_destructor);
1271 if (thunk != NULL && PyCapsule_SetContext(thunk, struct_ptr) != 0) {
1272 PyErr_Clear();
1273 Py_DECREF(thunk);
1274 thunk = NULL;
1275 }
1276
1277 return thunk; }
1278
1279 //////////////////////
1280 //// Module init
1281 //////////////////////
1282 static PyMethodDef MyMethods[] = {
1283 {"instantiate", instantiate, METH_VARARGS, "undocumented"} ,
1284 {NULL, NULL, 0, NULL}
1285 };
1286 static struct PyModuleDef moduledef = {
1287 PyModuleDef_HEAD_INIT,
1288 "md48cd7c806151b0105e1fa2b573cc03b",
1289 NULL,
1290 -1,
1291 MyMethods,
1292 };
1293
1294 PyMODINIT_FUNC PyInit_md48cd7c806151b0105e1fa2b573cc03b(void) {
1295 import_array();
1296
1297
1298 {
1299 cudnnStatus_t err;
1300 if ((err = cudnnCreate(&_handle)) != CUDNN_STATUS_SUCCESS) {
1301 PyErr_Format(PyExc_RuntimeError, "could not create cuDNN handle: %s",
1302 cudnnGetErrorString(err));
1303 #if PY_MAJOR_VERSION >= 3
1304 return NULL;
1305 #else
1306 return;
1307 #endif
1308 }
1309 }
1310
1311 PyObject *m = PyModule_Create(&moduledef);
1312 return m;
1313 }
1314
===============================
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\cuda_ndarray.cuh(17): warning C4005: 'PyString_Check': macro redefinition
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include\numpy/npy_3kcompat.h(71): note: see previous definition of 'PyString_Check'
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\cuda_ndarray.cuh(18): warning C4005: 'PyString_FromString': macro redefinition
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include\numpy/npy_3kcompat.h(73): note: see previous definition of 'PyString_FromString'
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\cuda_ndarray.cuh(19): warning C4005: 'PyString_AsString': macro redefinition
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include\numpy/npy_3kcompat.h(80): note: see previous definition of 'PyString_AsString'
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\cuda_ndarray.cuh(20): warning C4005: 'PyString_FromStringAndSize': macro redefinition
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include\numpy/npy_3kcompat.h(74): note: see previous definition of 'PyString_FromStringAndSize'
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\cuda_ndarray.cuh(21): warning C4005: 'PyString_Size': macro redefinition
C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include\numpy/npy_3kcompat.h(82): note: see previous definition of 'PyString_Size'
mod.cu(77): error: identifier "cudnnSetFilterNdDescriptor_v4" is undefined

1 error detected in the compilation of "C:/Users/Dell/AppData/Local/Temp/tmpxft_00001cec_00000000-13_mod.cpp1.ii".
mod.cu

['nvcc', '-shared', '-O3', '-arch=sm_61', '--compiler-bindir', 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin', '-Xlinker', '/DEBUG', '-D HAVE_ROUND', '-m64', '-Xcompiler', '-DCUDA_NDARRAY_CUH=mc72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD', '-I"C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\cuda_ndarray"', '-I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda"', '-I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include"', '-I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\include"', '-I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof"', '-L"C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\cuda_ndarray"', '-L"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\libs"', '-L"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu"', '-o', 'C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\tmpczqo7j1q\md48cd7c806151b0105e1fa2b573cc03b.pyd', 'mod.cu', '-lcudart', '-lcublas', '-lcuda_ndarray', '-lcudnn', '-lpython35']
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\generate.py", line 42, in story
feats = compute_features(z['net'], im).flatten()
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\generate.py", line 183, in compute_features
deterministic=True).eval())
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\graph.py", line 516, in eval
self._fn_cache[inputs] = theano.function(inputs, self)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\compile\function.py", line 326, in function
output_keys=output_keys)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\compile\pfunc.py", line 486, in pfunc
output_keys=output_keys)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\compile\function_module.py", line 1795, in orig_function
defaults)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\compile\function_module.py", line 1661, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\vm.py", line 1047, in make_all
impl=impl))
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\op.py", line 935, in make_thunk
no_recycling)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\op.py", line 839, in make_c_thunk
output_storage=node_output_storage)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\cc.py", line 1190, in make_thunk
keep_lock=keep_lock)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\cc.py", line 1131, in compile
keep_lock=keep_lock)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\cc.py", line 1586, in cthunk_factory
key=key, lnk=self, keep_lock=keep_lock)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\cmodule.py", line 1159, in module_from_key
module = lnk.compile_cmodule(location)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof\cc.py", line 1489, in compile_cmodule
preargs=preargs)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda\nvcc_compiler.py", line 405, in compile_str
'for cmd', ' '.join(cmd))
Exception: ('The following error happened while compiling the node', GpuDnnConv{algo='small', inplace=True}(CudaNdarrayConstant{[[[[ 0. 0. 0. ..., 0. 0.
0. ]
[ 0. 41.1096611 44.12927246 ..., -13.86440277
-12.77532673 0. ]
[ 0. 38.08794403 40.2077179 ..., -12.92387867
-12.93900013 0. ]
...,
[ 0. -78.63896179 -81.44239044 ..., -89.66077423
-89.75489044 0. ]
[ 0. -92.56134033 -94.34616852 ..., -86.61238861
-88.74545288 0. ]
[ 0. 0. 0. ..., 0. 0.
0. ]]

[[ 0. 0. 0. ..., 0. 0.
0. ]
[ 0. 46.28872299 50.27122116 ..., -38.85561371
-40.60744476 0. ]
[ 0. 34.54266357 38.50683212 ..., -40.45137787
-38.49893951 0. ]
...,
[ 0. -91.41646576 -94.57372284 ..., -102.50077057
-101.76644135 0. ]
[ 0. -102.4013443 -104.18616486 ..., -99.45238495
-101.58544922 0. ]
[ 0. 0. 0. ..., 0. 0.
0. ]]

[[ 0. 0. 0. ..., 0. 0.
0. ]
[ 0. 52.28178406 57.42016602 ..., -49.85338593
-53.91872787 0. ]
[ 0. 37.6416626 41.60583115 ..., -51.28987885
-53.84211349 0. ]
...,
[ 0. -100.00496674 -102.87272644 ..., -109.40177155
-110.32433319 0. ]
[ 0. -111.30234528 -113.08716583 ..., -106.35338593
-108.4864502 0. ]
[ 0. 0. 0. ..., 0. 0.
0. ]]]]}, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='valid', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), '\n', 'nvcc return status', 2, 'for cmd', 'nvcc -shared -O3 -arch=sm_61 --compiler-bindir C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin -Xlinker /DEBUG -D HAVE_ROUND -m64 -Xcompiler -DCUDA_NDARRAY_CUH=mc72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD -I"C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\cuda_ndarray" -I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\sandbox\cuda" -I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\numpy\core\include" -I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\include" -I"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\theano\gof" -L"C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\cuda_ndarray" -L"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\libs" -L"C:\Users\Dell\Anaconda3\envs\tensorflow-gpu" -o C:\Users\Dell\AppData\Local\Theano\compiledir_Windows-10-10.0.16299-SP0-Intel64_Family_6_Model_158_Stepping_9_GenuineIntel-3.5.4-64\tmpczqo7j1q\md48cd7c806151b0105e1fa2b573cc03b.pyd mod.cu -lcudart -lcublas -lcuda_ndarray -lcudnn -lpython35', "[GpuDnnConv{algo='small', inplace=True}(CudaNdarrayConstant{[[[[ 0. 0. 0. ..., 0. 0.\n 0. ]\n [ 0. 41.1096611 44.12927246 ..., -13.86440277\n -12.77532673 0. ]\n [ 0. 38.08794403 40.2077179 ..., -12.92387867\n -12.93900013 0. ]\n ..., \n [ 0. -78.63896179 -81.44239044 ..., -89.66077423\n -89.75489044 0. ]\n [ 0. -92.56134033 -94.34616852 ..., -86.61238861\n -88.74545288 0. ]\n [ 0. 0. 0. ..., 0. 0.\n 0. ]]\n\n [[ 0. 0. 0. ..., 0. 0.\n 0. ]\n [ 0. 46.28872299 50.27122116 ..., -38.85561371\n -40.60744476 0. ]\n [ 0. 34.54266357 38.50683212 ..., -40.45137787\n -38.49893951 0. ]\n ..., \n [ 0. -91.41646576 -94.57372284 ..., -102.50077057\n -101.76644135 0. ]\n [ 0. -102.4013443 -104.18616486 ..., -99.45238495\n -101.58544922 0. ]\n [ 0. 0. 0. ..., 0. 0.\n 0. ]]\n\n [[ 0. 0. 0. ..., 0. 0.\n 0. ]\n [ 0. 52.28178406 57.42016602 ..., -49.85338593\n -53.91872787 0. ]\n [ 0. 37.6416626 41.60583115 ..., -51.28987885\n -53.84211349 0. ]\n ..., \n [ 0. -100.00496674 -102.87272644 ..., -109.40177155\n -110.32433319 0. ]\n [ 0. -111.30234528 -113.08716583 ..., -106.35338593\n -108.4864502 0. ]\n [ 0. 0. 0. ..., 0. 0.\n 0. ]]]]}, <CudaNdarrayType(float32, 4D)>, <CudaNdarrayType(float32, (True, False, False, False))>, <CDataType{cudnnConvolutionDescriptor_t}>, Constant{1.0}, Constant{0.0})]")
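The compile error above, identifier "cudnnSetFilterNdDescriptor_v4" is undefined, usually means the installed cuDNN is newer than this Theano release supports: the _v4-suffixed descriptor calls were dropped from later cuDNN versions, so the generated mod.cu no longer compiles. A minimal check, assuming the legacy theano.sandbox.cuda backend still imports and detects cuDNN on your setup:

    # Hedged sketch: print the cuDNN version Theano sees.
    # Assumes the legacy theano.sandbox.cuda backend, as in the log above.
    import theano.sandbox.cuda.dnn as dnn
    print(dnn.version())

If this reports a cuDNN 7-or-newer build, downgrading cuDNN (or moving to a Theano release that targets the newer API) is the usual way out.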

Random Nearest captions followed by an error

On running generate.story(z, './images/ex2.jpg'), which is an image of a flower, I get random nearest-neighbor captions followed by the error below:

generate.story(z, './images/ex2.jpg')
NEAREST-CAPTIONS:
b'A black and white cityscape shows lots of people , mainly a tall , smiling man in suit and tie , who is paying attention to a woman standing beside a second smiling man in glasses and headset , who is also holding a microphone and notepad .'
b'A doll , wearing clothing and a knit hat with ears , and a teddy bear looking up and standing side by side on a wooden bridge with wooden fences on both sides and some dead leaves , with trees with green leaves and bushes in background .'
b'Visible through a windshield : in the distance , sidewalks , lined with snow , utility poles , retail outposts , and a few approaching vehicles , in the foreground , a crosswalk with a turning truck at one side and two large vehicles directly past it .'
b'A cloudy sky rests over gently rolling hills with melting snow and lots of green , while in the foreground rests a big mound of snow and a person , hurling through the air on a skateboard , hunched over , their arms out like a bird .'
b'A woman with elbows and arms folded on table , smiling for camera , with a cloth with a tray with plate on rice covered with sour cream , by an upside down coffee cup by spoon on a saucer , inside room with lattice over window .'

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\generate.py", line 59, in story
print('')
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\skipthoughts.py", line 84, in encode
X = preprocess(X)
File "C:\Users\Dell\Documents\Neural_storyteller\neural-storyteller-master\skipthoughts.py", line 149, in preprocess
sents = sent_detector.tokenize(t)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1237, in tokenize
return list(self.sentences_from_text(text, realign_boundaries))
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1285, in sentences_from_text
return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in span_tokenize
return [(sl.start, sl.stop) for sl in slices]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1276, in
return [(sl.start, sl.stop) for sl in slices]
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1316, in _realign_boundaries
for sl1, sl2 in _pair_iter(slices):
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 312, in _pair_iter
prev = next(it)
File "C:\Users\Dell\Anaconda3\envs\tensorflow-gpu\lib\site-packages\nltk\tokenize\punkt.py", line 1289, in _slices_from_text
for match in self._lang_vars.period_context_re().finditer(text):
TypeError: cannot use a string pattern on a bytes-like object
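The captions print with b'...' prefixes because they are loaded as bytes on Python 3, and NLTK's Punkt tokenizer only accepts str, hence the TypeError. A hedged fix inside preprocess() in skipthoughts.py, reusing the names visible in the traceback (the loop over the input list is an assumption about the surrounding code):

    # Hedged sketch: decode byte-string captions before sentence splitting.
    for t in text:
        if isinstance(t, bytes):           # captions arrive as bytes on Python 3
            t = t.decode('utf-8')
        sents = sent_detector.tokenize(t)  # Punkt requires str, not bytes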

ValueError: could not broadcast input array from shape (3600) into shape (2400)
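A broadcast failure between 3600- and 2400-dimensional arrays suggests that the decoder, the style bias vectors, and the skip-thought encoder were trained with different vector sizes, so the arrays combined during style shifting do not line up. A hedged sanity check, assuming z = generate.load_all() and hypothetical key names for the two biases:

    # Hedged sketch: every vector added or subtracted during style shifting
    # must share one size (e.g. 4800 for combine-skip, 2400 for uni-skip).
    # 'bneg' and 'bpos' are assumed key names; check generate.py for the
    # ones your copy actually uses.
    print(z['bneg'].shape)  # caption-style bias
    print(z['bpos'].shape)  # book-style bias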

Loading biases...
Traceback (most recent call last):
File "/home/mcis-lap-25/PycharmProjects/storyteller/neural-storyteller-master/demo.py", line 3, in
generate.story(z, '/home/mcis-lap-25/Desktop/images.jpeg')
File "/home/mcis-lap-25/PycharmProjects/storyteller/neural-storyteller-master/generate.py", line 39, in story
rawim, im = load_image(image_loc)
File "/home/mcis-lap-25/PycharmProjects/storyteller/neural-storyteller-master/generate.py", line 143, in load_image
MEAN_VALUE = numpy.array([103.939, 116.779, 123.68]).reshape((244,244,3))
ValueError: cannot reshape array of size 3 into shape (244,244,3)
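This last one is a plain shape bug: a 3-element mean cannot be reshaped into a full 244x244x3 image (note also that VGG expects 224x224 input, not 244x244). The per-channel BGR mean should be shaped so subtraction broadcasts over the image; a hedged sketch, assuming im has already been transposed to channel-first (3, H, W) float32:

    import numpy
    # Hedged sketch: three channel means, shaped (3, 1, 1) so that
    # im - MEAN_VALUE broadcasts across a (3, H, W) image.
    MEAN_VALUE = numpy.array([103.939, 116.779, 123.68]).reshape((3, 1, 1))
    im = im - MEAN_VALUE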
