carpedm20 / attentive-reader-tensorflow (in progress)
Looks like I keep running out of memory on my machine. Is there a way to access the prebuilt vocab file from somewhere?
Thanks!
Is DeepbiLSTM incomplete? I was trying to find the code that performs the final answer prediction, but I couldn't locate it. Please help.
I am getting the following error when running this on TensorFlow 0.9.
I have already commented out the `from tensorflow.models.rnn import rnn, rnn_cell` statements and replaced them with the `tf.nn.*` equivalents.
[*] Building Deep LSTM...
[*] Loading vocab from data/cnn/cnn.vocab100000 ...
[*] Loading vocab finished.
Traceback (most recent call last):
File "main.py", line 49, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 44, in main
FLAGS.data_dir, FLAGS.dataset)
File "/home/development/samarth/Workspace/Accenture_VA/attentive-reader-tensorflow-master/model/deep_lstm.py", line 91, in train
self.prepare_model(data_dir, dataset_name, vocab_size)
File "/home/development/samarth/Workspace/Accenture_VA/attentive-reader-tensorflow-master/model/deep_lstm.py", line 69, in prepare_model
for idx, nstarts in enumerate(tf.unpack(self.nstarts))])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 251, in slice
return gen_array_ops.slice(input, begin, size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1634, in _slice
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2262, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1702, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1054, in _SliceShape
input_shape.assert_has_rank(ndims)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 621, in assert_has_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (32, 768) must have rank 3
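For anyone hitting this: the error means the begin/size arguments passed to tf.slice have three entries each, while the tensor being sliced is rank 2 (here shape (32, 768), presumably (batch, hidden)). A minimal pure-Python sketch of the rank check TensorFlow's shape inference performs (the function name is illustrative, not from the repo):

```python
def check_slice_rank(input_shape, begin, size):
    """Re-creates the rank check behind tf.slice's shape inference:
    begin and size must each have one entry per dimension of the input."""
    if len(begin) != len(input_shape) or len(size) != len(input_shape):
        raise ValueError("Shape %s must have rank %d" % (input_shape, len(begin)))
    # Each (start, stop) pair describes the slice taken along one dimension.
    return [(b, b + s) for b, s in zip(begin, size)]
```

So the fix is likely to make the sliced outputs tensor rank 3 (batch, time, hidden), e.g. by packing the per-timestep outputs before slicing, rather than changing the slice arguments.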
The README is out of sync with main.py; there is no `-is-train` option.
Hello. Thanks so much for working on this project.
I am running into an issue when loading the checkpointed model. I get the following error after running python main.py --dataset cnn --forward_only True
File "main.py", line 49, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 46, in main
model.load(sess, FLAGS.checkpoint_dir, FLAGS.dataset)
File "/Users/raphaelabberbock/Desktop/junk/attentive2/model/base_model.py", line 29, in load
self.saver = tf.train.Saver()
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 705, in __init__
raise ValueError("No variables to save")
ValueError: No variables to save
An issue has already been opened regarding this matter, but I did not see a resolution to the problem.
Any help would be much appreciated.
Thanks,
Raffi
envy@ub1404:/os_pri/github/attentive-reader-tensorflow$ python main.py --dataset cnn
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
Using gpu device 0: GeForce GTX 950M (CNMeM is disabled, CuDNN 4007)
{'batch_size': 32,
'checkpoint_dir': 'checkpoint',
'data_dir': 'data',
'dataset': 'cnn',
'decay': 0.95,
'epoch': 25,
'forward_only': False,
'learning_rate': 5e-05,
'model': 'LSTM',
'momentum': 0.9,
'vocab_size': 10000}
[*] Creating checkpoint directory...
F tensorflow/stream_executor/cuda/cuda_driver.cc:302] current context was not created by the StreamExecutor cuda_driver API: 0x3668d40; a CUDA runtime call was likely performed without using a StreamExecutor context
[ub1404:05800] ** Process received signal ***
[ub1404:05800] Signal: Aborted (6)
[ub1404:05800] Signal code: (-6)
[ub1404:05800] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7f6148b10340]
[ub1404:05800] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7f6148771cc9]
[ub1404:05800] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7f61487750d8]
[ub1404:05800] [ 3] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(+0x224d924) [0x7f6115157924]
[ub1404:05800] [ 4] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(+0x21a9c83) [0x7f61150b3c83]
[ub1404:05800] [ 5] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools4cuda10CUDADriver13CreateContextEiNS0_13DeviceOptionsEPP8CUctx_st+0x37) [0x7f61150c1e67]
[ub1404:05800] [ 6] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools4cuda12CUDAExecutor4InitEiNS0_13DeviceOptionsE+0x143) [0x7f61150c71b3]
[ub1404:05800] [ 7] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools14StreamExecutor4InitEiNS0_13DeviceOptionsE+0x1c) [0x7f611506ac4c]
[ub1404:05800] [ 8] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools4cuda12CudaPlatform19GetUncachedExecutorERKNS0_20StreamExecutorConfigE+0x1e2) [0x7f61150cc092]
[ub1404:05800] [ 9] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools4cuda12CudaPlatform11GetExecutorERKNS0_20StreamExecutorConfigE+0x85) [0x7f61150cc5e5]
[ub1404:05800] [10] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN9perftools8gputools4cuda12CudaPlatform17ExecutorForDeviceEi+0x68) [0x7f61150ccd48]
[ub1404:05800] [11] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow17GPUMachineManagerEv+0x282) [0x7f6114e06212]
[ub1404:05800] [12] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow20BaseGPUDeviceFactory17GetValidDeviceIdsEPSt6vectorIiSaIiEE+0x20) [0x7f6114e044d0]
[ub1404:05800] [13] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow20BaseGPUDeviceFactory13CreateDevicesERKNS_14SessionOptionsERKSsPSt6vectorIPNS_6DeviceESaIS8_EE+0xf0) [0x7f6114e04d50]
[ub1404:05800] [14] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow13DeviceFactory10AddDevicesERKNS_14SessionOptionsERKSsPSt6vectorIPNS_6DeviceESaIS8_EE+0x106) [0x7f6114fdd0d6]
[ub1404:05800] [15] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow20DirectSessionFactory10NewSessionERKNS_14SessionOptionsE+0x51) [0x7f6114dc8b71]
[ub1404:05800] [16] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(_ZN10tensorflow10NewSessionERKNS_14SessionOptionsEPPNS_7SessionE+0x127) [0x7f6114fff6d7]
[ub1404:05800] [17] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(TF_NewSession+0x21) [0x7f6114fce0e1]
[ub1404:05800] [18] /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so(+0x1327c3b) [0x7f6114231c3b]
[ub1404:05800] [19] python(PyEval_EvalFrameEx+0x40d) [0x49968d]
[ub1404:05800] [20] python(PyEval_EvalCodeEx+0x2ac) [0x4a090c]
[ub1404:05800] [21] python(PyEval_EvalFrameEx+0x7d2) [0x499a52]
[ub1404:05800] [22] python() [0x4a1c9a]
[ub1404:05800] [23] python() [0x4dfe94]
[ub1404:05800] [24] python(PyObject_Call+0x36) [0x505f96]
[ub1404:05800] [25] python() [0x4de41a]
[ub1404:05800] [26] python() [0x5039eb]
[ub1404:05800] [27] python(PyEval_EvalFrameEx+0x965) [0x499be5]
[ub1404:05800] [28] python(PyEval_EvalFrameEx+0xc72) [0x499ef2]
[ub1404:05800] [29] python(PyEval_EvalCodeEx+0x2ac) [0x4a090c]
[ub1404:05800] *** End of error message ***
Aborted (core dumped)
envy@ub1404:
Hi, I tried to train the model, but it doesn't work because too many methods are deprecated in TensorFlow; for example, the module tensorflow.models no longer exists, and so on. Can you fix it? Thank you so much. Antonio
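Until the repo is updated, a rough textual migration can get past the removed tensorflow.models.rnn module. This helper is only a sketch: the replacement table assumes the TF 0.9-era tf.nn names, it is purely string-based, and real migrations may need manual review.

```python
def migrate_rnn_imports(source):
    """Rough sketch: rewrite pre-0.9 TF RNN imports and usages to the
    tf.nn.* names. Purely textual; inspect the result before running it."""
    replacements = [
        # The removed module; its members moved under tf.nn.
        ("from tensorflow.models.rnn import rnn, rnn_cell\n", ""),
        ("rnn_cell.", "tf.nn.rnn_cell."),
        ("rnn.rnn(", "tf.nn.rnn("),
    ]
    for old, new in replacements:
        source = source.replace(old, new)
    return source
```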
rzai@rzai00:/prj/attentive-reader-tensorflow$ python data_utils.py data cnn
Using gpu device 0: GeForce GTX 1080 (CNMeM is disabled)
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
[*] Combining all contexts for cnn in data/cnn/questions/training ...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 380298/380298 [02:07<00:00, 2993.01it/s]
[*] Writing data/cnn/cnn.context ...
[*] Create vocab from data/cnn/cnn.context to data/cnn/cnn.vocab100000 ...
Creating vocabulary data/cnn/cnn.vocab100000
data_utils.py:82: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
texts = [word for word in context.lower().split() if word not in cachedStopWords]
Tokenize : -3708.7829s
[*] Convert data in data/cnn/questions/training into vocab indicies...
0%| | 0/380298 [00:00<?, ?it/s]
Traceback (most recent call last):
File "data_utils.py", line 258, in <module>
prepare_data(data_dir, dataset_name, int(vocab_size))
File "data_utils.py", line 229, in prepare_data
questions_to_token_ids(train_path, vocab_fname, vocab_size)
File "data_utils.py", line 206, in questions_to_token_ids
data_to_token_ids(fname, fname + ".ids%s" % vocab_size, vocab)
File "data_utils.py", line 185, in data_to_token_ids
tokens_file.writelines(results)
AttributeError: 'GFile' object has no attribute 'writelines'
rzai@rzai00:
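As a stopgap, the writelines call in data_to_token_ids can be replaced with a fallback that works for any file-like object; a sketch (safe_writelines is a hypothetical helper, not in the repo):

```python
import io

def safe_writelines(f, lines):
    """Workaround sketch for file-like objects (such as the old tf GFile)
    that lack .writelines(): fall back to repeated .write() calls."""
    if hasattr(f, "writelines"):
        f.writelines(lines)
    else:
        for line in lines:
            f.write(line)
```

In the traceback above, `tokens_file.writelines(results)` would become `safe_writelines(tokens_file, results)`.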
Is there any hint about annotating the answer in the story, i.e. the result?
Neither the official implementation nor this one demos generating the result picture.
I've trained a model and it has been saved in the correct folder.
When running python main.py --dataset cnn --forward_only True
I get the error:
Traceback (most recent call last):
File "main.py", line 63, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/default/_app.py", line 30, in run
sys.exit(main(sys.argv))
File "main.py", line 59, in main
model.load(sess, FLAGS.checkpoint_dir, FLAGS.dataset)
File "/home/dan/Desktop/attentive-reader-tensorflow/model/base_model.py", line 29, in load
self.saver = tf.train.Saver()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 705, in __init__
raise ValueError("No variables to save")
This came after resolving another issue in main.py:
model.load(FLAGS.checkpoint_dir)
-> model.load(sess, FLAGS.checkpoint_dir, FLAGS.dataset)
I believe the load function is not working as expected. The likely cause is that the session's graph contains no variables at the point where tf.train.Saver() is instantiated, so the Saver cannot be constructed, causing the error above.
I plan to investigate further but want to see if you had an insight into this. Have you managed to load a model using this command?
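The diagnosis above matches how tf.train.Saver behaves: it snapshots the graph's variables at construction time and raises if none exist, so base_model.load must run only after the model graph has been built. A minimal pure-Python stand-in for that check (FakeSaver and load_checkpoint are illustrative names, not from the repo):

```python
class FakeSaver:
    """Mimics tf.train.Saver's constructor check: it captures the
    variables that exist at construction time and refuses an empty graph."""
    def __init__(self, variables):
        if not variables:
            raise ValueError("No variables to save")
        self.variables = list(variables)

def load_checkpoint(graph_variables):
    """Sketch of the fix: only create the Saver once the model graph
    (and hence its variables) has been built."""
    return FakeSaver(graph_variables)
```

In the repo this suggests calling prepare_model (or otherwise constructing the graph) before model.load in the forward_only path.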