
tensorflow-summarization's Introduction

Tensorflow Seq2seq Text Summarization

This branch uses the new tf.contrib.seq2seq APIs in TensorFlow r1.1. For r1.0 users, please check the tf1.0 branch.


This is an implementation of a sequence-to-sequence model using a bidirectional GRU encoder and a GRU decoder. This project aims to help people start working on Abstractive Short Text Summarization immediately, and hopefully it may also work on machine translation tasks.

Dataset

Please check harvardnlp/sent-summary.

Pre-trained Models

Download

Usage

Setup Environment

With GPU

If you want to train the model and have Nvidia GPUs (such as a GTX 1080 or GTX Titan), please set up the CUDA environment and install tensorflow-gpu.

> pip3 install -U tensorflow-gpu==1.1

You can check whether the GPU works by

> python3
>>> import tensorflow
>>>

and make sure there are no error outputs.

Without GPU

If you don't have a GPU, you can still use the pretrained models and generate summaries using your CPU.

> pip3 install -U tensorflow==1.1

Model and Data

Files should be organized as described below.

Please find these files in harvardnlp/sent-summary and rename them as follows:

duc2003/input.txt -> test.duc2003.txt
duc2004/input.txt -> test.duc2004.txt
Giga/input.txt -> test.giga.txt
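
For example, a plausible layout after downloading and renaming (inferred from the paths used elsewhere in this README and its issues; the exact tree may differ):

data/
    train.article.txt
    train.title.txt
    test.duc2003.txt
    test.duc2004.txt
    test.giga.txt
model/
    (pretrained checkpoint files)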

Train Model

> python3 script/train.py can reproduce the experiments shown below.

The script first trains for 200k batches. After that, every 20k batches it saves the model and runs generation on [giga, duc2003, duc2004] with beam_size 1 and 10 respectively, terminating at 300k batches.

Test Model

> python3 script/test.py will automatically use the most recently saved model for generation.

To run a customized test, put your input data at

data/test.your_test_name.txt

Change script/test.py lines 13-14 from

datasets = ["giga", "duc2003", "duc2004"]
geneos = [True, False, False]

to

datasets = ["your_test_name"]
geneos = [True]

For advanced users, python3 src/summarization.py -h prints a help message. Please check the code for details.

Implementation Details

Bucketing

In TensorFlow r0.11 and earlier, bucketing was the recommended approach. r1.0 provides a dynamic RNN seq2seq framework, which is much easier to understand than the tricky bucketing mechanism.

We use dynamic RNN to generate the compute graph, and there is only one computing graph in our implementation. However, we still split the dataset into several buckets and use data from the same bucket to create a batch. By doing so, we add less padding, leading to better efficiency.
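
As a rough illustration of this batching scheme, here is a minimal Python sketch (the bucket boundaries and example format are illustrative, not the repository's actual data pipeline):

import random

def bucketed_batches(examples, buckets, batch_size):
    # examples: list of (source_tokens, target_tokens) pairs
    # buckets:  ascending source-length limits, e.g. [10, 20, 40, 80]
    bucketed = [[] for _ in buckets]
    for src, tgt in examples:
        # examples longer than the largest bucket are skipped in this sketch
        for i, limit in enumerate(buckets):
            if len(src) <= limit:
                bucketed[i].append((src, tgt))
                break
    for bucket in bucketed:
        random.shuffle(bucket)
        for i in range(0, len(bucket), batch_size):
            # every example here has a similar length, so little padding is needed
            yield bucket[i:i + batch_size]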

Attention Mechanism

The attention mechanism follows Bahdanau et al.

We follow the implementation in tf.contrib.seq2seq. We refine the softmax function in the attention so that padded positions always receive zero weight.
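
The idea is sketched below in NumPy (a simplified stand-in for the TensorFlow implementation, not the repository's actual code):

import numpy as np

def masked_softmax(scores, lengths):
    # scores:  [batch_size, max_len] raw attention scores
    # lengths: [batch_size] true (unpadded) source lengths
    max_len = scores.shape[1]
    mask = np.arange(max_len)[None, :] < np.asarray(lengths)[:, None]
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(scores) * mask  # padded positions contribute exactly 0
    return exp / exp.sum(axis=1, keepdims=True)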

Beam Search

For simplicity and flexibility, we implement the beam search algorithm in Python while leaving the network part in TensorFlow. During testing, we treat batch_size as beam_size. The TensorFlow graph generates only one word per step; some Python code then creates a new batch according to the result. Iterating this process produces the beam search result.

Check step_beam(...) in bigru_model.py for details.
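
For intuition, here is a simplified sketch of the Python-side loop (the real step_beam runs all hypotheses through the graph in a single batch and handles EOS per dataset; step_fn and the other names below are hypothetical):

import numpy as np

def beam_search(step_fn, init_state, sos_id, eos_id, beam_size, max_len):
    # step_fn(prev_token, state) -> (log_probs over the vocabulary, new_state);
    # each call corresponds to one run of the decoder graph for one step.
    beams = [([sos_id], 0.0, init_state)]  # (tokens, log-prob score, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score, state in beams:
            log_probs, new_state = step_fn(tokens[-1], state)
            for tok in np.argsort(log_probs)[-beam_size:]:
                candidates.append((tokens + [int(tok)],
                                   score + float(log_probs[tok]), new_state))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for cand in candidates:
            # retire hypotheses that produced EOS; keep expanding the rest
            (finished if cand[0][-1] == eos_id else beams).append(cand)
            if len(beams) == beam_size:
                break
    return max(finished + beams, key=lambda c: c[1])[0]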

Results

We train the model for 300k batches with batch size 80. We clip all summaries to 75 bytes. For the DUC datasets, we eliminate EOS and generate 12 words. For the Gigaword dataset, we let the model generate EOS.
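
For reference, a minimal sketch of 75-byte clipping as used in DUC-style ROUGE evaluation (assuming UTF-8 text; the actual evaluation scripts are not part of this repository):

def clip_to_bytes(summary, limit=75):
    encoded = summary.encode("utf-8")[:limit]
    # drop any multi-byte character cut at the boundary
    return encoded.decode("utf-8", errors="ignore")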

Negative Log Likelihood of Sentence

ROUGE Evaluation

Dataset  Beam Size  R1-R     R1-P     R1-F     R2-R     R2-P     R2-F     RL-R     RL-P     RL-F
duc2003  1          0.25758  0.23003  0.24235  0.07511  0.06611  0.07009  0.22608  0.20174  0.21262
duc2003  10         0.27312  0.23864  0.25416  0.08977  0.07732  0.08286  0.24129  0.21074  0.22449
duc2004  1          0.27584  0.25971  0.26673  0.08328  0.07832  0.08046  0.24253  0.22853  0.23461
duc2004  10         0.28024  0.25987  0.26889  0.09377  0.08631  0.08959  0.24849  0.23048  0.23844
giga     1          0.31850  0.38779  0.33910  0.14542  0.17537  0.15393  0.29925  0.36300  0.31810
giga     10         0.30179  0.41224  0.33635  0.14378  0.19510  0.15936  0.28447  0.38733  0.31664

Requirement

  • Python3
  • TensorFlow r1.1

TODO

  • Improve automatic scripts by parameterizing magic numbers.
  • Handle some tricks required by the new TensorFlow seq2seq framework.

tensorflow-summarization's People

Contributors

leix28, leopard1


tensorflow-summarization's Issues

About ROUGE

May I ask if it is possible for us to access the code for the ROUGE evaluation? Thank you!

Placement of dataset

Hello,

I managed to run training with train.article.txt and train.title.txt, but now I'm confused about how to get the test data sets set up properly.

The paths are shown below, but how am I supposed to convert these into a data/test.duc2004.txt which is being loaded by the test script? What exactly is supposed to be in this single file?

data/DUC2004/task1_ref0.txt
data/DUC2004/task1_ref1.txt
data/DUC2004/task1_ref3.txt
data/DUC2004/task1_ref2.txt
data/DUC2004/input.txt

How to train the model with my own data set

Hello Sir,

I'm new to this, but I want to build the same model as this one. Can you please tell me how I can train this model with my own data set?
What changes are required in this code?

Thanks.

Benchmarks?

I know it's a little late to ask at this point, but can anyone post the time it took to run the batches of the dataset using tensorflow and tensorflow-gpu?

Thanks!

About data

Hi
I would like to ask about the data provided with the project, which we can download conveniently.
Is the data preprocessed with Facebook's NAMAS script, as done in many papers?
https://github.com/facebookarchive/NAMAS

Thank you very much and best regards!

beam search

The handling in beam search after a sequence reaches its end seems incorrect.

SGD size

Hi brother

I have a question about the code below, from training in BiGRUModel.py:

loss_t = tf.contrib.seq2seq.sequence_loss(
    outputs_logits, self.decoder_targets, weights,
    average_across_timesteps=False,
    average_across_batch=False)
self.loss = tf.reduce_sum(loss_t) / self.batch_size

Here loss_t is a matrix of shape [batch_size, sequence_len]. Why divide by self.batch_size rather than by self.batch_size * sequence_len?
What does one example represent in this model: a word or a summary?

thank you so much

How can i train further using the pretrained checkpoint files?

I tried to load the checkpoint files for further training:
In train.py >
try:
    global_step = tf.contrib.framework.load_variable("model", "model.ckpt-300000")
except Exception as err:
    global_step = 0

Gets called to checkpoint_utils.py >
def _get_checkpoint_filename(filepattern, name):
    """Returns checkpoint filename given directory or specific filepattern."""
    if gfile.IsDirectory(filepattern):
        return saver.latest_checkpoint(filepattern, latest_filename=name)
    return filepattern

Gets called to saver.py >
def get_checkpoint_state(.....):
    try:
        if file_io.file_exists(coord_checkpoint_filename):
            print('*****file_io.file_exists******')
            file_content = file_io.read_file_to_string(coord_checkpoint_filename)

The file is not being detected.
But on specifying the complete path in train.py (adding '.meta' or '.index' or '.data-00000-of-00001'):
global_step = tf.contrib.framework.load_variable("model", "model.ckpt-300000.meta")

the file is detected, but the next line, 'file_io.read_file_to_string(coord_checkpoint_filename)', throws: "'utf-8' codec can't decode byte 0xc3 in position 1: invalid continuation byte"

Can anyone help me out with this issue?
Thank you

Summary Length based on data size

I was looking into using your library for some article summarization but wanted to know if you've done any testing on datasets where the article length is more akin to a page than a couple of sentences?

How might your models react to this length of input?

And how are you determining/limiting the output summary length? I've looked through the code and can't find a clear explanation of where it is determined.

Thanks!

list index out of range

While testing on the Gigaword dataset, I faced this error:

Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "C:\Users\spars\Anaconda3\envs\tensor_model\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Users\spars\Anaconda3\envs\tensor_model\lib\site-packages\absl\app.py", line 300, in run
    _run_main(main, args)
  File "C:\Users\spars\Anaconda3\envs\tensor_model\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 186, in decode
    doc_dict = data_util.load_dict(FLAGS.data_dir + "/doc_dict.txt")
  File "D:\ML\Summary_maker\src\data_util.py", line 30, in load_dict
    tok2id = dict(map(lambda x: (x[1], int(x[0])), dict_data))
  File "D:\ML\Summary_maker\src\data_util.py", line 30, in <lambda>
    tok2id = dict(map(lambda x: (x[1], int(x[0])), dict_data))
IndexError: list index out of range

Please Help

InvalidArgumentError

I don't understand how to debug this error.

2018-06-10 23:25:50.340196: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
Traceback (most recent call last):
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1350, in _do_call
    return fn(*args)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1329, in _run_fn
    status, run_metadata)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[10,55] = -1 is not in [0, 2168)
  [[Node: embedding_1/Gather = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/durga/PycharmProjects/7.Recent/start_to_end.py", line 159, in <module>
    validation_data=(x_test, y_test))
  File "/home/durga/.local/lib/python3.5/site-packages/keras/models.py", line 963, in fit
    validation_steps=validation_steps)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/engine/training.py", line 1705, in fit
    validation_steps=validation_steps)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/engine/training.py", line 1235, in _fit_loop
    outs = f(ins_batch)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2478, in __call__
    **self.session_kwargs)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
    options, run_metadata)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[10,55] = -1 is not in [0, 2168)
  [[Node: embedding_1/Gather = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast)]]

Caused by op 'embedding_1/Gather', defined at:
  File "/home/durga/PycharmProjects/7.Recent/start_to_end.py", line 146, in <module>
    model.add(Embedding(vocab_size, 32))
  File "/home/durga/.local/lib/python3.5/site-packages/keras/models.py", line 467, in add
    layer(x)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/layers/embeddings.py", line 138, in call
    out = K.gather(self.embeddings, inputs)
  File "/home/durga/.local/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 1211, in gather
    return tf.gather(reference, indices)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 2585, in gather
    params, indices, validate_indices=validate_indices, name=name)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1864, in gather
    validate_indices=validate_indices, name=name)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
    op_def=op_def)
  File "/home/durga/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): indices[10,55] = -1 is not in [0, 2168)
  [[Node: embedding_1/Gather = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast)]]

Data Processing

I have the DUC data but it is in a different (binary) format. Can you please tell me how to convert it to your required format?
What should the contents of the doc_dict and sum_dict files be?

Can it work on CUDA 9.1?

Dear all,

I used CUDA 9.1 and could not run train.py. I wonder whether this code can run on CUDA 9.1 or not.

Thank you

TensorBoard Visualisation

I was working on Chinese article summarization. However, when I tried to generate a graph using the event files, it showed no graph. If you have a .png file of the graph, can you share it?
Also, I wanted to visualize the whole loss graph instead of only certain iterations in an event file.
Can you help me with it?

Cannot work with higher TF version

I tried to run your code with a higher TF version (1.5), but something goes wrong with this code:

decoder_cell = tf.contrib.seq2seq.DynamicAttentionWrapper(
    decoder_cell, attention, state_size * 2)
wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
    self.init_state, self.prev_att)

DynamicAttentionWrapper and DynamicAttentionWrapperState are removed in TF 1.5.
They were replaced by AttentionWrapper and AttentionWrapperState, but the input parameters are quite complex.
Could you please update your code to be compatible with the new TF version?
Thank you so much.

Dataset

I haven't found any files in harvardnlp/sent-summary. Please provide these files so that I can train and contribute.

How to do Rouge Evaluation?

I have used the pretrained model and the summary texts are generated. How do I evaluate them using ROUGE? Where can I find the reference summaries for the test sets?

Lots of <UNK> in real application

I found that the algorithm performs very well on the test sets, but very poorly (many <UNK>s) if I just pick some random articles online.

I believe the root problem is that the coverage of the dictionary is not good enough.
Maybe you could consider building the dictionary from a larger dataset ^^

Evaluation metric

Can anyone explain to me what the ppl value implies while running on the training dataset? What is the formula for it? How do they compute ROUGE values for this code?

Pretrained model is trained on?

I have used your pretrained model to generate summaries for Gigaword, duc2003 and duc2004. I would like to know what the pretrained model was trained on, and how it generates only titles for the Gigaword corpus but summaries for the other two datasets.

Purpose of doc_dict.txt and sum_dict.txt

Hey,

I am quite new to machine learning and I would like to use your code to train my own text summarization model. What I am currently wondering is: what exactly do you need the doc_dict.txt and sum_dict.txt files for?

How would I have to adapt them if I wanted to generate my own training data?

Thanks for any help!

TypeError: 'NoneType' object is not subscriptable

Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 190, in decode
    data = data_util.load_test_data(FLAGS.test_file, doc_dict)
  File "/Users/ifadardin/Documents/Python/TensorFlow-Summarization-master/script/src/data_util.py", line 181, in load_test_data
    docid, cover = corpus_map2id(docs, doc_dict[0])
TypeError: 'NoneType' object is not subscriptable

I keep getting this error message when I run test.py. I wonder what went wrong?
I use TensorFlow r1.1.

AttributeError

AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapper'
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'

I was getting these errors, which I removed by dropping 'Dynamic' from both DynamicAttentionWrapper and DynamicAttentionWrapperState.

After that I am getting this error:

  File "C:\Users\yaser.sakkaf\Downloads\TensorFlow-Summarization-master\src\bigru_model.py", line 90, in __init__
    self.init_state, self.prev_att)
TypeError: __new__() missing 3 required positional arguments: 'time', 'alignments', and 'alignment_history'

Please help.

AttributeError. I am using a Mac with Python 3.6 and TensorFlow installed

Feb 13 23:49 summarization.py[line:195] INFO Creating 1 layers of 400 units.
Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
    _sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "/Users/twister_mn/headlinesg/tfsum/src/bigru_model.py", line 87, in __init__
    decoder_cell = tf.contrib.seq2seq.DynamicAttentionWrapper(
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapper'

Is there a demo to check the efficiency?

I have been working on the TensorFlow summarization and tried it using 8 GPUs. Even after training for 1 month I was getting <UNK>'s in the result. I would like to know whether your repository gives proper results or not, and what the efficiency of the repository is for summarization.
Kindly let me know where I can see the demo.

Demo to test the model

Hello,
Please can you put up a demo that shows how to use the trained model and outputs the summary of a given text?
Best
Lafi

File not found error

Following those instructions, I got a file-not-found error. Attached is a screenshot of the error.
Please help.


Training on a smaller dataset doubt.

I don't have a Titan X, and before training on the whole dataset on AWS I would like to train locally and see how the model converges. Apart from reducing the dataset size to 500, what parameter changes are needed so that I can train the model in a couple of days?
Also, could you give a link to the sum_dict.txt and doc_dict.txt files, so that I can test the model with the trained weights you have already provided?
Thank You.

Output is all _UNK

The output is all _UNK. The vocabulary I used was also generated by myself. When generating a summary, 90% of it is _UNK; what could be the reason? I trained on 50k data pairs, and the perplexity reached 70+ after 10k steps.

tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,3] = 37849 is not in [0, 30000)

I don't understand why I am getting this error.

Here is the complete summary of the errors:

Oct 04 14:15 saver.py[line:1455] INFO Restoring parameters from model/model.ckpt-300000
Traceback (most recent call last):
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
    return fn(*args)
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
    status, run_metadata)
  File "C:\Python3\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Python3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,23] = 56047 is not in [0, 30000)
  [[Node: seq2seq/encoder/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@seq2seq/encoder/embedding"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](seq2seq/encoder/embedding/read, _recv_Placeholder_0)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "C:\Python3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 214, in decode
    sess, encoder_inputs, encoder_len, geneos=FLAGS.geneos)
  File "C:\Users\vinayp\Desktop\BEP\textsum\src\bigru_model.py", line 227, in step_beam
    outputs = session.run(output_feed, input_feed)
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 778, in run
    run_metadata_ptr)
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run
    target_list, options, run_metadata)
  File "C:\Python3\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,23] = 56047 is not in [0, 30000)
  [[Node: seq2seq/encoder/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@seq2seq/encoder/embedding"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](seq2seq/encoder/embedding/read, _recv_Placeholder_0)]]

Caused by op 'seq2seq/encoder/embedding_lookup', defined at:
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "C:\Python3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "C:\Users\vinayp\Desktop\BEP\textsum\src\bigru_model.py", line 67, in __init__
    encoder_emb, self.encoder_inputs)
  File "C:\Python3\lib\site-packages\tensorflow\python\ops\embedding_ops.py", line 119, in embedding_lookup
    params[0], ids, validate_indices=validate_indices, name=name))
  File "C:\Python3\lib\site-packages\tensorflow\python\ops\embedding_ops.py", line 41, in _do_gather
    params, ids, name=name, validate_indices=validate_indices)
  File "C:\Python3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1207, in gather
    validate_indices=validate_indices, name=name)
  File "C:\Python3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "C:\Python3\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Python3\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

Also, I wanted to ask how much time it took to train on a Titan X GPU.
