
convnetquake's Issues

./runtests.sh

I get the following error when I run runtests.sh:
=============================================== ERRORS ================================================
___________________________ ERROR collecting quakenet/test/data_io_test.py ____________________________
../MY_VIRTUALENVS/ConvNetQuake/local/lib/python2.7/site-packages/_pytest/python.py:507: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
../MY_VIRTUALENVS/ConvNetQuake/local/lib/python2.7/site-packages/py/_path/local.py:701: in pyimport
__import__(modname)
E File "/home/jorge/ConvNetQuake-master/quakenet/test/data_io_test.py", line 14
E def test_write_stream()
E ^
E SyntaxError: invalid syntax

---------- coverage: platform linux2, python 2.7.17-final-0 ----------
Name Stmts Miss Cover

quakenet/__init__.py 5 0 100%
quakenet/config.py 12 12 0%
quakenet/data_conversion.py 39 39 0%
quakenet/data_io.py 24 24 0%
quakenet/data_pipeline.py 85 61 28%
quakenet/models.py 85 85 0%
quakenet/synth_data.py 88 88 0%
quakenet/test/__init__.py 0 0 100%
quakenet/test/data_conversion_test.py 0 0 100%
quakenet/test/data_pipeline_test.py 19 11 42%
quakenet/test/model_test.py 0 0 100%
quakenet/test/settings.py 7 0 100%

TOTAL 364 320 12%

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================= 1 error in 1.45 seconds =======================================
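
For what it's worth, the caret sits right after the def header, which is the classic symptom of a missing colon; below is a hedged guess at what the corrected line 14 of quakenet/test/data_io_test.py should look like.

    # hedged guess: the reported line reads "def test_write_stream()" with no
    # trailing colon, which raises exactly this SyntaxError during collection
    def test_write_stream():
        pass  # body elided; only the missing colon matters for the error above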

Error when training the model

Hello,
when I run "./bin/train --dataset data/6_clusters/train --checkpoint_dir output/convnetquake --n_clusters 6", I get this error:
Traceback (most recent call last):
File "./bin/train", line 88, in
main(args)
File "./bin/train", line 70, in main
summary_step=10)
File "/playground/xu_zhen/fst/ConvNetQuake-master/bin/tflib/model.py", line 243, in train
coord.join(threads)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/root/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
enqueue_callable()
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1231, in _single_operation_run
target_list_as_strings, status, None)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Name: , Feature: end_time (data type: int64) is required but could not be found.
[[Node: inputs/ParseSingleExample/ParseExample/ParseExample = ParseExample[Ndense=6, Nsparse=0, Tdense=[DT_INT64, DT_STRING, DT_INT64, DT_INT64, DT_INT64, DT_INT64], dense_shapes=[[], [], [], [], [], []], sparse_types=[], _device="/job:localhost/replica:0/task:0/device:CPU:0"](inputs/ParseSingleExample/ExpandDims, inputs/ParseSingleExample/ParseExample/ParseExample/names, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_0, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_1, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_2, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_3, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_4, inputs/ParseSingleExample/ParseExample/ParseExample/dense_keys_5, inputs/ParseSingleExample/ParseExample/Const, inputs/ParseSingleExample/ParseExample/ParseExample/names, inputs/ParseSingleExample/ParseExample/Const, inputs/ParseSingleExample/ParseExample/Const, inputs/ParseSingleExample/ParseExample/Const, inputs/ParseSingleExample/ParseExample/Const)]]

How can I solve this?
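
One way to check whether the records under data/6_clusters/train actually contain an end_time feature is to decode a single record directly; here is a minimal diagnostic sketch, assuming TensorFlow 1.x and that the dataset files end in .tfrecords.

    # minimal diagnostic sketch (TensorFlow 1.x API); the glob pattern is an
    # assumption about where the training tfrecords live and how they are named
    import glob
    import tensorflow as tf

    record_file = glob.glob("data/6_clusters/train/*.tfrecords")[0]
    for serialized in tf.python_io.tf_record_iterator(record_file):
        example = tf.train.Example.FromString(serialized)
        print(sorted(example.features.feature.keys()))   # is 'end_time' listed?
        break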

can't run the train step in model.py

First, thank you for sharing this great code!
I tried to use the pretrained 6-cluster model to classify my own data; however, the results are not ideal: out of 10000 events it identifies only about 2000.
So I tried to train the model myself by running
./bin/train --dataset data/6_clusters/detection/train/ --checkpoint_dir output/convnetquake_6/ --n_clusters 6. At line 44 of tflib/model.py:
data = sess.run(tofetch, options=run_options, run_metadata=run_metadata)
the run stops with this message:
2017-11-29 23:07:44.380113: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: Name: , Feature: end_time (data type: int64) is required but could not be found.

It seems to be a version issue; any TensorFlow version higher than 0.11 won't work.
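
If the records really were written without end_time, one possible workaround (instead of downgrading TensorFlow) is to make that feature optional when parsing. The sketch below illustrates the idea only; the feature names are assumptions and are not copied from the repo's data_pipeline.py.

    import tensorflow as tf

    def parse_example_tolerant(serialized_example):
        """Sketch only: parse a record while tolerating a missing end_time by
        giving it a default value instead of marking it as required. The
        feature names here are assumptions, not the repo's exact keys."""
        return tf.parse_single_example(
            serialized_example,
            features={
                "window_size": tf.FixedLenFeature([], tf.int64),
                "n_traces": tf.FixedLenFeature([], tf.int64),
                "data": tf.FixedLenFeature([], tf.string),
                "cluster_id": tf.FixedLenFeature([], tf.int64),
                "start_time": tf.FixedLenFeature([], tf.int64),
                "end_time": tf.FixedLenFeature([], tf.int64, default_value=0),
            })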

Model Question: Prediction Time

If we were to run the model on real-time data and it forecasts an earthquake, what lead time should we expect after the prediction, and what about the magnitude of the earthquake?

Tensorflow version error

When installing the dependencies, I get the following error:

Collecting tensorflow==0.11.0rc1 (from -r requirements.txt (line 15))
  Could not find a version that satisfies the requirement tensorflow==0.11.0rc1 (from -r requirements.txt (line 15)) (from versions: 0.12.0rc0, 0.12.0rc1, 0.12.0, 0.12.1, 1.0.0, 1.0.1, 1.1.0rc0, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0)
No matching distribution found for tensorflow==0.11.0rc1 (from -r requirements.txt (line 15))

Can I use a newer version of TensorFlow to run this repo, such as TensorFlow 1.2.0rc0?
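
The 0.12+ releases listed above changed several APIs that 0.11-era code relies on, so a newer TensorFlow generally needs source changes. Below is a quick runtime check; the 0.12 threshold is an assumption based on the 0.11.0rc1 pin in requirements.txt.

    # quick check of the installed TensorFlow version; the 0.12 threshold is an
    # assumption based on the 0.11.0rc1 pin in requirements.txt
    from distutils.version import LooseVersion
    import tensorflow as tf

    if LooseVersion(tf.__version__) >= LooseVersion("0.12.0"):
        print("TensorFlow %s installed; expect API differences relative to "
              "the 0.11.0rc1 this repo pins" % tf.__version__)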

Data can't be downloaded

Hello,

Unfortunately, Dropbox won't let users download folders over a certain size; I get the error 'The zip file is too large'.

It would be much appreciated if you could zip everything into a single zip file and upload that to Dropbox! I don't think there is a file-size restriction for a single file, only for folders.

Thanks!

Error when inputting custom data

Hello,

I've successfully run ConvNetQuake on the example .mseed data. When I feed in my own data, though, converting the .mseed files to tfrecords first, I get this error:

./bin/predict_from_tfrecords.py \
> --dataset data/tfrecordtz \
> --checkpoint_dir models/convnetquake \
> --n_clusters 6 \
> --max_windows 2678400 \
> --output data/output/tznew
/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
WARNING:tensorflow:VARIABLES collection name is deprecated, please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02.
[the same warning is printed 16 times]
WARNING:tensorflow:From /data/tflib/model.py:35 in __init__.: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
Catalog created to store events data/output/tznew/catalog_detection.csv
WARNING:tensorflow:From ./bin/predict_from_tfrecords.py:89 in main.: initialize_local_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.local_variables_initializer` instead.
Loaded model at step 32000 from snapshot models/convnetquake/model-32000.
Predicting using model at step 32000
Evaluation completed (1 epochs).
joining data threads
Prediction took 0.0 min 0.0909900665283 seconds
Traceback (most recent call last):
  File "./bin/predict_from_tfrecords.py", line 178, in <module>
    main(args)
  File "./bin/predict_from_tfrecords.py", line 150, in main
    coord.join(threads)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/training/coordinator.py", line 386, in join
    six.reraise(*self._exc_info_to_raise)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/training/queue_runner_impl.py", line 234, in _run
    sess.run(enqueue_op)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 6003 values, but the requested shape has 3003
	 [[Node: validation_inputs/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](validation_inputs/DecodeRaw, validation_inputs/Reshape/shape)]]

Caused by op u'validation_inputs/Reshape', defined at:
  File "./bin/predict_from_tfrecords.py", line 178, in <module>
    main(args)
  File "./bin/predict_from_tfrecords.py", line 57, in main
    is_training=False)
  File "/data/quakenet/data_pipeline.py", line 152, in __init__
    samples = self._reader.read()
  File "/data/quakenet/data_pipeline.py", line 81, in read
    example = self._parse_example(serialized_example)
  File "/data/quakenet/data_pipeline.py", line 109, in _parse_example
    data = tf.reshape(data, [self.n_traces, self.win_size])
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2448, in reshape
    name=name)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/opt/conda/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 6003 values, but the requested shape has 3003
	 [[Node: validation_inputs/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](validation_inputs/DecodeRaw, validation_inputs/Reshape/shape)]]

Any ideas about how to solve this? I think it is a problem with how I have formatted the .mseed file, but I'm not sure what's wrong. Thanks!!
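
The requested shape of 3003 factors as 3 traces × 1001 samples (presumably the 10 s windows at 100 Hz the pretrained model was built for), while the record holds 6003 values, i.e. about twice as many samples per trace, which points to a sampling-rate or window-length mismatch in the custom .mseed data. Here is a hedged ObsPy sketch for checking and resampling to 100 Hz before regenerating the tfrecords; the file path and the 100 Hz target are assumptions to verify against your data and the repo's config.

    # hedged sketch: check the sampling rate of the custom miniSEED and resample
    # to 100 Hz before converting to tfrecords; the path and the 100 Hz target
    # are assumptions
    import obspy

    st = obspy.read("data/mseed/my_station.mseed")   # hypothetical path
    for tr in st:
        print(tr.id, tr.stats.sampling_rate, "Hz,", tr.stats.npts, "samples")
    if any(tr.stats.sampling_rate != 100.0 for tr in st):
        st.resample(100.0)   # low-pass filters and resamples every trace in place
        st.write("data/mseed/my_station_100hz.mseed", format="MSEED")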

runtests.sh

When I run the tests (runtests.sh), I get an error from py.test.
Is this due to the TensorFlow version I'm using (1.2)?

"
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: --cov=quakenet
inifile: None
rootdir: ~/ConvNetQuake
"

can't read issue

Hi
I managed to install this thing into a conda env.
If I try to run the following command:
./bin/preprocess/cluster_events --src data/catalogs/OK_2014-2015-2016.csv --dst data/6_clusters/ --n_components 6 --model KMeans

I get this output...

./bin/preprocess/cluster_events: line 18: Clustering of events from a catalog.

We want to be able to predict the origin of an earthquake from its trace only.
This script labels events using an off-the-shelf clustering algorithm on their
geographic coordinates.

The classification we will solve seek to retrieve this label from the traces.

e.g.,
./bin/preprocess/cluster_events --src data/catalogs/OK_2014-2015-2016.csv--dst data/6_clusters --n_components 6 --model KMeans
: File name too long
from: can't read /var/mail/os
from: can't read /var/mail/quakenet
from: can't read /var/mail/quakenet
from: can't read /var/mail/mpl_toolkits.mplot3d
from: can't read /var/mail/quakenet.data_io
from: can't read /var/mail/obspy.core.utcdatetime
from: can't read /var/mail/sklearn.cluster
from: can't read /var/mail/sklearn.mixture

from: can't read /var/mail/openquake.hazardlib.geo.geodetic
./bin/preprocess/cluster_events: line 40: syntax error near unexpected token `newline'
./bin/preprocess/cluster_events: line 40: `gflags.DEFINE_string('

Any ideas what's happening?
Thanks
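
The "from: can't read /var/mail/..." lines appear when the shell, rather than Python, ends up interpreting the script's "from X import Y" statements, so it is worth checking which interpreter the script is handed to. A small diagnostic sketch (the path mirrors the command above):

    # check which interpreter the script declares; if the first line is not a
    # Python shebang, or the file is invoked via sh, the shell tries to execute
    # the Python source itself and prints the /var/mail errors shown above
    with open("bin/preprocess/cluster_events") as script:
        print(script.readline().rstrip())  # expect something like "#!/usr/bin/env python"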

Error in runtests.sh

There seems to be a missing class or function in your data_pipeline.py file.
How do I solve this error? Any suggestions would be appreciated.
FAILED quakenet/test/data_pipeline_test.py::test_data_generator_is_instantiated - AttributeError: module 'quakenet.data_pipeline' has no attribute 'DatasetGenerator'
FAILED quakenet/test/data_pipeline_test.py::test_data_generator_generates_samples - AttributeError: module 'quakenet.data_pipeline' has no attribute 'DatasetGenerator'

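One way to see what the installed quakenet.data_pipeline module actually exposes (and whether DatasetGenerator was renamed, or failed to import during collection) is to list its public attributes; a minimal sketch:

    # list the public names exposed by quakenet.data_pipeline; if DatasetGenerator
    # is absent, the tests and the module are out of sync, or an import inside
    # the module failed before the class definition was reached
    import quakenet.data_pipeline as data_pipeline

    print(sorted(name for name in dir(data_pipeline) if not name.startswith("_")))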

different window_size can't be used

Hi! I find that window_size can't be changed from the default 10 s to other values; if I do, training fails as soon as the train-step loop starts.

To be more specific, it fails in the _train_step subroutine in tflib/model.py at:
data = sess.run(tofetch, options=run_options, run_metadata=run_metadata)

I guess it is related to a tensor shape somewhere, but I don't know how to fix it.
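
For reference, the reshape errors reported elsewhere in these issues imply each window stores n_traces × win_size values (3 × 1001 for the default 10 s window), so changing the window length only works if the tfrecords and the model configuration are regenerated together. A rough consistency check, where the variable names and the 100 Hz sampling rate are assumptions:

    # rough consistency check (names and sampling rate are assumptions): the
    # number of values stored per window must equal what the input pipeline
    # reshapes to, i.e. n_traces * win_size
    sampling_rate = 100.0      # Hz, assumed
    window_seconds = 10.0      # default window; change the data generation and
                               # the model config together
    win_size = int(window_seconds * sampling_rate) + 1   # 1001 samples per trace
    n_traces = 3
    print("expected values per window:", n_traces * win_size)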
