learn_prox_ops's Issues

DataLossError

Hi,

I have managed to successfully set up the code and its dependencies on my machine, and was able to run the command

 python src/experiment_deblurring.py print_config

But when I move on to the next step and try to run this command

 python src/experiment_deblurring.py with experiment_name=experiment_a image_name=barbara elemental.optimal_DNCNN_experiment_a

I get the following error:

DataLossError (see above for traceback): not an sstable (bad magic number)
 [[Node: save/RestoreV2_4 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_4/tensor_names, save/RestoreV2_4/shape_and_slices)]]

 [[Node: save/RestoreV2_19/_63 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_134_save/RestoreV2_19", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

After some googling I found that this error can occur when the model file is not compatible with the system reading it. I am currently using TensorFlow 1.3.0 for the GPU, as suggested in the README. What should I do?

The full stack trace of the error message is posted below:

2018-04-09 17:11:20.068781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0)
INFO:tensorflow:Restoring parameters from models/DNCNN__gaussian_0.02__40-40-1__128/model.ckpt
INFO - tensorflow - Restoring parameters from models/DNCNN__gaussian_0.02__40-40-1__128/model.ckpt
2018-04-09 17:11:20.266064: W tensorflow/core/framework/op_kernel.cc:1192] Data loss: not an sstable (bad magic number)
[the warning above is repeated a further 33 times]
ERROR - deblurring - Failed after 0:00:04!
Traceback (most recent calls WITHOUT Sacred internals):
  File "src/experiment_deblurring.py", line 365, in main
    cnn_func = init_cnn_func() if elemental['denoising_prior'] == 'CNN' else None
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/experiment_ingredients.py", line 185, in init_cnn_func
    nn_deployer = Deployer(FLAGS)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/tf_solver.py", line 77, in __init__
    saver.restore(self.sess, opt.model_path)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 1560, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.DataLossError: not an sstable (bad magic number)
  [[Node: save/RestoreV2_4 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_4/tensor_names, save/RestoreV2_4/shape_and_slices)]]
  [[Node: save/RestoreV2_19/_63 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_134_save/RestoreV2_19", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op 'save/RestoreV2_4', defined at:
  File "src/experiment_deblurring.py", line 344, in <module>
    @ex.automain
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/experiment.py", line 130, in automain
    self.run_commandline()
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/experiment.py", line 242, in run_commandline
    return self.run(cmd_name, config_updates, named_configs, {}, args)
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/experiment.py", line 187, in run
    run()
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/run.py", line 223, in __call__
    self.result = self.main_function(*args)
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/config/captured_function.py", line 47, in captured_function
    result = wrapped(*args, **kwargs)
  File "src/experiment_deblurring.py", line 365, in main
    cnn_func = init_cnn_func() if elemental['denoising_prior'] == 'CNN' else None
  File "/home/uzair/.local/lib/python3.4/site-packages/sacred/config/captured_function.py", line 47, in captured_function
    result = wrapped(*args, **kwargs)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/experiment_ingredients.py", line 185, in init_cnn_func
    nn_deployer = Deployer(FLAGS)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/tf_solver.py", line 76, in __init__
    saver = tf.train.Saver()
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 1140, in __init__
    self.build()
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 1172, in build
    filename=self._filename)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 688, in build
    restore_sequentially, reshape)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/training/saver.py", line 247, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/ops/gen_io_ops.py", line 663, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/uzair/.local/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

DataLossError (see above for traceback): not an sstable (bad magic number)
  [[Node: save/RestoreV2_4 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_4/tensor_names, save/RestoreV2_4/shape_and_slices)]]
  [[Node: save/RestoreV2_19/_63 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_134_save/RestoreV2_19", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
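
A common cause of "not an sstable (bad magic number)" is that the bytes on disk are not a valid TensorFlow checkpoint at all, e.g. a truncated download or a Git LFS pointer stub sitting where the real model files should be (whether this repository distributes its models via LFS is an assumption here). A minimal diagnostic sketch, assuming TensorFlow 1.x and the standard V2 checkpoint layout:

    # Hedged diagnostic: check that the checkpoint files on disk are real
    # TF checkpoints rather than pointer stubs or truncated downloads.
    import os
    import tensorflow as tf

    ckpt = "models/DNCNN__gaussian_0.02__40-40-1__128/model.ckpt"

    # A V2 checkpoint consists of an .index file plus .data shard(s);
    # the shard name below assumes a single-shard save.
    for suffix in (".index", ".data-00000-of-00001"):
        path = ckpt + suffix
        with open(path, "rb") as f:
            head = f.read(48)
        print(path, os.path.getsize(path), "bytes, head:", head)
        # A Git LFS pointer stub is ~130 bytes of ASCII beginning with
        # "version https://git-lfs.github.com/spec/v1".

    # If the files look plausible, this lists the stored variables:
    reader = tf.train.NewCheckpointReader(ckpt)
    print(reader.get_variable_to_shape_map())

If the reader raises the same DataLossError, the files themselves are bad and re-downloading the pretrained models is the likely fix.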

module 'proximal' has no attribute 'patch_BM3D'

Hello, I have run into an issue. I am using Ubuntu 16.04 LTS and my GPU is a GeForce GTX 1060.
When I run the command "python3 src/experiment_demosaicking.py grid_search_all_images with elemental.denoising_prior=BM3D", I get the traceback "AttributeError: module 'proximal' has no attribute 'patch_BM3D'". I can find patch_NLM at "proximal/prox_fns/patch_NLM.py", but there is no "patch_BM3D.py" there. Is the file missing?
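
One way to narrow this down is to check which ProxImaL installation Python actually imports and which patch_* prox functions it exposes; that patch_BM3D exists only in a patched or forked ProxImaL rather than the stock package is an assumption here. A minimal sketch:

    # Hedged diagnostic: confirm which ProxImaL is imported and what it exports.
    import proximal

    print(proximal.__file__)                  # path of the imported package
    print(hasattr(proximal, "patch_BM3D"))    # False -> this install lacks it
    print([n for n in dir(proximal) if n.startswith("patch_")])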

Error when training the DNCNN

I tried to train the model by running the command

   python3 src/tf_solver.py --sigma_noise 0.02 --batch_size 128 --network DNCNN --channels 1 --pipeline bsds500 --device_name /gpu:0 --train_epochs 100

But I received an error:

Traceback (most recent call last):
  File "src/tf_solver.py", line 326, in <module>
    main()
  File "src/tf_solver.py", line 322, in main
    solver = Solver()
  File "src/tf_solver.py", line 120, in __init__
    pipelines = PIPELINES[opt.pipelines](opt, test_epochs=None)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/data.py", line 143, in __init__
    opt.train_epochs)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/data.py", line 239, in tf_data_pipeline
    min_after_dequeue=min_after_dequeue)
  File "/home/uzair/Denoising_Raeid/learn_prox_ops/src/utilities.py", line 291, in tf_shuffle_batch_join
    tensor_list_list, enqueue_many)
TypeError: _store_sparse_tensors_join() missing 1 required positional argument: 'keep_input'

How can I fix this? Thanks
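
The missing keep_input argument points to a TensorFlow API mismatch: tf_shuffle_batch_join in src/utilities.py appears to mirror TensorFlow's private queue-input helpers, and somewhere in the 1.x line _store_sparse_tensors_join gained a keep_input parameter. A hedged check against the installed TensorFlow (that this private module path is stable across versions is an assumption):

    # Hedged diagnostic: inspect the private helper's signature in the
    # installed TensorFlow to confirm the API mismatch.
    import inspect
    from tensorflow.python.training import input as tf_input

    print(inspect.signature(tf_input._store_sparse_tensors_join))
    # If this prints (tensor_list_list, enqueue_many, keep_input), the call
    # in src/utilities.py predates the added keep_input argument and would
    # need a third argument, e.g. a constant bool tensor tf.constant(True).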

Unreasonable results of optimal_BM3D_experiment_a

Hi, Tim,

I find the optimal BM3D setting yields extremely poor results.

Running
 python src/experiment_deblurring.py with experiment_name=experiment_a image_name=barbara elemental.optimal_DNCNN_experiment_a
produces good results:

INFO - main - Input PSNR: 22.757000
INFO - main - Final PSNR: 26.032000

However, switching from the DNCNN prior to the BM3D prior with
 python src/experiment_deblurring.py with experiment_name=experiment_a image_name=barbara elemental.optimal_BM3D_experiment_a
leads to extremely poor results:

INFO - main - Input PSNR: 22.757000
INFO - main - Final PSNR: 5.494800

With verbose=1, I find that the objective explodes already in iteration 1:

iter 1: ||r||_2 = 4199973199709416.500, eps_pri = 4199973199898.533, ||s||_2 = 27999821280051452.000, eps_dual = 61527351260.748, PSNR: 5.4948 dB
iter 2: ||r||_2 = 1238656270915858769903616.000, eps_pri = 1238656270938838138880.000, ||s||_2 = 8257708468100152428593152.000, eps_dual = 17468928722936238080.000, PSNR: 5.332 dB

I was wondering whether you could help me figure this out?
Thank you very much!
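
For context, those numbers are the solver's ADMM diagnostics: ||r||_2 and ||s||_2 are the primal and dual residual norms, compared against the tolerances eps_pri and eps_dual. An immediate blow-up like this can come from the prior rather than the solver, for example a denoiser whose output is on a different intensity scale than its input. A hedged sanity check; bm3d_denoise below is a hypothetical stand-in for however the BM3D prior is invoked here, not a function from this repository:

    # Hedged sanity check: verify the BM3D prior preserves the intensity
    # range when called on its own. `bm3d_denoise` is hypothetical.
    import numpy as np

    noisy = np.clip(0.5 + 0.02 * np.random.randn(64, 64), 0.0, 1.0).astype(np.float32)
    denoised = bm3d_denoise(noisy, sigma=0.02)  # hypothetical BM3D wrapper
    print("input range:  [%.3f, %.3f]" % (noisy.min(), noisy.max()))
    print("output range: [%.3f, %.3f]" % (denoised.min(), denoised.max()))
    # An output scaled to [0, 255] fed back into a [0, 1] iterate could
    # blow the residuals up by several orders of magnitude per iteration.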

No data/deblurring_grey file

I am trying to reproduce the experiments in Table 3 of your paper. When I run python src/experiment_deblurring.py with experiment_name=experiment_a image_name=barbara elemental.optimal_DNCNN_experiment_a, it fails with the error

File "/LearnProxOpt/src/data.py", line 75, in load_deblurring_grey_data
    experiment_data = experiments_data[experiment_name]
KeyError: 'experiment_a'

I found that the reason is the missing data/deblurring_grey file (see line 60 of ./src/data.py for more information).

Thank you!
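
For anyone hitting the same KeyError: experiments_data is evidently populated from data/deblurring_grey, so the lookup can only succeed once that data is in place. A minimal pre-flight check; that the path must be downloaded or generated separately rather than shipped with the repository is an assumption based on the traceback:

    # Hedged pre-flight check: confirm the expected data path exists before
    # launching the experiment. The layout is inferred from the traceback.
    import os

    path = "data/deblurring_grey"
    if not os.path.exists(path):
        raise SystemExit("Missing %r - prepare/download the data first." % path)
    print("Found", path, "(directory)" if os.path.isdir(path) else "(file)")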
