
noise2noise's Issues

L0 Loss?

Thanks so much for your implementation.

However, I noticed that there is no annealed L0 loss implemented in the current version. Do you plan to add it in the future? In particular, L0 showed much better performance on some of the tasks in the original paper.
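
For reference, here is a minimal sketch (my own reading of the paper, not code from this repository) of what an annealed L0-style loss could look like in Keras: the loss is (|y_pred - y_true| + eps)^gamma, with gamma annealed from 2 toward 0 during training by a callback, similar in spirit to the UpdateAnnealingParameter callback mentioned in another issue below. The names here are illustrative.

from keras import backend as K
from keras.callbacks import Callback

gamma = K.variable(2.0)  # shared annealing parameter; gamma = 2 behaves like an L2-style loss

def annealed_l0_loss(y_true, y_pred):
    # The small epsilon keeps the gradient finite when the error is exactly zero.
    return K.mean(K.pow(K.abs(y_pred - y_true) + 1e-8, gamma))

class AnnealGamma(Callback):
    """Linearly anneal gamma from 2 toward 0 over nb_epochs epochs."""
    def __init__(self, nb_epochs):
        super().__init__()
        self.nb_epochs = nb_epochs

    def on_epoch_begin(self, epoch, logs=None):
        K.set_value(gamma, 2.0 * (self.nb_epochs - epoch) / self.nb_epochs)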

I can't use other images for training

Hi, I am trying to train with loss = l0 using my own images instead of image_dir: 291 and test_dir: Set14. Then the following message appears and training does not proceed.

Epoch 00001: UpdateAnnealingParameter reducing gamma to 2.0.

I don't know what is going on.
How can I get my training to succeed?

Computing Resources

Could you give an idea of what computing resources you used to train your models and how long training took (along with how many epochs you trained for)?

Windows error when training (EOFError: Ran out of input)

Hello yu4u,
Thank you for your code.
I ran into a problem when trying to train the model under Windows.
Does anyone have the same error, or can anyone help me?

windows 10, python 3.6.5, tensorflow 1.8.0, keras 2.2.0, cv2 3.4.1, numpy 1.14.5

EOFError: Ran out of input

Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Anaconda\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Anaconda\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda\lib\site-packages\keras\utils\data_utils.py", line 548, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\Anaconda\lib\site-packages\keras\utils\data_utils.py", line 522, in
initargs=(seqs,))
File "C:\Anaconda\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "C:\Anaconda\lib\multiprocessing\pool.py", line 174, in __init__
self._repopulate_pool()
File "C:\Anaconda\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "C:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_noise_model.<locals>.gaussian_noise'

C:\Anaconda\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Anaconda\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

I want to use my own noisy data to train the model

Thanks for sharing! I have a question about which part of the code I should modify if I have my own noisy pictures, because the training code adds noise to clean images. I want to train the model using my own noisy images, which contain real noise. I hope to get your answer; I would be very grateful.

Cannot generate the weight file when using my own training images

Thank you for sharing.

My train command:

python train.py --image_dir dataset/train --test_dir dataset/test --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model text,0,50 --target_noise_model text,0,50 --val_noise_model text,25,25 --loss mae --output_path text_noise
When I use your training images it works, but the result is not very good. So I tried my own training images (all files are JPG, about 100KB-200KB each), but it stops at this point (screenshot attached).

Thank you

> @HongChow
> training data: https://cv.snu.ac.kr/research/VDSR/train_data.zip
> keras=2.1.6 and tensorflow-gpu==1.5.0
> but the loss is nan or inf, such as:
> 1/1000 [..............................] - ETA: 48:13 - loss: 33339.6055 - PSNR: 5.4144
> 3/1000 [..............................] - ETA: 16:30 - loss: nan - PSNR: 4.5721
> 5/1000 [..............................] - ETA: 10:10 - loss: nan - PSNR: 4.0657
> 7/1000 [..............................] - ETA: 7:27 - loss: nan - PSNR: 3.8570
> 9/1000 [..............................] - ETA: 5:56 - loss: nan - PSNR: 3.9201
> 11/1000 [..............................] - ETA: 4:58 - loss: nan - PSNR: 3.9512
> @yu4u could you please help with this?

I also encountered a similar problem; have you found a solution?

@tangrc Hi, it seems that upgrading to the latest pytorch fixes this problem.

Originally posted by @HongChow in #60 (comment)

I have a GeForce GTX 1050 GPU and 32GB of RAM but still get an OOM issue

199/200 [============================>.] - ETA: 0s - loss: 17700.6033 - PSNR: 10.1373
Traceback (most recent call last):
File "train.py", line 112, in <module>
main()
File "train.py", line 106, in main
callbacks=callbacks)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1861, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1141, in fit
return_dict=True)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1389, in evaluate
tmp_logs = self.test_function(iterator)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
*args, **kwds))
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().wrapped(*args, **kwds)
File "/usr/local/lib64/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function  *
    return step_function(self, iterator)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:1217 run_step  **
    outputs = model.test_step(data)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/training.py:1183 test_step
    y_pred = self(x, training=False)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
    input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib64/python3.6/site-packages/tensorflow/python/keras/engine/input_spec.py:207 assert_input_compatibility
    ' input tensors. Inputs received: ' + str(inputs))

ValueError: Layer model expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=uint8>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None, None, None) dtype=uint8>]

2165-01-12 17:11:32.737853: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]

loss nan or inf

training data : https://cv.snu.ac.kr/research/VDSR/train_data.zip
keras=2.1.6 and tensorflow-gpu==1.5.0
but the loss is nan or inf, such as:
1/1000 [..............................] - ETA: 48:13 - loss: 33339.6055 - PSNR: 5.4144
3/1000 [..............................] - ETA: 16:30 - loss: nan - PSNR: 4.5721
5/1000 [..............................] - ETA: 10:10 - loss: nan - PSNR: 4.0657
7/1000 [..............................] - ETA: 7:27 - loss: nan - PSNR: 3.8570
9/1000 [..............................] - ETA: 5:56 - loss: nan - PSNR: 3.9201
11/1000 [..............................] - ETA: 4:58 - loss: nan - PSNR: 3.9512
@yu4u could you please help with this?

Training data pairs

In the original paper, they described that the network is trained using independently corrupted input and target pairs.

In addition, are the example training pairs in Figures 3 and 4 of the original paper meant to be read with each row forming a training pair, or with each column forming a training pair?

However, the implementation below always uses the corrupted images of the same source image.

Is it possible to use training pairs (x, y) where x and y are corrupted images of different source images? To be clear, x is a corrupted image of source image 1 while y is a corrupted image of source image 2.

x[sample_id] = self.source_noise_model(clean_patch)

y[sample_id] = self.target_noise_model(clean_patch)
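
For context, a minimal sketch of what the pairing in this implementation amounts to (the function name is illustrative, not the repository's exact code): the input x and the target y are two independent corruptions of the same clean patch, which is the setting the paper's argument assumes.

def make_training_pair(clean_patch, source_noise_model, target_noise_model):
    # Two independent corruptions of the SAME clean patch:
    # x becomes the network input, y the regression target.
    x = source_noise_model(clean_patch.copy())
    y = target_noise_model(clean_patch.copy())
    return x, y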

There's some problem in the training process

/usr/lib/python3.7/queue.py in get(self, block, timeout)
169 while not self._qsize():
--> 170 self.not_empty.wait()
171 elif timeout < 0:

/usr/lib/python3.7/threading.py in wait(self, timeout)
295 if timeout is None:
--> 296 waiter.acquire()
297 gotit = True

KeyboardInterrupt:

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
/content/drive/MyDrive/n2n/train_US.py in <module>()
131
132 if __name__ == '__main__':
--> 133 main()
134

/content/drive/MyDrive/n2n/train_US.py in main()
125 validation_data=val_generator,
126 verbose=1,
--> 127 callbacks=callbacks)
128
129 np.savez(str(output_path.joinpath("history.npz")), history=hist.history)

/tensorflow-1.15.2/python3.7/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/tensorflow-1.15.2/python3.7/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1730 use_multiprocessing=use_multiprocessing,
1731 shuffle=shuffle,
-> 1732 initial_epoch=initial_epoch)
1733
1734 @interfaces.legacy_generator_methods_support

/tensorflow-1.15.2/python3.7/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
183 batch_index = 0
184 while steps_done < steps_per_epoch:
--> 185 generator_output = next(output_generator)
186
187 if not hasattr(generator_output, '__len__'):

/tensorflow-1.15.2/python3.7/keras/utils/data_utils.py in get(self)
623 except Exception:
624 self.stop()
--> 625 six.reraise(*sys.exc_info())
626
627

/usr/local/lib/python3.7/dist-packages/six.py in reraise(tp, value, tb)
701 if value.__traceback__ is not tb:
702 raise value.with_traceback(tb)
--> 703 raise value
704 finally:
705 value = None

/tensorflow-1.15.2/python3.7/keras/utils/data_utils.py in get(self)
617 inputs = self.sequence[idx]
618 finally:
--> 619 self.queue.task_done()
620
621 if inputs is not None:

/usr/lib/python3.7/queue.py in task_done(self)
72 if unfinished <= 0:
73 if unfinished < 0:
---> 74 raise ValueError('task_done() called too many times')
75 self.all_tasks_done.notify_all()
76 self.unfinished_tasks = unfinished

ValueError: task_done() called too many times

Rationale for picking the loss functions

Is there some reasoning you can share about why you used a different loss function for each type of noise generation?
gaussian : mse
text : mae
impulse random : L0

How do I use the trained model file to remove noise from my photos?

I used this command:
!python3 test_model.py --weight_file clean.hdf5 --image_dir cleanimage --output_dir Resdir
but it does not export an image at the original size.

It generates three image combinations.

What should I do to export the noise-reduced image and keep the original size?
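
A hedged sketch of one way to save only the denoised image at its original resolution. This is my own adaptation rather than the repository's test_model.py; it assumes model.py provides get_model() as used by the training script, that "srresnet" is a valid architecture name, and that the network is fully convolutional, so the input only needs padding to a multiple of 16. The file paths are placeholders.

import cv2
import numpy as np
from model import get_model  # assumption: same helper used by train.py / test_model.py

model = get_model("srresnet")            # assumption: default architecture name
model.load_weights("clean.hdf5")

image = cv2.imread("noisy_photo.png")    # placeholder input path
h, w, _ = image.shape
pad_h, pad_w = (16 - h % 16) % 16, (16 - w % 16) % 16
padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")  # pad to a multiple of 16

pred = model.predict(np.expand_dims(padded, 0))[0]
denoised = np.clip(pred, 0, 255).astype(np.uint8)[:h, :w]  # crop back to the original size
cv2.imwrite("denoised.png", denoised)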

Adding poisson noise model

The paper also describes training with a Poisson noise model, but it is a bit unclear how this can be achieved.
I am following these instructions:
https://stackoverflow.com/questions/19289470/adding-poisson-noise-to-an-image#
This adds noise, but it also changes the intensity of the image. Furthermore, although the paper says that Poisson noise is signal dependent, they add noise with a fixed lambda = 30. To achieve this I divide the image (0-255) by 255, multiply it by 30, sample from a Poisson distribution, and add the result to my image. Am I going in the wrong direction?
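
For what it's worth, one common way to add signal-dependent Poisson noise (a sketch of the usual approach, not necessarily exactly what the paper's authors did) is to treat the scaled pixel values as expected photon counts, sample from a Poisson distribution with those means, and scale back. The Poisson sample replaces the pixel value rather than being added on top of it, which avoids the intensity shift described above.

import numpy as np

def add_poisson_noise(image, lam=30.0):
    # image: uint8 array in [0, 255]; smaller lam -> fewer "photons" -> stronger noise
    expected_counts = image.astype(np.float64) / 255.0 * lam
    noisy = np.random.poisson(expected_counts) / lam * 255.0  # sample counts, then rescale
    return np.clip(noisy, 0, 255).astype(np.uint8)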

denoising text noise

hello,

I tried to clean up an image corrupted with text, but the result is far from the example presented. Where could such a difference come from? Is it the fact that the fonts and colours used (especially black) are not the same as those used in training?

thank you for sharing your work, regards, lacsaP.

$ python test_model.py --image_dir images/test/ --weight_file models/weights.057-4.796-27.68533_text_noise.hdf5 --test_noise_model clean

(result image attached)

$ python test_model.py --image_dir images/test/ --weight_file models/weights.056-4.172-28.07752_text_clean.hdf5 --test_noise_model clean

(result image attached)

Ask about image_size

Hi, I have a question.
What is image_size? My understanding is that it is the length and width of the picture. When I train on my own pictures I get an error, but I get no errors with the datasets you provide. The images in the paper are square; I am using rectangular images whose length and width differ. My largest picture is 2338*1653. I want to know whether the image size will have an impact on training. Thanks a lot.
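
For what it's worth, a hedged sketch of what image_size appears to control here, based on the patch-based generator referenced in the training-pairs issue above (the clean_patch code): square image_size x image_size patches are cropped at random from each training image, so rectangular images should be fine as long as both sides are at least image_size. The helper below is illustrative, not the repository's exact code.

import numpy as np

def random_patch(image, image_size=128):
    # Crop one random square patch of side image_size from a (possibly rectangular) image.
    h, w, _ = image.shape
    assert h >= image_size and w >= image_size, "image smaller than image_size"
    i = np.random.randint(0, h - image_size + 1)
    j = np.random.randint(0, w - image_size + 1)
    return image[i:i + image_size, j:j + image_size]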

Deleting dataset causes error (Google Colab)

Hi, I was training my AI on Google Colab. I have some images, a clean image and a dirty image, and I was curious about replacing my dataset with new images. Before I replaced the files everything worked fine, but after I replaced them with the new dataset this problem came up:

Traceback (most recent call last):
  File "noise2noise/test_model.py", line 70, in <module>
    main()
  File "noise2noise/test_model.py", line 47, in main
    h, w, _ = image.shape
AttributeError: 'NoneType' object has no attribute 'shape'

I'm not disappointed with the result, and the problem was easy enough to fix myself (resetting the runtime made it go away), but any clues as to the cause?

Out of memory

Any recommendation to handle OOM problems?

I'm using tensorflow with GPU (GTX 1060 6GB) and it dies fairly quickly with higher resolutions. I know in some cases dividing the images into slices and stitching them back together works, but I don't know if in this case that's an option.
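
Slicing should be an option for a fully convolutional denoiser like this one, since each tile can be processed independently. A hedged sketch follows (the tile size and the lack of overlap are arbitrary choices, so visible seams are possible, and edge tiles may still need padding to the network's stride):

import numpy as np

def denoise_tiled(model, image, tile=512):
    # Denoise a large image tile by tile to keep GPU memory bounded.
    h, w, _ = image.shape
    output = np.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            pred = model.predict(np.expand_dims(patch, 0))[0]
            output[y:y + tile, x:x + tile] = np.clip(pred, 0, 255).astype(image.dtype)
    return output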

Is validation loss reasonable?

Hi, I am wondering whether using the validation loss as a stopping criterion is reasonable if we assume that we only have noisy training data. Or is it not used? I just commented out the line with the validation data, and then I got this error message:

Traceback (most recent call last):
File "train_speech.py", line 105, in <module>
main()
File "train_speech.py", line 100, in main
callbacks=callbacks)
File "/net/home/Dokumente/PA/virtualenv/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/net/home/Dokumente/PA/virtualenv/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/net/home/Dokumente/PA/virtualenv/lib/python3.6/site-packages/keras/engine/training_generator.py", line 251, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "/net/home/Dokumente/PA/virtualenv/lib/python3.6/site-packages/keras/callbacks.py", line 79, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/net/home/Dokumente/PA/virtualenv/lib/python3.6/site-packages/keras/callbacks.py", line 429, in on_epoch_end
filepath = self.filepath.format(epoch=epoch + 1, **logs)
KeyError: 'val_loss'

Don't be confused by the different file names; I rewrote the code for speech, but with validation data my version works.
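
The KeyError comes from the checkpoint callback: its filename template (and monitor) reference val_loss, which no longer exists once the validation data is removed. A hedged sketch of a checkpoint callback that depends only on training metrics (the filename pattern is illustrative, not the repository's exact one; it assumes a metric named PSNR is compiled, as in this repo):

from keras.callbacks import ModelCheckpoint

callbacks = [
    ModelCheckpoint("weights.{epoch:03d}-{loss:.3f}-{PSNR:.5f}.hdf5",
                    monitor="loss",        # instead of "val_loss"
                    verbose=1,
                    save_best_only=True)
]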

How do I get this to work and can it work for what I want it to?

Basically, I have some images that have artifacts and noise; can this software denoise them and remove some of the artifacts?

Also, how do I get this to work? I downloaded it but I don't know what commands to use. I downloaded the Gaussian "clean" model from the README and tried to put together a command, but it doesn't really work:
python test_model.py --weight_file clean.hdf5 --image_dir test.png
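
A hedged note: in the other test commands in these issues, --image_dir points to a folder of images rather than a single file, so a likely fix is to put test.png into a folder and pass that folder (the folder names below are placeholders):

python test_model.py --weight_file clean.hdf5 --image_dir my_test_images --output_dir my_results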

Test does not work

Hi!

I started my testing with this command:

python3 test_model.py --weight_file gaussian/weights.045-72.507-30.07326.hdf5 --image_dir dataset/Set14

But I get this error message:
QObject::moveToThread: Current thread (0x5588fcb685f0) is not the object's thread (0x5588fda7b000).
Cannot move to target thread (0x5588fcb685f0)

Speicherzugriffsfehler (Speicherabzug geschrieben) [in English: segmentation fault (core dumped)]

AttributeError: Can't pickle local object

Hello,

In case anybody else is trying this repo on a Windows machine, you're likely to get errors along the lines of AttributeError: Can't pickle local object. I don't have much Python experience, and I tried a variety of the suggested solutions via Google, but in the end the solution was as simple as switching from multiprocessing to multithreading on Windows (due to its different process-spawning model). You can do this simply by setting use_multiprocessing=False on line 95 in train.py.
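
For reference, a hedged sketch of that change in the fit_generator call (the surrounding arguments mirror what train.py passes, but the variable names here are just placeholders):

hist = model.fit_generator(generator=train_generator,        # placeholder names
                           steps_per_epoch=steps_per_epoch,
                           epochs=nb_epochs,
                           validation_data=val_generator,
                           callbacks=callbacks,
                           workers=4,
                           use_multiprocessing=False,  # threads instead of processes; avoids pickling the local noise-model closure
                           verbose=1)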

Yusuke, thank you so much for sharing this wonderful repo. I'm a computer graphics programmer by trade, and I've been meaning to get my feet wet with machine learning for a while now, and your repo was the perfect gentle introduction, with applications directly to my job. Much appreciated!

Thanks,

Jaap Suter

Can't pickle local object runtime error

Hi,

I am trying to run your training script and am getting the following errors. Any ideas?

Thanks

Paul

D:\src\NoiseToNoise>python train.py --image_dir dataset/291 --test_dir dataset/Set14 --image_size 128 --batch_size 8 --lr 0.001 --output_path gaussian
Using TensorFlow backend.
2018-08-14 17:24:32.123287: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

Epoch 1/60
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Python36\lib\site-packages\keras\utils\data_utils.py", line 548, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\Python36\lib\site-packages\keras\utils\data_utils.py", line 522, in
initargs=(seqs,))
File "C:\Python36\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "C:\Python36\lib\multiprocessing\pool.py", line 174, in __init__
self._repopulate_pool()
File "C:\Python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "C:\Python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Python36\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_noise_model.<locals>.gaussian_noise'
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Python36\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "C:\Python36\lib\site-packages\keras\utils\data_utils.py", line 548, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "C:\Python36\lib\site-packages\keras\utils\data_utils.py", line 522, in
initargs=(seqs,))
File "C:\Python36\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "C:\Python36\lib\multiprocessing\pool.py", line 174, in __init__
self._repopulate_pool()
File "C:\Python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "C:\Python36\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Python36\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Python36\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_noise_model.<locals>.gaussian_noise'

run error

Excuse me, when I run your program it reports an error. I don't know what the problem is. Could you take a look? Thank you.

File "F:/noise2noise/noise2noise-master/train.py", line 115, in <module>
main()
File "F:/noise2noise/noise2noise-master/train.py", line 109, in main
callbacks=callbacks)
File "D:\Python\env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1433, in fit_generator
steps_name='steps_per_epoch')
File "D:\Python\env\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 322, in model_iteration
steps_name='validation_steps')
File "D:\Python\env\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 220, in model_iteration
batch_data = _get_next_batch(generator, mode)
File "D:\Python\env\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 372, in _get_next_batch
'or (x, y). Found: ' + str(generator_output))
ValueError: Output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: [array([[[[ 26, 11, 0],
[ 25, 0, 0],
[ 9, 9, 13],
...,
[ 30, 0, 23],
[ 0, 50, 16],
[ 0, 0, 41]],

    [[  7, 111,  48],
     [ 29, 109, 154],
     [ 20,  55,  50],
     ...,
     [ 61, 132, 202],
     [ 17, 174, 177],
     [114, 184, 172]],

    [[ 56, 142,  82],
     [ 60, 152, 141],
     [  0,  64,  45],
     ...,
     [ 44,  61,  97],
     [ 97,  88,   0],
     [  0,  50,  79]],

    ...,

    [[ 87,  80, 113],
     [ 80,  96,  78],
     [158,  79,  91],
     ...,
     [ 55,  72,  80],
     [ 40,  92,  59],
     [ 78,  85,  47]],

    [[ 38,  92,  24],
     [ 73, 108,  85],
     [ 58,  81,  27],
     ...,
     [  8,  29,  51],
     [ 51,  21,  24],
     [  0,  82,  57]],

    [[ 25,  14,   0],
     [ 26,   0,   0],
     [  3,   0,  54],
     ...,
     [ 18,  46,  41],
     [ 22,   0,  40],
     [  7,   0,   0]]]], dtype=uint8), array([[[[  6,   6,   6],
     [  6,   6,   6],
     [  6,   6,   6],
     ...,
     [  6,   6,   6],
     [  6,   6,   6],
     [  6,   6,   6]],

    [[ 25, 104,  84],
     [ 34, 150, 156],
     [ 22,  50,  76],
     ...,
     [ 34, 150, 156],
     [ 52, 168, 176],
     [ 83, 167, 149]],

    [[ 25, 104,  84],
     [ 23, 107, 143],
     [ 22,  50,  76],
     ...,
     [ 50, 109, 111],
     [ 49,  91,  46],
     [ 25,  75,  79]],

    ...,

    [[ 87, 103,  87],
     [ 87, 103,  87],
     [109, 111,  82],
     ...,
     [ 38,  41,  54],
     [ 39,  56,  55],
     [ 75,  54,  46]],

    [[ 80,  81,  51],
     [ 87,  88,  72],
     [ 81, 110,  79],
     ...,
     [ 25,  55,  55],
     [ 38,  41,  54],
     [ 25,  55,  55]],

    [[  6,   6,   6],
     [  6,   6,   6],
     [  6,   6,   6],
     ...,
     [  6,   6,   6],
     [  6,   6,   6],
     [  6,   6,   6]]]], dtype=uint8)]

Target Images

Hi, I'm a bit confused: what is the purpose of the target images? Are the target images the same as the source images?

AttributeError: module 'tensorflow' has no attribute 'log'

Hello, may I ask how to deal with this traceback error? Thanks.

I use Ubuntu 18.04, with GPU GeForce RTX 2080 with Max-Q.

Traceback (most recent call last):
File "train.py", line 113, in <module>
main()
File "train.py", line 87, in main
model.compile(optimizer=opt, loss=loss_type, metrics=[PSNR])
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 222, in compile
masks=masks)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 871, in _handle_metrics
self._per_output_metrics[i], target, output, output_mask)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 842, in _handle_per_output_metrics
metric_fn, y_true, y_pred, weights=weights, mask=mask)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py", line 1033, in call_metric_function
update_ops = metric_fn.update_state(y_true, y_pred, sample_weight=weights)
File "/usr/local/lib/python3.6/dist-packages/keras/utils/metrics_utils.py", line 42, in decorated
update_op = update_state_fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/metrics.py", line 318, in update_state
matches = self._fn(y_true, y_pred, **self._fn_kwargs)
File "/home/x/文档/GitHub/noise2noise/model.py", line 45, in PSNR
return 10.0 * tf_log10((max_pixel ** 2) / (K.mean(K.square(y_pred - y_true))))
File "/home/x/文档/GitHub/noise2noise/model.py", line 37, in tf_log10
numerator = tf.log(x)
AttributeError: module 'tensorflow' has no attribute 'log'
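
A hedged fix: tf.log was removed from the top-level namespace in TensorFlow 2.x (it lives at tf.math.log in recent releases), so model.py's tf_log10 helper can be written against the newer name, or TensorFlow can be pinned to a 1.x version. A sketch of the former:

import tensorflow as tf

def tf_log10(x):
    # tf.math.log is available in recent 1.x releases and in 2.x; tf.log exists only in 1.x.
    numerator = tf.math.log(x)
    denominator = tf.math.log(tf.constant(10.0, dtype=numerator.dtype))
    return numerator / denominator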

16bit grayscale image

Hi,
I am trying to modify the code so it can read 16-bit grayscale TIFF images.

First I read the images directly, but I found that cv2.imread(str(image_path)) reads them correctly yet casts them to 8-bit depth. So I changed cv2.imread(str(image_path)) to cv2.imread(str(image_path), cv2.IMREAD_ANYDEPTH).

With this option image.shape becomes two-dimensional, so I modified the unpacking to
h, w = image.shape
_ = 1 # I know my images have 1 channel

but I get stuck on the next problem:
"StopIteration: could not broadcast input array from shape (128,128) into shape (128,128,3)"

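The broadcast error happens because the generator allocates (image_size, image_size, 3) batches while the single-channel patches are (image_size, image_size). A hedged sketch of the single-channel handling (variable names follow the description above; the model's input/output channel count would also have to be changed from 3 to 1):

import cv2
import numpy as np

image = cv2.imread(str(image_path), cv2.IMREAD_ANYDEPTH)  # (h, w), uint16 for 16-bit TIFFs
image = image[..., np.newaxis]                             # keep an explicit channel axis: (h, w, 1)
h, w, _ = image.shape                                      # the original unpacking works again

# ...and in the generator, allocate batches with a single channel so the
# patches broadcast correctly, e.g.:
# x = np.zeros((batch_size, image_size, image_size, 1), dtype=np.uint16)
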
Hi! Can any kind of picture fit your net?

Thank you for your wonderful code! I want to test it on another dataset of about 3000+ pictures; do I need to do some data augmentation? I also resized all the pictures to 450*450 pixels. Is that suitable?

Correct my understanding if I'm wrong, but why do I feel that this is very low-quality research?

Hi,

To be fair it's not really an issue with your code, but why do I feel this is very low quality research?? They are essentially saying that if you don't have the clean photo for training but have different noisy versions BASED ON the clean photo, you can get similar results. Is that not common sense??? To me, first of all:

  1. They basically discovered the power of averaging, big whoop.
  2. In how many real-world scenarios is this actually useful?? When will you have a number of different noisy photos that happen to be based on the same CLEAN photo, just with different distributions of noise overlaid on top of it?
  3. If you take a step back and use common sense, the noisy images must be generated from the SAME clean photo. So of course the information in the clean photo is already in there!! You've just artificially degraded the information on purpose; wtf is the point of that in the first place?

As for the claim that the intention is to train for images specific to the noise of the device: fine, that is definitely doable, but exactly how much value does that add?
And actually I even doubt whether it's practical, because you need to take multiple images of the exact same "ground truth" for it to work; you must have a seriously crappy device if that is your intention.

I know this might not be the right place to share my comments but I was seriously disappointed in this piece of research and I don't know if anyone else feels the same or if I actually missed out something important.

resume training

hi @yu4u,
thank you for your great work.
Could you please explain how to resume my training from where I stopped it (if possible)?
Also, how can I use the network to clean up my noisy photos?
thank you
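
On resuming, a hedged sketch (it assumes model.py's get_model() and an "srresnet" architecture name, and the checkpoint path is a placeholder): rebuild the model the same way train.py does, load the last saved checkpoint, and continue training, passing initial_epoch so the schedule picks up where it stopped. For cleaning your own photos, test_model.py with --test_noise_model clean (see the other issues) is the usual route.

from model import get_model  # assumption: same helper train.py uses

model = get_model("srresnet")                                     # assumption: architecture name
model.load_weights("gaussian/weights.045-72.507-30.07326.hdf5")   # placeholder: last checkpoint you saved
# then compile and call fit_generator exactly as train.py does, adding
# initial_epoch=45 so training resumes from epoch 46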

Small number of training images

Please correct me if I'm wrong. I can see that the training dataset consists of only 291 images. Isn't that extremely small? The original paper uses 50k images of size 256x256. I don't think that 291 images are enough to achieve the desired result.

P.S. How much time does it take to train the network for Gaussian noise?

Tensorflow error

Trying to get this to work. Please help.

Using TensorFlow backend.

2019-10-13 01:35:21.059688: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-10-13 01:35:21.063914: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From C:\Users\jlege\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2019-10-13 01:35:22.330879: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-10-13 01:35:22.424836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.4805
pciBusID: 0000:08:00.0
2019-10-13 01:35:22.430364: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-10-13 01:35:22.434807: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found
2019-10-13 01:35:22.439268: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cufft64_100.dll'; dlerror: cufft64_100.dll not found
2019-10-13 01:35:22.443703: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'curand64_100.dll'; dlerror: curand64_100.dll not found
2019-10-13 01:35:22.448700: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusolver64_100.dll'; dlerror: cusolver64_100.dll not found
2019-10-13 01:35:22.453224: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cusparse64_100.dll'; dlerror: cusparse64_100.dll not found
2019-10-13 01:35:22.457750: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found
2019-10-13 01:35:22.462243: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2019-10-13 01:35:22.471335: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-10-13 01:35:22.478832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-13 01:35:22.482639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]
WARNING:tensorflow:From C:\Users\jlege\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\backend\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Press any key to continue . . .

How do I clean document scans with this?

I have document scans to be cleaned. How do I go about training a new model for them? I can also get clean counterparts of the noisy scans to form noisy-clean pairs, but I don't know how to initiate training with them. And how many would I need for training to get significant results?
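
As a hedged starting point that reuses the argument names from the training command shown in an earlier issue (the directory names below are placeholders): if you generate the "noise" synthetically (e.g. the text noise model as a rough stand-in for scan artifacts) you can keep the target clean and train a noisy-to-clean model. Using your real noisy/clean scan pairs directly would instead require changing the data generator, as discussed in the "I want to use my own noisy data" issue above.

python train.py --image_dir scans_clean --test_dir scans_val --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model text,0,50 --target_noise_model clean --val_noise_model text,25,25 --loss mae --output_path document_model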

How should I denoise an already noisy image?

First of all, thank you for your contribution. I have a few questions about how your project works. When testing, I pass in an image, Gaussian blurring is applied, and the image obtained after the Gaussian blurring is then denoised. How should I denoise an image that is already noisy?
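
One likely option, reusing the command-line form shown in other issues here (directory names below are placeholders): pass your already-noisy images to test_model.py with --test_noise_model clean, so that no additional synthetic noise is applied before the network denoises them.

python test_model.py --image_dir my_noisy_images --weight_file clean.hdf5 --test_noise_model clean --output_dir my_results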

remove other watermarking

Hello, can this only remove the watermarks generated by the model? If I find a watermarked image with text on the Internet, the watermark cannot be removed. (example images attached)

EEE students, Beginners in the field need assistance

Hello,
We are Bachelor's degree students and would like to use your repository for our graduation project.
In general we would appreciate any help we can get.

For the first steps we are trying to test out the weights you have stored in the hdf5 files, and we are probably making a ton of mistakes.
If you have the time, we would like to chat and get in contact via email: [email protected]
We would like to share with you how we tried to load the model, or see how you recommend doing it,
and then run some images with Gaussian noise through the model.

We will appreciate any assistance!
Thank you

