import numpy as np
from PIL import Image

# rdn is the RDN model and files is the list of input images, both set up earlier
rdn.model.load_weights('image-super-resolution/weights/sample_weights/rdn-C6-D20-G64-G064-x2/ArtefactCancelling/rdn-C6-D20-G64-G064-x2_ArtefactCancelling_epoch219.hdf5')
for f in files:
    img = Image.open(f)
    lr_img = np.array(img)
    sr_img = rdn.predict(lr_img)
    hr_img = Image.fromarray(sr_img)
    hr_img.save("model_" + f)
    print(f + " is done")
2019-07-15 10:54:26.136079: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-07-15 10:54:26.149052: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2019-07-15 10:54:26.149107: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:163] retrieving CUDA diagnostic information for host: host_tower
2019-07-15 10:54:26.149118: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:170] hostname: cledl2-Precision-7820-Tower
2019-07-15 10:54:26.149170: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:194] libcuda reported version is: 390.116.0
2019-07-15 10:54:26.149202: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:198] kernel reported version is: 390.116.0
2019-07-15 10:54:26.149212: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:305] kernel version seems to match DSO: 390.116.0
2019-07-15 10:58:10.703357: W tensorflow/core/framework/allocator.cc:122] Allocation of 5865369600 exceeds 10% of system memory.
b320_150dpi_25-58.jpg is done
2019-07-15 11:00:52.459730: W tensorflow/core/framework/allocator.cc:122] Allocation of 4101580800 exceeds 10% of system memory.
b320_150dpi_25-64.jpg is done
2019-07-15 11:03:38.891812: W tensorflow/core/framework/allocator.cc:122] Allocation of 4270080000 exceeds 10% of system memory.
b320_150dpi_25-10.jpg is done
2019-07-15 11:06:22.367807: W tensorflow/core/framework/allocator.cc:122] Allocation of 4185067520 exceeds 10% of system memory.
b320_150dpi_25-35.jpg is done
2019-07-15 11:10:14.411686: W tensorflow/core/framework/allocator.cc:122] Allocation of 6017474560 exceeds 10% of system memory.
b210_150dpi_91.jpg is done
b209_150dpi_15.jpg is done
b320_150dpi_25-12.jpg is done
2019-07-15 11:26:49.040817: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at concat_op.cc:153 : Resource exhausted: OOM when allocating tensor with shape[1,2368,1386,1280] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
Traceback (most recent call last):
  File "inference.py", line 38, in <module>
    sr_img = rdn.predict(lr_img)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/ISR-2.0.5-py3.5.egg/ISR/models/imagemodel.py", line 21, in predict
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/engine/training.py", line 1169, in predict
    steps=steps)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/engine/training_arrays.py", line 294, in predict_loop
    batch_outs = f(ins_batch)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/Keras-2.2.4-py3.5.egg/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/user/Documents/experiment/SIP_env/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1,2368,1386,1280] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
  [[{{node LRLs_Concat/concat}} = ConcatV2[N=20, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](LRL_1/add, LRL_2/add, LRL_3/add, LRL_4/add, LRL_5/add, LRL_6/add, LRL_7/add, LRL_8/add, LRL_9/add, LRL_10/add, LRL_11/add, LRL_12/add, LRL_13/add, LRL_14/add, LRL_15/add, LRL_16/add, LRL_17/add, LRL_18/add, LRL_19/add, LRL_20/add, RDB_Concat_20_1/concat/axis)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Even more often, the process is simply killed by the OS rather than raising this error.
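The OOM comes from running the full image through the network at once: the intermediate concat tensor (shape [1, 2368, 1386, 1280] in float32) alone is ~16 GB. One workaround is to predict on overlapping tiles and stitch the results, so peak memory scales with the tile size instead of the image size. Below is a minimal sketch; the `predict_in_tiles` helper, the tile/overlap sizes, and the hard-coded ×2 `SCALE` (matching the x2 weights above) are my own assumptions, not part of the ISR API:

```python
import numpy as np

SCALE = 2  # the rdn-...-x2 weights upscale by a factor of 2

def predict_in_tiles(predict_fn, lr_img, tile=256, overlap=8):
    """Run predict_fn over overlapping tiles and stitch the outputs.

    predict_fn maps an (h, w, c) uint8 array to an
    (h * SCALE, w * SCALE, c) uint8 array (e.g. rdn.predict).
    The overlap gives each tile some context so seams are less visible.
    """
    h, w, c = lr_img.shape
    out = np.zeros((h * SCALE, w * SCALE, c), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # extend the tile by `overlap` pixels on each side, clipped to the image
            y0, x0 = max(y - overlap, 0), max(x - overlap, 0)
            y1, x1 = min(y + tile + overlap, h), min(x + tile + overlap, w)
            sr = predict_fn(lr_img[y0:y1, x0:x1])
            # crop the overlap back off and place only the core region
            cy, cx = (y - y0) * SCALE, (x - x0) * SCALE
            ch = (min(y + tile, h) - y) * SCALE
            cw = (min(x + tile, w) - x) * SCALE
            out[y * SCALE:y * SCALE + ch,
                x * SCALE:x * SCALE + cw] = sr[cy:cy + ch, cx:cx + cw]
    return out
```

With this, the loop body becomes `sr_img = predict_in_tiles(rdn.predict, lr_img)` instead of calling `rdn.predict` on the whole array.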