
handwriting-ocr's People

Contributors

aminsama avatar androbin avatar breta01 avatar jpahm avatar kascesar avatar

handwriting-ocr's Issues

Error while running the OCR.ipynb

When I run the first cell of the notebook, I get the following error, even though I cloned the repo using Git LFS:


ValueError Traceback (most recent call last)
in ()
11 # Import costume functions, corresponding to notebooks
12 from ocr.normalization import imageNorm, letterNorm
---> 13 from ocr import page, words, charSeg
14 from ocr.helpers import implt, resize
15 from ocr.tfhelpers import Graph

~\handwriting-ocr\ocr\charSeg.py in ()
11 print("Loading Segmantation model:")
12 segCNNGraph = Graph('models/gap-clas/CNN-CG')
---> 13 segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
14
15 def classify(img, step=2, RNN=False):

~\handwriting-ocr\ocr\tfhelpers.py in init(self, loc, operation, input_name)
20 self.sess = tf.Session(graph=self.graph)
21 with self.graph.as_default():
---> 22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
23 saver.restore(self.sess, loc)
24 self.op = self.graph.get_operation_by_name(operation).outputs[0]

c:\users\ali.vahid\appdata\local\continuum\anaconda3\envs\hwr\lib\site-packages\tensorflow\python\training\saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs)
1684 clear_devices=clear_devices,
1685 import_scope=import_scope,
-> 1686 **kwargs)
1687 if meta_graph_def.HasField("saver_def"):
1688 return Saver(saver_def=meta_graph_def.saver_def, name=import_scope)

c:\users\ali.vahid\appdata\local\continuum\anaconda3\envs\hwr\lib\site-packages\tensorflow\python\framework\meta_graph.py in import_scoped_meta_graph(meta_graph_or_file, clear_devices, graph, import_scope, input_map, unbound_inputs_col_name, restore_collections_predicate)
502 importer.import_graph_def(
503 input_graph_def, name=(import_scope or ""), input_map=input_map,
--> 504 producer_op_list=producer_op_list)
505
506 scope_to_prepend_to_names = "/".join(

c:\users\ali.vahid\appdata\local\continuum\anaconda3\envs\hwr\lib\site-packages\tensorflow\python\framework\importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict, producer_op_list)
281 # Set any default attr values that aren't present.
282 if node.op not in op_dict:
--> 283 raise ValueError('No op named %s in defined operations.' % node.op)
284 op_def = op_dict[node.op]
285 for attr_def in op_def.attr:

ValueError: No op named StackV2 in defined operations.
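The StackV2 op only exists in newer TensorFlow releases, so a likely cause is that the checkpoint was exported with a newer TF than the one installed here. A minimal, dependency-free sketch of the kind of version guard you could add before loading; the required minimum of 1.5.0 is an assumption, not a documented requirement:

```python
def version_tuple(v):
    """Convert a dotted version string like '1.4.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split('.')[:3])

def check_tf_version(installed, required='1.5.0'):
    """Return True if the installed TensorFlow looks new enough for the
    checkpoint. The required version is an assumption: StackV2 only appears
    in newer TF releases, so graphs exported with them need at least that."""
    return version_tuple(installed) >= version_tuple(required)

# The environment from the traceback above:
print(check_tf_version('1.4.0'))  # → False: an older TF cannot import the graph
```

If the check fails, upgrading TensorFlow (or re-exporting the models with the older version) should resolve the import error.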

I couldn't quite understand how to use this

Hey, I'm building a platform for classifying exam papers using Django, and I need this project to detect the names of the students. Unfortunately, I couldn't quite work out how to use it, since I don't have any experience building ML models.

I would appreciate it if you could explain the steps needed to send an image through your algorithm and get a response. I really need this for a school project.

Question about a function [question]

Hello,

Regarding the resizing of scanned documents, for this line:

imageEdges = edgesDet(image, 200, 250, height)

Why did you choose 200 and 250?

And here, why 5, 5, 5, 5, inside this call:

img = cv2.copyMakeBorder(img, 5, 5, 5, 5, cv2.BORDER_CONSTANT, value=[0, 0, 0])

Thanks,
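For context (not speaking for the author): the 200 and 250 are presumably the lower and upper thresholds passed to Canny edge detection inside edgesDet, tuned empirically for scanned pages, and the four 5s are per-side border widths in pixels; a small constant black border keeps contours that touch the image edge detectable. The padding can be sketched with NumPy alone, where np.pad stands in for cv2.copyMakeBorder with BORDER_CONSTANT on a grayscale image:

```python
import numpy as np

def add_border(img, size=5, value=0):
    """Pad `size` pixels of constant `value` on every side of a grayscale
    image, like cv2.copyMakeBorder(img, 5, 5, 5, 5, cv2.BORDER_CONSTANT,
    value=[0, 0, 0])."""
    return np.pad(img, pad_width=size, mode='constant', constant_values=value)

img = np.ones((10, 10), dtype=np.uint8)
padded = add_border(img)
print(padded.shape)  # → (20, 20)
```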

Bug while importing some files

Hello Breta,

Please help. My TensorFlow version is 1.4.0, and the file Bi-RNN.data-00000-of-00001 is present. Still, I get an exception in the first cell of the code. The exception was as follows:

Loading Segmantation model:
INFO:tensorflow:Restoring parameters from models/gap-clas/CNN-CG
INFO:tensorflow:Restoring parameters from models/gap-clas/RNN/Bi-RNN-new

OutOfRangeError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1322 try:
-> 1323 return fn(*args)
1324 except errors.OpError as e:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1301 feed_dict, fetch_list, target_list,
-> 1302 status, run_metadata)
1303

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in exit(self, type_arg, value_arg, traceback_arg)
472 compat.as_text(c_api.TF_Message(self.status.status)),
--> 473 c_api.TF_GetCode(self.status.status))
474 # Delete the underlying status object from memory otherwise it stays alive

OutOfRangeError: Read fewer bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

During handling of the above exception, another exception occurred:

OutOfRangeError Traceback (most recent call last)
in ()
12 # Import costume functions, corresponding to notebooks
13 from ocr.normalization import imageNorm, letterNorm
---> 14 from ocr import page, words, charSeg
15 from ocr.helpers import implt, resize
16 from ocr.tfhelpers import Graph

~\handwriting-ocr-master\ocr\charSeg.py in ()
11 print("Loading Segmantation model:")
12 segCNNGraph = Graph('models/gap-clas/CNN-CG')
---> 13 segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
14
15 def classify(img, step=2, RNN=False):

~\handwriting-ocr-master\ocr\tfhelpers.py in init(self, loc, operation, input_name)
21 with self.graph.as_default():
22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
---> 23 saver.restore(self.sess, loc)
24 self.op = self.graph.get_operation_by_name(operation).outputs[0]
25

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\training\saver.py in restore(self, sess, save_path)
1664 if context.in_graph_mode():
1665 sess.run(self.saver_def.restore_op_name,
-> 1666 {self.saver_def.filename_tensor_name: save_path})
1667 else:
1668 self._build_eager(save_path, build_save=False, build_restore=True)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
887 try:
888 result = self._run(None, fetches, feed_dict, options_ptr,
--> 889 run_metadata_ptr)
890 if run_metadata:
891 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1118 if final_fetches or final_targets or (handle and feed_dict_tensor):
1119 results = self._do_run(handle, final_targets, final_fetches,
-> 1120 feed_dict_tensor, options, run_metadata)
1121 else:
1122 results = []

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1315 if handle is None:
1316 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1317 options, run_metadata)
1318 else:
1319 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1334 except KeyError:
1335 pass
-> 1336 raise type(e)(node_def, op, message)
1337
1338 def _extend_graph(self):

OutOfRangeError: Read fewer bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

Caused by op 'save/RestoreV2_67', defined at:
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes
if self.run_code(code, result):
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 14, in
from ocr import page, words, charSeg
File "", line 1017, in _handle_fromlist
File "", line 219, in _call_with_frames_removed
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 665, in _load_unlocked
File "", line 678, in exec_module
File "", line 219, in _call_with_frames_removed
File "C:\Users\yaser.sakkaf\handwriting-ocr-master\ocr\charSeg.py", line 13, in
segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
File "C:\Users\yaser.sakkaf\handwriting-ocr-master\ocr\tfhelpers.py", line 22, in init
saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1810, in import_meta_graph
**kwargs)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 660, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\importer.py", line 313, in import_graph_def
op_def=op_def)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "C:\Users\yaser.sakkaf\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

OutOfRangeError (see above for traceback): Read fewer bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]
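"Read fewer bytes than requested" usually means the checkpoint file on disk is smaller than the saver expects, which is exactly what happens when Git LFS files were downloaded as small text pointer stubs instead of the real binaries. A hedged sketch of detecting that situation; the size cutoff is an assumption, but the header line is fixed by the LFS pointer-file format:

```python
import os

LFS_SIGNATURE = b'version https://git-lfs.github.com/spec/'

def is_lfs_pointer(path):
    """Return True if `path` looks like a Git LFS pointer stub rather than
    the real binary: pointer files are tiny and start with a fixed header."""
    if os.path.getsize(path) > 1024:  # real checkpoints are far bigger
        return False
    with open(path, 'rb') as f:
        return f.read(len(LFS_SIGNATURE)) == LFS_SIGNATURE
```

If this returns True for the Bi-RNN-new data file, running `git lfs pull` inside the clone should fetch the real model files.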

Error while training the CTC model with TensorFlow GPU, v1.4 or v1.8 (the same error may happen for other models)

I see a very strange error when I try to train the CTC model with the GPU version of TensorFlow; the CPU version does not have this problem. The error is raised from the line train_step.run(fd):
try:
    for i_batch in range(TRAIN_STEPS):
        fd = train_iterator.next_feed(BATCH_SIZE)
        train_step.run(fd)  # <--- error raised here

The error is:
NotFoundError: Resource __per_step_4/_tensor_arraysmap/TensorArray_1_85/N10tensorflow11TensorArrayE does not exist.

I could not find any helpful material to figure out what the problem is.

The full log is attached.

NotFoundError Traceback (most recent call last)
/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1321 try:
-> 1322 return fn(*args)
1323 except errors.OpError as e:

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1306 return self._call_tf_sessionrun(
-> 1307 options, feed_dict, fetch_list, target_list, run_metadata)
1308

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1408 self._session, options, feed_dict, fetch_list, target_list,
-> 1409 run_metadata)
1410 else:

NotFoundError: Resource __per_step_4/_tensor_arraysmap/TensorArray_1_85/N10tensorflow11TensorArrayE does not exist.
[[Node: gradients/map/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3 = TensorArrayGradV3[_class=["loc:@map/TensorArray_1"], source="gradients", _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/TensorArray_1/_153, map/while/Exit_1/_155)]]
[[Node: gradients/map/while/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/tuple/control_dependency/_249 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_2317_...dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

During handling of the above exception, another exception occurred:

NotFoundError Traceback (most recent call last)
in ()
6 for i_batch in range(TRAIN_STEPS):
7 fd = train_iterator.next_feed(BATCH_SIZE)
----> 8 train_step.run(fd)
9 if i_batch % LOSS_ITER == 0:
10 # Plotting loss

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in run(self, feed_dict, session)
2375 none, the default session will be used.
2376 """
-> 2377 _run_using_default_session(self, feed_dict, self.graph, session)
2378
2379 _gradient_registry = registry.Registry("gradient")

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in _run_using_default_session(operation, feed_dict, graph, session)
5213 "the operation's graph is different from the session's "
5214 "graph.")
-> 5215 session.run(operation, feed_dict)
5216
5217

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
898 try:
899 result = self._run(None, fetches, feed_dict, options_ptr,
--> 900 run_metadata_ptr)
901 if run_metadata:
902 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1133 if final_fetches or final_targets or (handle and feed_dict_tensor):
1134 results = self._do_run(handle, final_targets, final_fetches,
-> 1135 feed_dict_tensor, options, run_metadata)
1136 else:
1137 results = []

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1314 if handle is None:
1315 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1316 run_metadata)
1317 else:
1318 return self._do_call(_prun_fn, handle, feeds, fetches)

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1333 except KeyError:
1334 pass
-> 1335 raise type(e)(node_def, op, message)
1336
1337 def _extend_graph(self):

NotFoundError: Resource __per_step_4/_tensor_arraysmap/TensorArray_1_85/N10tensorflow11TensorArrayE does not exist.
[[Node: gradients/map/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3 = TensorArrayGradV3[_class=["loc:@map/TensorArray_1"], source="gradients", _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/TensorArray_1/_153, map/while/Exit_1/_155)]]
[[Node: gradients/map/while/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/tuple/control_dependency/_249 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_2317_...dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

Caused by op 'gradients/map/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3', defined at:
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 486, in start
self.io_loop.start()
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tornado/platform/asyncio.py", line 127, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 345, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1312, in _run_once
handle._run()
File "/usr/lib/python3.5/asyncio/events.py", line 125, in _run
self._callback(*self._args)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tornado/ioloop.py", line 759, in _run_callback
ret = callback()
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tornado/stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 536, in
self.io_loop.add_callback(lambda : self._handle_events(self.socket, 0))
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tornado/stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2662, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2785, in _run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2903, in run_ast_nodes
if self.run_code(code, result):
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 8, in
train_step = optimizer.minimize(loss, name='train_step')
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 414, in minimize
grad_loss=grad_loss)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 526, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 494, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 636, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 385, in _MaybeCompile
return grad_fn() # Exit early
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 636, in
lambda: grad_fn(op, *out_grads))
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_grad.py", line 161, in _TensorArrayGatherGrad
.grad(source=grad_source, flow=flow))
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 849, in grad
return self._implementation.grad(source, flow=flow, name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 241, in grad
handle=self._handle, source=source, flow_in=flow, name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 6229, in tensor_array_grad_v3
name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

...which was originally created as op 'map/TensorArrayStack/TensorArrayGatherV3', defined at:
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"main", mod_spec)
[elided 23 identical lines from previous traceback]
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 9, in
dtype=tf.float32)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/functional_ops.py", line 424, in map_fn
results_flat = [r.stack() for r in r_a]
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/functional_ops.py", line 424, in
results_flat = [r.stack() for r in r_a]
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 893, in stack
return self._implementation.stack(name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 291, in stack
return self.gather(math_ops.range(0, self.size()), name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 305, in gather
element_shape=element_shape)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 6018, in tensor_array_gather_v3
flow_in=flow_in, dtype=dtype, element_shape=element_shape, name=name)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1718, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

NotFoundError (see above for traceback): Resource __per_step_4/_tensor_arraysmap/TensorArray_1_85/N10tensorflow11TensorArrayE does not exist.
[[Node: gradients/map/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3 = TensorArrayGradV3[_class=["loc:@map/TensorArray_1"], source="gradients", _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/TensorArray_1/_153, map/while/Exit_1/_155)]]
[[Node: gradients/map/while/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/tuple/control_dependency/_249 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_2317_...dependency", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

models/gap-clas/RNN/Bi-RNN-new.meta does not exist

I'm trying to get the OCR notebook to run, but I get an error in the first cell:

OSError: File models/gap-clas/RNN/Bi-RNN-new.meta does not exist.
There are indeed no files with a -new suffix in that directory. Maybe you forgot to check something in?
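Before loading anything, it can help to list which expected checkpoint files actually exist on disk. A small stdlib-only sketch; the model paths are the ones charSeg.py loads, while the suffix list is an assumption about what the TensorFlow saver wrote:

```python
from pathlib import Path

MODELS = ['models/gap-clas/CNN-CG', 'models/gap-clas/RNN/Bi-RNN-new']
SUFFIXES = ['.meta', '.index', '.data-00000-of-00001']

def missing_files(models=MODELS, suffixes=SUFFIXES):
    """Return every expected checkpoint file that does not exist on disk."""
    return [m + s for m in models for s in suffixes
            if not Path(m + s).exists()]

# Run from the repo root; any listed file must be restored (e.g. git lfs pull).
print(missing_files())
```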

Accuracy of the complete OCR

Does the accuracy of the complete OCR depend on the accuracy of all the models (gap classifier, word classifier, char classifier)?

Gap Classifier - BiRNN Training issue

Hello there.
I was not able to train the char classifier, so I tried the BiRNN gap classifier, and I'm getting a similar issue. I really can't work out where the error is happening. Would you kindly help me out?


AttributeError Traceback (most recent call last)
in ()
6 slider_size,
7 slider_step,
----> 8 train=True)
9 test_iterator = BucketDataIterator(testImages,
10 testGaplines,

in init(self, images, gaplines, gap_span, num_buckets, slider, slider_step, imgprocess, train)
12
13 self.train = train
---> 14 length = [(image.shape[1]-slider[1])//slider_step for image in images]
15
16 # Creating indices from gaplines

in (.0)
12
13 self.train = train
---> 14 length = [(image.shape[1]-slider[1])//slider_step for image in images]
15
16 # Creating indices from gaplines

AttributeError: 'NoneType' object has no attribute 'shape'
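An AttributeError of 'NoneType' object has no attribute 'shape' almost always means some images failed to load (cv2.imread returns None on a bad path or unsupported format) and a None slipped into the list. A hedged sketch of guarding the list before the length computation; NumPy arrays stand in for loaded images, and the slider values 2 and 2 are placeholders mirroring the failing line:

```python
import numpy as np

def valid_images(images):
    """Drop entries that failed to load; cv2.imread returns None on failure."""
    bad = [i for i, img in enumerate(images) if img is None]
    if bad:
        print('Skipping %d unreadable images at indices %s' % (len(bad), bad))
    return [img for img in images if img is not None]

images = [np.zeros((60, 100)), None, np.zeros((60, 80))]
images = valid_images(images)
# Mirrors: length = [(image.shape[1]-slider[1])//slider_step for image in images]
length = [(img.shape[1] - 2) // 2 for img in images]
print(length)  # → [49, 39]
```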

NotFoundError:Failed to find any matching files for models/char-clas/en/CharClassifier

Even though CharClassifier is present at the expected location, it gives the following error:
NotFoundError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to find any matching files for models/char-clas/en/CharClassifier
[[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
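The checkpoint path models/char-clas/en/CharClassifier is relative, so TensorFlow resolves it against the current working directory; launching the notebook from anywhere but the repo root produces exactly this "Failed to find any matching files" error even though the files exist. A sketch of anchoring the path to an explicit root instead; the helper name and the example root are hypothetical:

```python
import os

def model_path(relative, root=os.getcwd()):
    """Resolve a relative checkpoint path against an explicit root (default:
    the current working directory) so loading works no matter where the
    notebook was launched from; pass the repo root explicitly when needed."""
    return os.path.join(root, relative)

# e.g. Graph(model_path('models/char-clas/en/CharClassifier',
#                       root='/home/user/handwriting-ocr'))
print(model_path('models/char-clas/en/CharClassifier',
                 root='/home/user/handwriting-ocr'))
```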

CTC model error: ValueError: Cannot feed value of shape (53, 1, 120) for Tensor 'inputs:0', which has shape '(?, ?, 3600)'

WordCycler(images,
           labels,
           wordClass3,
           stats='CTC',
           slider=(60, 2),
           ctc=True)

STATS: CTC


ValueError Traceback (most recent call last)
in ()
23 stats='CTC',
24 slider=(60, 2),
---> 25 ctc=True)
26
27

in init(self, images, labels, charClass, stats, slider, ctc, seq2seq, charRNN)
20 self.stats = stats
21
---> 22 self.evaluate()
23
24 @AbstractMethod

in evaluate(self)
40 print()
41 print("STATS:", self.stats)
---> 42 print(self.labels[1], ':', self.recogniseWord(self.images[1]))
43 start_time = time.time()
44 correctLetters = 0

in recogniseWord(self, img)
22 pred = self.charClass.eval_feed({'inputs:0': input_seq,
23 'inputs_length:0': [length],
---> 24 'keep_prob:0': 1})[0]
25
26 word = ''

/mnt/tensor2/handwriting-ocr/ocr/tfhelpers.py in eval_feed(self, feed)
30 def eval_feed(self, feed):
31 """ Run the specified operation with given feed """
---> 32 return self.sess.run(self.op, feed_dict=feed)
33
34 def run_op(self, op, feed, output=True):

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
898 try:
899 result = self._run(None, fetches, feed_dict, options_ptr,
--> 900 run_metadata_ptr)
901 if run_metadata:
902 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/mnt/tensor2/python3/HR/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1109 'which has shape %r' %
1110 (np_val.shape, subfeed_t.name,
-> 1111 str(subfeed_t.get_shape())))
1112 if not self.graph.is_feedable(subfeed_t):
1113 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (53, 1, 120) for Tensor 'inputs:0', which has shape '(?, ?, 3600)'
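The feed shape (53, 1, 120) against a placeholder of (?, ?, 3600) suggests each time step is built from a 60 × 2 slider window (60 · 2 = 120 features) while the loaded CTC graph expects 60 × 60 = 3600 features per step, i.e. the slider width used at inference does not match the one used at training. A small NumPy sketch of the relationship; the 60-pixel image height comes from the slider=(60, 2) call above, the slider widths are assumptions read off the numbers:

```python
import numpy as np

HEIGHT = 60

def make_input_seq(img, slider_width, step):
    """Cut a (HEIGHT, W) word image into flattened slider windows, shaped
    (time_steps, 1, HEIGHT * slider_width) as a CTC graph would consume."""
    length = (img.shape[1] - slider_width) // step + 1
    seq = np.array([img[:, i * step: i * step + slider_width].flatten()
                    for i in range(length)])
    return seq.reshape(length, 1, HEIGHT * slider_width)

img = np.zeros((HEIGHT, 120))
print(make_input_seq(img, slider_width=2, step=2).shape)   # last dim 120, as in the error
print(make_input_seq(img, slider_width=60, step=2).shape)  # last dim 3600, what the graph expects
```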

GapClassificationDM giving this error

Hi,

I'm running into a similar error here as well:

ValueError Traceback (most recent call last)
in ()
3 LAST_INDEX = 0
4 # Class cycling through text positions
----> 5 cycler = Cycler(crop, bBoxes, LAST_INDEX)

in init(self, image, boxes, idx)
33 VBox([bNex, bNexi])])
34
---> 35 self.nextImgButton()
36
37 def saveLetter(self, b):

in nextImgButton(self, b)
109 clear_output()
110 display(self.buttons)
--> 111 self.nextImg()
112
113 def nextImg(self):

in nextImg(self)
128 self.actual = imageNorm(img, self.height, borderSize=int(self.width/2))
129 implt(self.actual, 'gray', 'Preprocess')
--> 130 self.segment()
131
132 # Printing index for recovery

in segment(self)
58 # img = img[:, self.width//2-15: self.width//2+15]
59
---> 60 if charSeg.classify(img)[0] == 1:
61 cv2.line(ii,
62 ((int)(pos+self.width/2),0),

~\OneDrive\handwriting-ocr-master\ocr\charSeg.py in classify(img, step, RNN)
29 input_seq[:] = [img[:, loc * step: loc * step + slider[1]].flatten()
30 for loc in range(length)]
---> 31 pred = segCNNGraph.run(input_seq)
32
33 return pred

~\OneDrive\handwriting-ocr-master\ocr\tfhelpers.py in run(self, data)
27 def run(self, data):
28 """ Run the specified operation on given data """
---> 29 return self.sess.run(self.op, feed_dict={self.input: data})
30
31 def eval_feed(self, feed):

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1102 'Cannot feed value of shape %r for Tensor %r, '
1103 'which has shape %r'
-> 1104 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
1105 if not self.graph.is_feedable(subfeed_t):
1106 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (31, 3600) for Tensor 'x:0', which has shape '(?, 1800)'

ASKING ABOUT STEP FOR MAKING HANDWRITING RECOGNITION

Good evening, I'm Eko
from Indonesia. I'm interested in your code for handwriting recognition and in building something similar myself. If you don't mind, could you tell me the steps to create it, from building the training and validation data all the way to recognizing my own handwriting? I need the full workflow to complete my final project.
Thank you.
Regards,
Eko

How to create a training set for CTC Classifier

The author has mentioned that, in order to train the model on a new dataset, we have to create one.

The author's original dataset in data/word2 contains image files adjusted to a height of 60 px, but what are the other text files associated with each image?

I can create a grayscale image of my handwriting and adjust its height to 60 px to build my own dataset, but how can I create the same text files as shown in the original dataset?
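I don't know the exact format the author used, but a common scheme is a plain-text labels file with one `image_filename<TAB>transcription` line per word image. The helper names and file format below are assumptions for illustration, not the repo's:

```python
def write_labels(pairs, path):
    """Write one 'image_filename<TAB>transcription' line per word image."""
    with open(path, "w", encoding="utf-8") as f:
        for name, text in pairs:
            f.write(f"{name}\t{text}\n")

def read_labels(path):
    """Read the labels file back into a dict {image_filename: transcription}."""
    labels = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            name, text = line.rstrip("\n").split("\t", 1)
            labels[name] = text
    return labels

# Pair each normalized word image with its ground-truth transcription.
pairs = [("word_0001.png", "hello"), ("word_0002.png", "world")]
write_labels(pairs, "labels.txt")
```

Whatever format the repo actually expects, the key point is that each image file needs a machine-readable transcription next to it; check one of the existing text files to match its layout exactly.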

Issue while training WordClassifier-CTC using IAM Dataset


InvalidArgumentError Traceback (most recent call last)
/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1360 try:
-> 1361 return fn(*args)
1362 except errors.OpError as e:

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1339 return tf_session.TF_Run(session, options, feed_dict, fetch_list,
-> 1340 target_list, status, run_metadata)
1341

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in exit(self, type_arg, value_arg, traceback_arg)
515 compat.as_text(c_api.TF_Message(self.status.status)),
--> 516 c_api.TF_GetCode(self.status.status))
517 # Delete the underlying status object from memory otherwise it stays alive

InvalidArgumentError: Cannot parse tensor from proto: dtype: DT_UINT8
tensor_shape {
dim {
size: 1
}
}
tensor_content: "\000\000\000\000"

 [[Node: DMT/_239 = Const[dtype=DT_UINT8, value=<Invalid TensorProto: dtype: DT_UINT8 tensor_shape { dim { size: 1 } } tensor_content: "\000\000\000\000">, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

InvalidArgumentError Traceback (most recent call last)
in ()
8 for i_batch in range(TRAIN_STEPS):
9 fd = train_iterator.next_feed(BATCH_SIZE)
---> 10 train_step.run(fd)
11
12 if i_batch % LOSS_ITER == 0:

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in run(self, feed_dict, session)
2283 none, the default session will be used.
2284 """
-> 2285 _run_using_default_session(self, feed_dict, self.graph, session)
2286
2287 _gradient_registry = registry.Registry("gradient")

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _run_using_default_session(operation, feed_dict, graph, session)
4934 "the operation's graph is different from the session's "
4935 "graph.")
-> 4936 session.run(operation, feed_dict)
4937
4938

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
903 try:
904 result = self._run(None, fetches, feed_dict, options_ptr,
--> 905 run_metadata_ptr)
906 if run_metadata:
907 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1135 if final_fetches or final_targets or (handle and feed_dict_tensor):
1136 results = self._do_run(handle, final_targets, final_fetches,
-> 1137 feed_dict_tensor, options, run_metadata)
1138 else:
1139 results = []

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1353 if handle is None:
1354 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1355 options, run_metadata)
1356 else:
1357 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1372 except KeyError:
1373 pass
-> 1374 raise type(e)(node_def, op, message)
1375
1376 def _extend_graph(self):

InvalidArgumentError: Cannot parse tensor from proto: dtype: DT_UINT8
tensor_shape {
dim {
size: 1
}
}
tensor_content: "\000\000\000\000"

 [[Node: DMT/_239 = Const[dtype=DT_UINT8, value=<Invalid TensorProto: dtype: DT_UINT8 tensor_shape { dim { size: 1 } } tensor_content: "\000\000\000\000">, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

I don't quite understand why I am getting this error.

How to create label file in charclas folder

Hi,

I saw some open questions, but I still do not understand how the en-labels file maps to the character images.
Any guideline would be a great help, as I am about to add my own letters for training.

Thanks
Sagar

Error Continued...........

Thanks Breta,

Now the output for your images works perfectly.

However I am getting some issue with the following image.
poss

I tried with and without page.detection(image), but it did not work.
The output looks like this:

image

I think there is a problem with the bounding boxes; the coordinates returned are not correct.
You can try it yourself, or I can send you the image if you give me your email.

wordClass3 model (CTC) in OCR Evaluator is not working.

Hi Breta, while running the OCR Evaluator, all the models are working except for the wordClass3 model.
Note:
as mentioned by you in a previous issue, I've changed the wordClass2 model to SeqRNN/Classifier.


ValueError Traceback (most recent call last)
in ()
19 stats='CTC',
20 slider=(60, 2),
---> 21 ctc=True)
22
23 CharCycler(images,

in init(self, images, labels, charClass, stats, slider, ctc, seq2seq, charRNN)
20 self.stats = stats
21
---> 22 self.evaluate()
23
24 @AbstractMethod

in evaluate(self)
40 print()
41 print("STATS:", self.stats)
---> 42 print(self.labels[1], ':', self.recogniseWord(self.images[1]))
43 start_time = time.time()
44 correctLetters = 0

in recogniseWord(self, img)
22 pred = self.charClass.eval_feed({'inputs:0': input_seq,
23 'inputs_length:0': [length],
---> 24 'keep_prob:0': 1})[0]
25
26 word = ''

~\Desktop\Ernst & Young\ICR\MIT\handwriting-ocr-master\ocr\tfhelpers.py in eval_feed(self, feed)
30 def eval_feed(self, feed):
31 """ Run the specified operation with given feed """
---> 32 return self.sess.run(self.op, feed_dict=feed)
33
34 def run_op(self, op, feed, output=True):

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
903 try:
904 result = self._run(None, fetches, feed_dict, options_ptr,
--> 905 run_metadata_ptr)
906 if run_metadata:
907 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1114 'which has shape %r' %
1115 (np_val.shape, subfeed_t.name,
-> 1116 str(subfeed_t.get_shape())))
1117 if not self.graph.is_feedable(subfeed_t):
1118 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (135, 1, 120) for Tensor 'inputs:0', which has shape '(?, ?, 3600)'

Problem with fourCornersSort() function [help wanted]

Hello,

I am trying to use your code that crops the image for scanned photos.

But I am getting the following error :

Traceback (most recent call last):
  File "modules/border_check/processor.py", line 145, in <module>
    pageContour = findPageContours(closedEdges, resize(image, height))
  File "modules/border_check/processor.py", line 102, in findPageContours
    pageContour = fourCornersSort(pageContour[:, 0])
  File "modules/border_check/processor.py", line 55, in fourCornersSort
    diff = np.diff(pts, axis=1)
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1935, in diff
    axis = normalize_axis_index(axis, nd)
numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1

Here is what is in line 102 :

pageContour = fourCornersSort(pageContour[:, 0])

This line is in the findPageContours() function.

What I get for the pageContour variable is :

[[  0   0]
 [  0 542]
 [424 542]
 [424   0]]

And for pageContour[:, 0] :

[  0   0 424 424]

Do you have any idea what the problem is?

Thank you and have a great day.
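For reference, the failure happens because this `pageContour` has shape (4, 2) rather than OpenCV's usual (N, 1, 2), so `pageContour[:, 0]` returns a 1-D array of x-coordinates and `np.diff(pts, axis=1)` has no second axis to operate on. A hedged sketch of a shape-tolerant variant (pass the contour itself and let the function normalize its shape; this is my rewrite, not the repo's code):

```python
import numpy as np

def four_corners_sort(pts):
    """Sort 4 corner points: top-left, bottom-left, bottom-right, top-right."""
    # Accept both OpenCV's (4, 1, 2) contour shape and a plain (4, 2) array.
    pts = np.asarray(pts).reshape(-1, 2)
    diff = np.diff(pts, axis=1)              # y - x for each point
    summ = pts.sum(axis=1)                   # x + y for each point
    return np.array([pts[np.argmin(summ)],   # top-left: smallest x + y
                     pts[np.argmax(diff)],   # bottom-left: largest y - x
                     pts[np.argmax(summ)],   # bottom-right: largest x + y
                     pts[np.argmin(diff)]])  # top-right: smallest y - x

# The (4, 2) contour from the issue now sorts without an AxisError.
contour = np.array([[0, 0], [0, 542], [424, 542], [424, 0]])
corners = four_corners_sort(contour)
```

With this version, the caller would pass `pageContour` directly instead of `pageContour[:, 0]`, since the reshape already handles the squeeze.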

How to score with existing pre-trained models

Hello,

This is really impressive work. I'm currently working on a small OCR for handwritten letters, and I'd love to try your implementation with your pre-trained models.
However, could you give me some indication of how to use those pre-trained models to score a word image?

Many thanks !

Error while importing charSeg from OCR

Loading Segmantation model:
INFO:tensorflow:Restoring parameters from models/gap-clas/CNN-CG
INFO:tensorflow:Restoring parameters from models/gap-clas/RNN/Bi-RNN-new

OutOfRangeError Traceback (most recent call last)
/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1326 try:
-> 1327 return fn(*args)
1328 except errors.OpError as e:

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1305 feed_dict, fetch_list, target_list,
-> 1306 status, run_metadata)
1307

/Users/Shilpa/anaconda/lib/python3.6/contextlib.py in exit(self, type, value, traceback)
88 try:
---> 89 next(self.gen)
90 except StopIteration:

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
465 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466 pywrap_tensorflow.TF_GetCode(status))
467 finally:

OutOfRangeError: Read less bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

During handling of the above exception, another exception occurred:

OutOfRangeError Traceback (most recent call last)
in ()
14 from ocr import page
15 from ocr import words
---> 16 from ocr import charSeg
17 #from ocr import
18 #from ocr import page, words, charSeg

/Users/Shilpa/KeyReply/handwriting-ocr-master/ocr/charSeg.py in ()
11 print("Loading Segmantation model:")
12 segCNNGraph = Graph('models/gap-clas/CNN-CG')
---> 13 segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
14
15 def classify(img, step=2, RNN=False):

/Users/Shilpa/KeyReply/handwriting-ocr-master/ocr/tfhelpers.py in init(self, loc, operation, input_name)
21 with self.graph.as_default():
22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
---> 23 saver.restore(self.sess, loc)
24 self.op = self.graph.get_operation_by_name(operation).outputs[0]
25

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/training/saver.py in restore(self, sess, save_path)
1558 logging.info("Restoring parameters from %s", save_path)
1559 sess.run(self.saver_def.restore_op_name,
-> 1560 {self.saver_def.filename_tensor_name: save_path})
1561
1562 @staticmethod

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1122 if final_fetches or final_targets or (handle and feed_dict_tensor):
1123 results = self._do_run(handle, final_targets, final_fetches,
-> 1124 feed_dict_tensor, options, run_metadata)
1125 else:
1126 results = []

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1319 if handle is None:
1320 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321 options, run_metadata)
1322 else:
1323 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1338 except KeyError:
1339 pass
-> 1340 raise type(e)(node_def, op, message)
1341
1342 def _extend_graph(self):

OutOfRangeError: Read less bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

Caused by op 'save/RestoreV2_67', defined at:
File "/Users/Shilpa/anaconda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/Users/Shilpa/anaconda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
if self.run_code(code, result):
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 16, in
from ocr import charSeg
File "", line 1009, in _handle_fromlist
File "", line 205, in _call_with_frames_removed
File "", line 961, in _find_and_load
File "", line 950, in _find_and_load_unlocked
File "", line 655, in _load_unlocked
File "", line 678, in exec_module
File "", line 205, in _call_with_frames_removed
File "/Users/Shilpa/KeyReply/handwriting-ocr-master/ocr/charSeg.py", line 13, in
segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
File "/Users/Shilpa/KeyReply/handwriting-ocr-master/ocr/tfhelpers.py", line 22, in init
saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1698, in import_meta_graph
**kwargs)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 656, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
op_def=op_def)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/Shilpa/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1204, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

OutOfRangeError (see above for traceback): Read less bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

ERROR with IAM dataset

KeyError Traceback (most recent call last)
in ()
11 labels_idx = np.empty(len(labels), dtype=object)
12 for i, label in enumerate(labels):
---> 13 labels_idx[i] = [char2idx(c) for c in label] # char2idx(c, True)-2
14
15 # Split data on train and test dataset

in (.0)
11 labels_idx = np.empty(len(labels), dtype=object)
12 for i, label in enumerate(labels):
---> 13 labels_idx[i] = [char2idx(c) for c in label] # char2idx(c, True)-2
14
15 # Split data on train and test dataset

~\handwriting-ocr-master\ocr\datahelpers.py in char2idx(c, sequence)
30 if sequence:
31 return chars_to_idx[c] + 1
---> 32 return chars_to_idx[c]
33
34 def idx2char(idx, sequence=False):

KeyError: "'"

I think this error occurs because there is no single quote (') in the idx_to_char array.
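That diagnosis fits: `char2idx` raises `KeyError` for any character missing from the model's alphabet, and IAM labels contain apostrophes. One workaround is to drop (or remap) unsupported characters before encoding the labels. A minimal sketch with an assumed alphabet (substitute the repo's actual `idx_to_char` list):

```python
import string

# Assumed supported charset -- replace with the repo's idx_to_char contents.
ALPHABET = string.ascii_letters + string.digits
chars_to_idx = {c: i for i, c in enumerate(ALPHABET)}

def encode_label(label):
    """Map a word label to an index list, skipping unsupported characters."""
    return [chars_to_idx[c] for c in label if c in chars_to_idx]

encode_label("don't")  # the apostrophe is silently dropped
```

Alternatively, filter out whole words containing unsupported characters if silently dropping characters would corrupt the training targets.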

Output

Great code, I appreciate it. I tried it, but the words are printed in a random order. Can you please resolve this?
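Contour detection typically returns word boxes in an arbitrary order, so a common fix is to group boxes into text lines by their top coordinates and sort left to right within each line. A minimal sketch, assuming boxes are (x1, y1, x2, y2) tuples and that `line_tolerance` is a value to tune per image (both assumptions, not the repo's API):

```python
def sort_words(boxes, line_tolerance=15):
    """Sort (x1, y1, x2, y2) word boxes into reading order.

    Boxes whose top edges lie within `line_tolerance` pixels of the first
    box in the current line are treated as one text line; lines run top to
    bottom, words left to right.
    """
    boxes = sorted(boxes, key=lambda b: b[1])  # top to bottom first
    lines, current = [], [boxes[0]]
    for box in boxes[1:]:
        if abs(box[1] - current[0][1]) <= line_tolerance:
            current.append(box)                # same text line
        else:
            lines.append(current)
            current = [box]
    lines.append(current)
    # Left to right within each line, then concatenate the lines.
    return [b for line in lines for b in sorted(line, key=lambda b: b[0])]

boxes = [(200, 12, 260, 40), (10, 10, 80, 40), (15, 60, 70, 90)]
sort_words(boxes)  # [(10, 10, 80, 40), (200, 12, 260, 40), (15, 60, 70, 90)]
```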

unable to import from ocr

When trying the code in OCR.ipynb, the following imports from the ocr package fail:

from ocr.normalization import imageNorm, letterNorm
from ocr import page, words, charSeg
from ocr.helpers import implt, resize
from ocr.tfhelpers import Graph
from ocr.datahelpers import idx2char

It gives the error
OSError: File models/gap-clas/CNN-CG.meta does not exist.
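A missing `models/gap-clas/CNN-CG.meta` usually means the repository was cloned without Git LFS, in which case the checkpoint files are either absent or present only as tiny pointer stubs; running `git lfs install` followed by `git lfs pull` should fetch them. A small diagnostic sketch (my own helper, not part of the repo) that detects an un-downloaded pointer stub:

```python
import os

# Every Git LFS pointer stub starts with this version line.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path):
    """True if `path` exists but is an un-downloaded Git LFS pointer stub."""
    if not os.path.exists(path):
        return False
    with open(path, "rb") as f:
        return f.read(len(LFS_MAGIC)) == LFS_MAGIC

# If this prints True, run `git lfs install && git lfs pull` in the repo.
print(is_lfs_pointer("models/gap-clas/CNN-CG.meta"))
```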

Bug of COLOR_BGR2RGB ?

Hello,

I am trying to use your code that crops the image for scanned photos.

But here is what I get as output (the original is an ordinary image with normal colors):
https://i.imgur.com/8r1WWh6.png

Do you have any idea what the problem is?

Thank you and have a great day.
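This looks like the classic BGR/RGB mix-up: OpenCV loads images in BGR channel order while matplotlib assumes RGB, so red and blue appear swapped unless you convert with `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` before displaying. Reversing the channel axis does the same thing, as this NumPy sketch shows:

```python
import numpy as np

def bgr_to_rgb(img):
    """Reverse the channel axis; same result as cv2.cvtColor(img, COLOR_BGR2RGB)."""
    return img[..., ::-1]

# A pure-blue pixel stored BGR-style becomes correctly ordered RGB.
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)  # blue in BGR order
rgb = bgr_to_rgb(bgr)                            # [[[0, 0, 255]]]
```

The conversion is its own inverse, so applying it twice returns the original array.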

Tensorflow not found?

When I try to click Cell > Run All in OCR.ipynb, I get this error from cell 1:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-2-fd9caaa4c04f> in <module>()
      2 import pandas as pd
      3 import matplotlib.pyplot as plt
----> 4 import tensorflow as tf
      5 import cv2
      6 

ModuleNotFoundError: No module named 'tensorflow'
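This means TensorFlow is not installed in the environment the Jupyter kernel is using; installing it with `pip install tensorflow` into that same environment fixes it. A quick check you can run inside the notebook (a sketch of mine, not from the repo) shows which interpreter the kernel uses and whether it can import the module:

```python
import importlib.util
import sys

def module_available(name):
    """True if `name` can be imported in this interpreter."""
    return importlib.util.find_spec(name) is not None

# The printed path tells you which Python the kernel runs, so you can
# `pip install tensorflow` into exactly that environment.
print(sys.executable)
print(module_available("tensorflow"))
```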

--> 13 segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')


OutOfRangeError Traceback (most recent call last)
in ()
11 # Import costume functions, corresponding to notebooks
12 from ocr.normalization import imageNorm, letterNorm
---> 13 from ocr import page, words, charSeg
14 from ocr.helpers import implt, resize
15 from ocr.tfhelpers import Graph

/home/kafein/Downloads/handwriting-ocr-master/ocr/charSeg.py in ()
11 print("Loading Segmantation model:")
12 segCNNGraph = Graph('models/gap-clas/CNN-CG')
---> 13 segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
14
15 def classify(img, step=2, RNN=False):

/home/kafein/Downloads/handwriting-ocr-master/ocr/tfhelpers.py in init(self, loc, operation, input_name)
21 with self.graph.as_default():
22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
---> 23 saver.restore(self.sess, loc)
24 self.op = self.graph.get_operation_by_name(operation).outputs[0]
25

/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in restore(self, sess, save_path)
1664 if context.in_graph_mode():
1665 sess.run(self.saver_def.restore_op_name,
-> 1666 {self.saver_def.filename_tensor_name: save_path})
1667 else:
1668 self._build_eager(save_path, build_save=False, build_restore=True)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
887 try:
888 result = self._run(None, fetches, feed_dict, options_ptr,
--> 889 run_metadata_ptr)
890 if run_metadata:
891 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
1118 if final_fetches or final_targets or (handle and feed_dict_tensor):
1119 results = self._do_run(handle, final_targets, final_fetches,
-> 1120 feed_dict_tensor, options, run_metadata)
1121 else:
1122 results = []

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1315 if handle is None:
1316 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1317 options, run_metadata)
1318 else:
1319 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
1334 except KeyError:
1335 pass
-> 1336 raise type(e)(node_def, op, message)
1337
1338 def _extend_graph(self):

OutOfRangeError: Read less bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

Caused by op u'save/RestoreV2_67', defined at:
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/ipykernel/main.py", line 3, in
app.launch_new_instance()
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelapp.py", line 474, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 887, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 390, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/zmqshell.py", line 501, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 13, in
from ocr import page, words, charSeg
File "ocr/charSeg.py", line 13, in
segRNNGraph = Graph('models/gap-clas/RNN/Bi-RNN-new', 'prediction')
File "ocr/tfhelpers.py", line 22, in init
saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1810, in import_meta_graph
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/meta_graph.py", line 660, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

OutOfRangeError (see above for traceback): Read less bytes than requested
[[Node: save/RestoreV2_67 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_67/tensor_names, save/RestoreV2_67/shape_and_slices)]]

Softmax function argument error

I was attempting to train the character classifier. I am using TensorFlow version 1.4. Is the axis argument not supported by this TensorFlow version? Your README file says that you use version 1.4.

TypeError Traceback (most recent call last)
in ()
95
96 y_conv_out = tf.identity(tf.matmul(h_flat, W_fc1) + b_fc1, name='y_conv')
---> 97 y_conv_softmax = tf.nn.softmax(y_conv_out, axis=1, name='y_conv_softmax')
98
99

TypeError: softmax() got an unexpected keyword argument 'axis'
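As far as I know, `tf.nn.softmax` only gained the `axis` keyword around TensorFlow 1.5; on 1.4 the same parameter is named `dim`, so `tf.nn.softmax(y_conv_out, dim=1, name='y_conv_softmax')` should work there. Numerically, softmax along axis 1 just normalizes each row of logits, as this NumPy sketch shows:

```python
import numpy as np

def softmax(x, axis=1):
    """Row-wise softmax, the operation tf.nn.softmax(x, dim=1) computes."""
    shifted = x - x.max(axis=axis, keepdims=True)  # stabilize the exponent
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[1.0, 1.0], [0.0, np.log(3.0)]])
probs = softmax(logits, axis=1)  # rows sum to 1: [[0.5, 0.5], [0.25, 0.75]]
```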

how to print the text inside the boxes

I ran the code in word detector.py after installing all the dependencies, and it identifies the letters, but I am unable to print the text inside the bounding boxes.

Please, can anyone tell me how to do that?

No activation operation error

In OCR.ipynb, this is the error we get for any classifier other than charclass.

`INFO:tensorflow:Restoring parameters from models/word-clas/en/CTC/Classifier

KeyError Traceback (most recent call last)
in ()
----> 1 charClass = Graph(MODEL_LOC)

~\handwriting-ocr-master\ocr\tfhelpers.py in init(self, loc, operation, input_name)
22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
23 saver.restore(self.sess, loc)
---> 24 self.op = self.graph.get_operation_by_name(operation).outputs[0]
25
26 def run(self, data):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in get_operation_by_name(self, name)
3161 raise TypeError("Operation names are strings (or similar), not %s." %
3162 type(name).name)
-> 3163 return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
3164
3165 def _get_operation_by_name_unsafe(self, name):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3033
3034 with self._lock:
-> 3035 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3036
3037 def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3093 if name not in self._nodes_by_name:
3094 raise KeyError("The name %s refers to an Operation not in the "
-> 3095 "graph." % repr(name))
3096 return self._nodes_by_name[name]
3097

KeyError: "The name 'activation' refers to an Operation not in the graph."`

Error on testing

OCR.ipynb works only for image 1 in the test folder. It fails for the rest of the images.

error

The above image shows the same

Recommended training specs

What are the recommended training specifications? How long would training take on a CPU with 16 GB of RAM at 3.5 GHz?

Unable to train gapClassifier

I am getting the following error when trying to train the gapClassifier. I get this error with your dataset as well.

Please help.

ValueError Traceback (most recent call last)
in ()
22 tmpCost = cost.eval(feed_dict={x: trainBatch,
23 targets: labelBatch,
---> 24 keep_prob: 1.0})
25 print('tempcost=',tmpCost)
26 trainPlot.updateCost(tmpCost, i // COST_ITER)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in eval(self, feed_dict, session)
646
647 """
--> 648 return _eval_using_default_session(self, feed_dict, self.graph, session)
649
650

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _eval_using_default_session(tensors, feed_dict, graph, session)
4756 "the tensor's graph is different from the session's "
4757 "graph.")
-> 4758 return session.run(tensors, feed_dict)
4759
4760

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1102 'Cannot feed value of shape %r for Tensor %r, '
1103 'which has shape %r'
-> 1104 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
1105 if not self.graph.is_feedable(subfeed_t):
1106 raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (64, 3600) for Tensor 'x:0', which has shape '(?, 7200)'
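The placeholder expects 7200 = 60 × 120 values per window, while the batch being fed is flattened to 3600 = 60 × 60, so the slider width used to build the batches doesn't match the one the model was defined with. A minimal sketch (the `make_windows` helper and its defaults are my own illustration, not the repo's code) of deriving the feature length from the slider dimensions so the mismatch surfaces before the feed:

```python
import numpy as np

def make_windows(img, slider=(60, 120), step=2):
    """Cut a (H, W) image strip into flattened sliding windows."""
    h, w = slider
    assert img.shape[0] == h, "image height must match the slider height"
    length = (img.shape[1] - w) // step + 1
    windows = np.stack([img[:, i * step: i * step + w].flatten()
                        for i in range(length)])
    return windows  # shape (length, h * w)

img = np.zeros((60, 300))
x = make_windows(img, slider=(60, 120), step=2)
# 60 * 120 = 7200 features per window -> matches a '(?, 7200)' placeholder.
assert x.shape[1] == 7200
```

If the training notebook was changed to a 60 × 60 slider, the placeholder (and any reshapes downstream) must be rebuilt with 3600 features, and vice versa.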

No output for your own images either

Hello,

I have started using the charclassifier model.
I am running OCR.ipynb with the following image: "data/textdet/2.jpg".
The output is very weird. It can't detect any character.

I think there is something wrong with the bounding boxes we get after recalculating them to the original scale.

Another error that I am getting is:

**IndexError Traceback (most recent call last)
in ()
1 # Crop image and get bounding boxes
----> 2 crop = page.detection(image)
3 implt(crop)
4 bBoxes = words.detection(crop)

~\handwriting-ocr-master-final\ocr\page.py in detection(image)
17 np.ones((5, 11)))
18 # Countours
---> 19 pageContour = findPageContours(closedEdges, resize(image))
20 # Recalculate to original scale
21 pageContour = pageContour.dot(ratio(image))

~\handwriting-ocr-master-final\ocr\page.py in findPageContours(edges, img)
94
95 # Sort corners and offset them
---> 96 pageContour = fourCornersSort(pageContour[:, 0])
97 return contourOffset(pageContour, (-5, -5))
98

~\handwriting-ocr-master-final\ocr\page.py in fourCornersSort(pts)
47 def fourCornersSort(pts):
48 """ Sort corners: top-left, bot-left, bot-right, top-right"""
---> 49 diff = np.diff(pts, axis=1)
50 summ = pts.sum(axis=1)
51 return np.array([pts[np.argmin(summ)],

~\AppData\Local\Continuum\anaconda3\lib\site-packages\numpy\lib\function_base.py in diff(a, n, axis)
1922 slice1 = [slice(None)]*nd
1923 slice2 = [slice(None)]*nd
-> 1924 slice1[axis] = slice(1, None)
1925 slice2[axis] = slice(None, -1)
1926 slice1 = tuple(slice1)

IndexError: list assignment index out of range**

This for the image: "data/textdet/1.jpg" and 3.jpg and many more.

Please help by checking it for yourself.
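The `IndexError` in `np.diff(pts, axis=1)` means `pts` arrived one-dimensional, which happens when `findPageContours` falls back to a degenerate contour instead of four corner points. A hedged sketch of the sort function with a shape guard (illustrative, not the repo's code) that turns the cryptic failure into an actionable one:

```python
import numpy as np

def four_corners_sort(pts):
    """Sort corners: top-left, bottom-left, bottom-right, top-right.
    Guards against a degenerate contour before np.diff(axis=1) can fail."""
    pts = np.asarray(pts)
    if pts.ndim != 2 or pts.shape[0] < 4 or pts.shape[1] != 2:
        raise ValueError("Expected >= 4 (x, y) points, got shape %r" % (pts.shape,))
    diff = np.diff(pts, axis=1)   # y - x per point
    summ = pts.sum(axis=1)        # x + y per point
    return np.array([pts[np.argmin(summ)],   # top-left: smallest x + y
                     pts[np.argmax(diff)],   # bottom-left: small x, large y
                     pts[np.argmax(summ)],   # bottom-right: largest x + y
                     pts[np.argmin(diff)]])  # top-right: large x, small y
```

If the guard fires for your images, the page-contour detection step is the real culprit and its edge/threshold parameters need tuning.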

Error while executing charclassifier.ipynb

Hello Breta

I appreciate your work. While executing charclassifier with your data folder I'm getting the error below, even though I have specified the correct path to the wordloc directory. Please help with this.

Loading chars...
Loading words...
-> Number of words: 5069


TypeError Traceback (most recent call last)
in ()
2
3 # dimension 64x64 = 4096
----> 4 images, labels = loadCharsData(charloc='',wordloc='data/words2/',lang=LANG)
5 labels = np.reshape(labels, (len(labels), 1))
6

C:\Users\xxx\handwriting-ocr-master\ocr\datahelpers.py in loadCharsData(charloc, wordloc, lang)
153 imgs, words, gaplines = loadWordsData(wordloc)
154 if lang != 'cz':
--> 155 words = np.array([unidecode.unidecode(w) for w in words])
156 imgs, chars = words2chars(imgs, words, gaplines)
157

C:\Users\xxx\handwriting-ocr-master\ocr\datahelpers.py in words2chars(images, labels, gaplines)
108 for i, gaps in enumerate(gaplines):
109 for pos in range(len(gaps) - 1):
--> 110 imgs[idx] = images[i][0:height, gaps[pos]:gaps[pos+1]]
111 newLabels.append(char2idx(labels[i][pos]))
112 idx += 1

TypeError: 'NoneType' object is not subscriptable
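A `'NoneType' object is not subscriptable` at `images[i][...]` usually means `cv2.imread` silently returned `None` for a missing or unreadable image file somewhere in the data loader. A small sketch of a checked loader (a hypothetical helper, not the repo's API; OpenCV is imported lazily and assumed installed) that fails early with the offending path:

```python
import os

def imread_checked(path):
    """Load an image, failing loudly instead of returning None."""
    if not os.path.isfile(path):
        raise FileNotFoundError("Image file not found: %s" % path)
    import cv2  # imported lazily; assumed installed for this repo
    img = cv2.imread(path)
    if img is None:
        raise IOError("File exists but could not be decoded: %s" % path)
    return img
```

Checking the word/gapline file pairing in the dataset folder with a loader like this typically pinpoints which entry is broken.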

How to train gap-class models?

Hi,

Kindly help me with how to reproduce the gap-class models for my dataset; it's urgent and I am stuck at this point. I understand that the pred values are taken from gaps. Perhaps my char-class model is not compatible with the gap-class model, and that is why I am not able to separate characters properly?
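For orientation, a hedged sketch of how a sliding-window gap classifier is typically trained on word images with annotated gaplines (the names below are illustrative, not the repo's API): each fixed-width window slid across the word becomes one training sample, labelled positive when its center sits on an annotated gapline.

```python
import numpy as np

def sliding_windows(img, width=60, step=2):
    """Yield (center_x, flattened_window) pairs across a (H, W) word image."""
    h, w = img.shape
    for x in range(0, w - width + 1, step):
        yield x + width // 2, img[:, x:x + width].reshape(-1)

def label_window(center_x, gaplines, tolerance=2):
    """A window is a positive (gap) sample when its center lies within
    `tolerance` pixels of an annotated gapline."""
    return any(abs(center_x - g) <= tolerance for g in gaplines)
```

At inference time the positive windows give the candidate cut positions between characters, so the gap-class model must be trained on the same window size and normalization as the char-class model's input pipeline expects.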

How to use my own model to recognize the handwritten text?

Hello, I have trained a model for recognizing single handwritten Chinese characters. The last thing I want to do is recognize handwritten text: upload an image of handwriting and then identify the characters in it. Please tell me what I should do. Can I learn from your project?
Thanks a lot!

How to segment character from image?

Hi,
Thanks for your tutorial, it's a great help. I was wondering: if I have a single image with text, how can I detect the word by segmenting each character?
name
which should result in "Sagar".
Any hints would be a great help.

Thanks,
Sagar
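A minimal baseline for this (not the repo's learned gap classifier) is a vertical projection profile: binarize the word image and cut on columns that contain no ink. It works for non-touching handwriting; cursive strokes need the trained gap classifier instead.

```python
import numpy as np

def segment_by_projection(binary):
    """Split a binarized (H, W) word image (text=1, background=0) into
    character slices at columns whose ink count drops to zero."""
    profile = binary.sum(axis=0)
    chars, start = [], None
    for x, ink in enumerate(profile):
        if ink > 0 and start is None:
            start = x                      # a character run begins
        elif ink == 0 and start is not None:
            chars.append((start, x))       # the run ends at an empty column
            start = None
    if start is not None:                  # character touching the right edge
        chars.append((start, binary.shape[1]))
    return [binary[:, a:b] for a, b in chars]
```

Each returned slice can then be normalized and fed to the character classifier one at a time.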

KeyError: "The name 'activation' refers to an Operation not in the graph." on using models/char-clas/en/Bi-RNN/model_1

`INFO:tensorflow:Restoring parameters from models/char-clas/en/Bi-RNN/model_1

KeyError Traceback (most recent call last)
in ()
----> 1 charClass = Graph(MODEL_LOC)

~\OneDrive\handwriting-ocr\ocr\tfhelpers.py in init(self, loc, operation, input_name)
22 saver = tf.train.import_meta_graph(loc + '.meta', clear_devices=True)
23 saver.restore(self.sess, loc)
---> 24 self.op = self.graph.get_operation_by_name(operation).outputs[0]
25
26 def run(self, data):

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in get_operation_by_name(self, name)
3449 raise TypeError("Operation names are strings (or similar), not %s." %
3450 type(name).name)
-> 3451 return self.as_graph_element(name, allow_tensor=False, allow_operation=True)
3452
3453 def _get_operation_by_name_unsafe(self, name):

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3321
3322 with self._lock:
-> 3323 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3324
3325 def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3381 if name not in self._nodes_by_name:
3382 raise KeyError("The name %s refers to an Operation not in the "
-> 3383 "graph." % repr(name))
3384 return self._nodes_by_name[name]
3385

KeyError: "The name 'activation' refers to an Operation not in the graph."

`
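The `KeyError` means the saved Bi-RNN graph simply contains no op named `'activation'`. A hedged debugging sketch (a hypothetical helper, not part of the repo): after `tf.train.import_meta_graph(...)` restores the graph, collect `[op.name for op in graph.get_operations()]` and filter for likely output ops, then pass the real name (e.g. `'prediction'`, as `ocr/charSeg.py` already does for the gap-class RNN) to the `Graph` constructor.

```python
def find_candidate_ops(op_names,
                       keywords=("activation", "prediction", "output", "softmax")):
    """Filter [op.name for op in graph.get_operations()] down to the
    operations most likely to be the model's output head."""
    return [n for n in op_names if any(k in n.lower() for k in keywords)]
```

Once the correct name is found, construct the model with `Graph(MODEL_LOC, '<that name>')` instead of relying on the default `'activation'`.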
