

captcha_break's People

Contributors

s0ap00, ypwhs


captcha_break's Issues

The relationship between base_model and model

Could you explain why `model` is used for training but `base_model` is used for testing? I read your explanation but still don't understand it.

During training in ctc_pytorch, the loss always becomes NaN

Environment

PyTorch version: 1.1.0;
OS: Ubuntu 16.04;

Changes

The code is essentially unchanged; I only changed the input image size to 96*34 and updated the corresponding generator parameter n_input_length = 6 accordingly. No error was raised.

Problem

During training the loss always turns into NaN. Changing batch_size and the learning rate only postpones it by roughly 2 epochs, and the accuracy reaches at most around 70%.

Any pointers would be appreciated.
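
Not the repository author's fix, just a sketch of two common mitigations for NaN CTC losses in PyTorch 1.1 (the model, optimizer and tensor shapes below are assumptions): enable zero_infinity on nn.CTCLoss and clip gradients in the training step.

from torch import nn

# Sketch only: zero_infinity=True (available since PyTorch 1.1) turns infinite
# CTC losses, a common source of NaN, into zeros instead of letting them
# poison the gradients; gradient clipping limits occasional explosions.
criterion = nn.CTCLoss(blank=0, zero_infinity=True)

def train_step(model, optimizer, images, targets, input_lengths, target_lengths):
    optimizer.zero_grad()
    # Assumes the model returns log-probabilities shaped (time, batch, classes),
    # which is what nn.CTCLoss expects.
    log_probs = model(images)
    loss = criterion(log_probs, targets, input_lengths, target_lengths)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    return loss.item()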

Saving the model and reloading it with load_model raises an error

The error is: ValueError: Unknown loss function: <lambda>

Then I changed it to:

model = load_model(os.path.join(KERAS_MODEL_PATH, 'cnn.h5'),
                   custom_objects={'ctc': lambda y_true, y_pred: y_pred})

It still fails. Any help is appreciated, thanks.

If I want to save the trained model and deploy it as a service, how should this model be saved?
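
One common workaround (a sketch, not the repository's documented approach; it assumes base_model is the prediction sub-network without the CTC lambda loss, and the file paths are placeholders) is to save and serve base_model directly, so load_model never has to resolve the custom loss:

from keras.models import load_model

# base_model maps images straight to per-timestep character probabilities and
# has no custom CTC lambda loss attached, so it reloads without custom_objects.
base_model.save('base_model.h5')              # hypothetical path
inference_model = load_model('base_model.h5')

# Alternative: persist only the weights and rebuild the architecture in the
# serving code before calling load_weights:
# base_model.save_weights('base_model_weights.h5')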

Accuracy callback

Hello, when running ctc_2019.ipynb, after the first epoch finishes, execution enters the evaluate() function and never starts the second epoch. The machine is well equipped (8 GB GPU). Is the evaluate() function simply taking too long, and does this depend on whether the GPU-accelerated build is used for training?

about y_pred[:, 2:, :]

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    # drop the first two time steps of the prediction
    y_pred = y_pred[:, 2:, :]
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)

Can anyone explain the meaning of y_pred[:, 2:, :]?
Does this "2" have any particular meaning?
Thanks a lot.

Image segmentation

Doesn't this approach require segmenting the individual characters in the image? (captcha verification codes)

Problem when converting ctc_pytorch to a .py file

Hello! I converted ctc_pytorch into a .py file. It runs without errors on my local machine, but on a remote server it fails with:
liurui@eversec-desktop:~/yzmre$ python ctc.py
File "ctc.py", line 64
modules[f'conv{name}'] = nn.Conv2d(in_channels, out_channels, kernel_size, padding=(1, 1) if kernel_size == 3 else 0)
^
SyntaxError: invalid syntax
Do you know what causes this?
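
For what it's worth, the f-string on that line requires Python 3.6 or newer, so this SyntaxError usually means the remote server's default python is older. A sketch of an equivalent line that also runs on earlier Python 3 versions:

# Same behaviour without an f-string, for interpreters older than Python 3.6.
modules['conv{}'.format(name)] = nn.Conv2d(
    in_channels, out_channels, kernel_size,
    padding=(1, 1) if kernel_size == 3 else 0)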

In ctc_2019.ipynb, training stops after the first epoch

Environment

TensorFlow version: 1.12.0;
OS: Ubuntu 16.04;

Changes

The code is essentially unchanged; I only changed the input image size and updated the corresponding generator parameters accordingly. No error was raised.

Problem

When running

model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=Adam(1e-3, amsgrad=True))
model.fit_generator(train_data, epochs=100, validation_data=valid_data, workers=4, use_multiprocessing=True,
                    callbacks=callbacks)

execution stops at step 999 of the first epoch, and the only output is:

Epoch 1/100
 999/1000 [============================>.] - ETA: 0s - loss: 4.6288WARNING:tensorflow:From /home/shizai/anaconda3/envs/baiqiao/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:4831: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.

Any help would be appreciated.

Python version?

My Python is 3.7 and tensorflow-gpu can no longer be installed. It would help to list the exact versions of the dependencies.

ctc.ipynb: merge([gru_1, gru_1b], mode='sum') cannot be found

WARNING:tensorflow:From /home/qgb/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
/root/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:8: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(32, (3, 3), activation="relu")

/root/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:9: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(32, (3, 3), activation="relu")
if __name__ == '__main__':
/root/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: UserWarning: Update your GRU call to the Keras 2 API: GRU(128, return_sequences=True, name="gru1", kernel_initializer="he_normal")
/root/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:18: UserWarning: Update your GRU call to the Keras 2 API: GRU(128, return_sequences=True, go_backwards=True, name="gru1_b", kernel_initializer="he_normal")

TypeError Traceback (most recent call last)
in
17 gru_1 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru1')(x)
18 gru_1b = GRU(rnn_size, return_sequences=True, go_backwards=True, init='he_normal', name='gru1_b')(x)
---> 19 gru1_merged = merge([gru_1, gru_1b], mode='sum')
20
21 gru_2 = GRU(rnn_size, return_sequences=True, init='he_normal', name='gru2')(gru1_merged)

TypeError: 'module' object is not callable

InvalidArgumentError (see above for traceback): Saw a non-null label (index >= num_classes - 1) following a null label, batch: 75 num_classes: 36 labels: [[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](ctc/Log/_97, ctc/ToInt64/_99, ctc/ToInt32_2/_101, ctc/ToInt32_1/_103)]]

I put the images I need to train on into the samples folder, e.g. AB67HG.jpeg, and then read the data:
import os
import random
import string

import numpy as np
from PIL import Image

rnn_size = 128

characters = string.digits + string.ascii_uppercase

width, height, n_len, n_class = 140, 44, 6, len(characters)
folder = "sample"
imageList = os.listdir(folder)
imageList = [os.path.join(folder, item) for item in imageList if os.path.isfile(os.path.join(folder, item))]
image_size = len(imageList)

image_array = np.zeros((image_size, height, width, 3), dtype=np.uint8)
image_names = []

for i in range(image_size):
    img = Image.open(imageList[i])
    image_array[i] = img
    # file name without extension, e.g. "AB67HG"
    image_names.append(imageList[i][-11:-5].upper())

Generating the data:

def gen(batch_size=128):
    X = np.zeros((batch_size, width, height, 3), dtype=np.uint8)
    y = np.zeros((batch_size, n_len), dtype=np.uint8)
    while True:
        for i in range(batch_size):
            num = random.randint(0, image_size - 1)
            random_str = image_names[num]
            X[i] = np.array(image_array[num]).transpose(1, 0, 2)
            y[i] = [characters.find(x) for x in random_str]
        yield [X, y, np.ones(batch_size) * int(conv_shape[1] - 2), np.ones(batch_size) * n_len], np.ones(batch_size)

It shows this error:
InvalidArgumentError (see above for traceback): Saw a non-null label (index >= num_classes - 1) following a null label, batch: 75 num_classes: 36 labels:
[[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](ctc/Log/_97, ctc/ToInt64/_99, ctc/ToInt32_2/_101, ctc/ToInt32_1/_103)]]
I am using Keras 2.0.3 and tensorflow-gpu 1.2.1. It runs on CPU but fails on GPU.
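
For reference (my reading of the error, not the author's reply): TensorFlow's CTC loss reserves index num_classes - 1 for the blank symbol, so with only 36 output classes the character 'Z' (index 35) collides with the blank, which is what "index >= num_classes - 1" refers to. A sketch of the usual fix is to give the softmax one extra class for the blank while keeping the labels in the range 0..35:

import string

characters = string.digits + string.ascii_uppercase
n_class = len(characters) + 1   # 36 real characters + 1 CTC blank

# The final per-timestep softmax should then have n_class units, e.g.
# x = Dense(n_class, activation='softmax')(x),
# while the labels passed to K.ctc_batch_cost stay in the range 0..35.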

Training error

My environment:
win10 Anaconda3 python3.6 tensorflow1.13.1-gpu cudn10.0

TensorFlow is installed, and import tensorflow as tf confirms that the installation works.

When running ctc_2019.ipynb everything up to the training cell works, but once training starts the console keeps printing errors like this:
Traceback (most recent call last):
File "", line 1, in
File "d:\softwares\anaconda3-64\envs\shy\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "d:\softwares\anaconda3-64\envs\shy\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
AttributeError: Can't get attribute 'CaptchaSequence' on <module '__main__' (built-in)>
(the same traceback is printed four times)
[I 10:42:01.964 NotebookApp] Saving file at /ctc_2019.ipynb
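
For context (a common workaround rather than the author's reply): on Windows, multiprocessing spawns fresh interpreter processes, and they cannot import a CaptchaSequence class that was defined inside the notebook's __main__, which is what this AttributeError indicates. A sketch of the usual fixes:

# Option 1: avoid spawning worker processes from the notebook on Windows.
model.fit_generator(train_data, epochs=100, validation_data=valid_data,
                    workers=1, use_multiprocessing=False,
                    callbacks=callbacks)

# Option 2: move the CaptchaSequence class into an importable module
# (e.g. a separate .py file) and import it in the notebook, so spawned
# worker processes can find the class.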

A question about the four fully connected layers

Hello, do the four fully connected layers in the model correspond to the four characters in the captcha? In other words, is there a correspondence between the number of fully connected layers and the number of characters in the captcha? When I reduce the number of fully connected layers, I get an error saying the number of arrays the model expects does not match the number of arrays it actually receives, but when I also change the n_class parameter accordingly, e.g. n_class = 5 with 5 fully connected layers, there is no error.

How should the parameters be changed for PNG?

Since I need to read PNG images, I changed the parameters to:
width, height, n_len, n_classes = 120, 32, 4, len(characters)

last_channel = 4

model = Model(n_classes, input_shape=(4, height, width))

The error message is:
“RuntimeError: Expected input_lengths to have value at most 7, but got value 12 (while checking arguments for ctc_loss_gpu)”

Please advise, thank you!
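
A possible reading of the error (an assumption, not a confirmed answer): with a 120-pixel-wide input the network only produces 7 output time steps, while the data pipeline still reports an input_length of 12, so input_length has to be recomputed from the model's actual output width. And if last_channel = 4 was only changed because of the PNG alpha channel, converting the images to RGB keeps the original 3-channel model:

from PIL import Image

# Sketch: drop the alpha channel so the original 3-channel model input works.
img = Image.open('captcha.png').convert('RGB')   # hypothetical path

# The input_length passed to the CTC loss must not exceed the model's output
# time dimension (7 according to the error message), e.g.
# n_input_length = 7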

Beginner's questions: 3 points of confusion

Hello, I read your code and have 3 questions:

1) "Since we use a recurrent neural network, we drop the first two outputs by default, because they are usually meaningless and would hurt the model's output." Why is that? And what exactly are "the first two outputs"?

2) "y_pred is the model's output, the probabilities of 37 characters emitted in order; because we use a recurrent neural network here, we need the concept of a blank character." Why does "using a recurrent neural network" mean "we need the concept of a blank character"?

3) For a real speech segment, e.g. 0.2s-1.8s is "oh" and 3.4s-6.4s is "hi", how should the corresponding labels be made?

It would be great if you could leave an email or QQ/WeChat; I'd appreciate a reply.

model variable vs base_model

Hello ypwhs,
In the CTC code you create a model called model:
model = Model(input=[input_tensor, labels, input_length, label_length], output=[loss_out])

That is the one you train, but when you make the prediction you use the model called base_model:
y_pred = base_model.predict(X_test)

Why is that? Is it correct?
Shouldn't you use model.predict? If so, what would be the correct way to call it, since it expects 4 inputs; what should be passed for "labels", "input_length" and "label_length"?

Thank you.
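
For context, a minimal decoding sketch (the decoding details are an assumption, not quoted from the repository): model only exists so Keras can route labels, input_length and label_length into the CTC loss during training, while base_model is the image-to-softmax sub-network, so prediction needs nothing but the images:

import numpy as np
from keras import backend as K

# base_model: images -> per-timestep character probabilities.
y_pred = base_model.predict(X_test)

# Greedy CTC decoding; y_pred is assumed to be shaped (batch, time, classes).
input_length = np.ones(y_pred.shape[0]) * y_pred.shape[1]
decoded, _ = K.ctc_decode(y_pred, input_length, greedy=True)
labels_out = K.get_value(decoded[0])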

After rewriting with the Keras 2 API, the model does not converge

Since I am on Keras 2, I modified the network definition and training code slightly, but the per-character accuracy hovers around 0.028, which is exactly the 1/36 chance level. Could you check whether my code is wrong?
My Python version is 2.7.13 and my Keras version is 2.0.2, both installed via Anaconda.
I have already tried changing the weight initialization (Xavier), the optimizer (SGD, RMSprop, etc.), the learning rate (0.1-10) and the batch_size, none of which helped.
My modified code is below:

from keras.models import *
from keras.layers import *
from keras.optimizers import *

input_tensor = Input(shape = (height, width, 3))
x = input_tensor

for i in range(4):
    x = Conv2D(32 * 2 ** i, (3, 3), activation = 'relu')(x)
    x = Conv2D(32 * 2 ** i, (3, 3), activation = 'relu')(x)
    x = MaxPooling2D((2, 2))(x)

x = Flatten()(x)
x = Dropout(rate = 0.25)(x)
x = [Dense(n_class, activation='softmax', name='c{}'.format(i))(x) for i in range(n_len)]

model = Model(inputs = input_tensor, outputs = x)

model.compile(optimizer = 'adadelta', loss='categorical_crossentropy', metrics=['accuracy']) 
from keras.callbacks import EarlyStopping
#early_stop = EarlyStopping(monitor='val_loss', patience=2)
model.fit_generator(gen(), steps_per_epoch = 1600, epochs = 10, validation_steps = 40, validation_data = gen())

Hello, the CNN model does not converge without BN layers

Hello, when I use your CNN model it does not converge; the loss keeps growing. Adding BatchNormalization fixed it, but I don't quite understand why BN is necessary here.
Also, when generating captchas with ImageCaptcha, the model learns well with a single font, but with 20 different fonts the training set is learned well while the validation loss never drops (30,000 samples). How should this overfitting be handled? Thanks.

Can't run with multiple workers?

You said: add the workers=4 parameter so Keras generates data in multiple processes automatically, avoiding Python's single-threaded inefficiency.
However, after adding it the code no longer runs. Why is that? I suspect it is version-related; could you tell us which Keras version you used?

One more question: shouldn't the images be converted to grayscale? Training on color images doesn't seem ideal.

About renamed Keras modules

Hello!
Running this line under Python 3 raised an error:
from keras.utils.visualize_util import plot
It says the module cannot be found.
After searching, I found that visualize_util has been renamed to vis_utils,
and plot has been renamed to plot_model.
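
A minimal sketch of the renamed import and call under Keras 2 (the output filename is just an example):

from keras.utils.vis_utils import plot_model

# Writes the model graph to an image file (example filename).
plot_model(model, to_file='model.png', show_shapes=True)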

Performs badly with the same code

Hi, I have tried with 5 and 20 epochs several times and all runs failed; the code is the same as your example.
But the trained model is very disappointing:

[image]

I can't understand why. Could you give me a tip on this?

My env:
python 2.7
Keras 1.2.0
tensorflow-gpu 0.12.1

Model loading issues

Hi @ypwhs,

I tried to load the 'model.h5' weights into the CTC model, using Keras 2.1 and Python 3.6.
The network architecture is the same as yours.

After evaluation, the accuracy is 0.005.
Any ideas on it?
Thanks.

Any way to make captcha_break more generic?

I found it can only recognize captchas generated by the Python captcha lib.
It didn't work when I gave it 4-character alphanumeric captchas in a different style.
Since it is hard to mock every different captcha style, I wonder whether there is a more generic way to do this?

What I can imagine is:

  1. Use many different fonts (simple, but it would only help for those specific fonts).
  2. Some preprocessing, like converting RGB to grayscale first, to handle reversed-out images (white characters on a colored background). But this would lose some features too, because some good captchas use one color per character and only have high (maybe not so high) contrast with the surrounding area.
  3. Use some deep learning method: learn characters from an unlabeled captcha dataset, then label the learned features/patterns as A-Z and 0-9, and use that to build the model. I think this way is best, but I have no idea how to start. I have heard deconvolution or some clustering methods can generate such patterns, but I am not very familiar with these techniques.

Could you give me some tips?

A question about building the CNN model

When building the CNN model:

input_tensor = Input((height, width, 3))
x = input_tensor
for i, n_cnn in enumerate([2, 2, 2, 2, 2]):
    for j in range(n_cnn):
        # Why does this inner loop run twice?
        x = Conv2D(32 * 2 ** min(i, 3), kernel_size=3, padding='same', kernel_initializer='he_uniform')(x)
        x = BatchNormalization()(x)
        x = Activation('relu')(x)
    x = MaxPooling2D(2)(x)

x = Flatten()(x)
x = [Dense(n_class, activation='softmax', name='c%d' % (i + 1))(x) for i in range(n_len)]
model = Model(inputs=input_tensor, outputs=x)

A question for the author

Following your PyTorch code, I changed it to read images from disk. I downloaded 350 captchas: 100 for validation, 5 for testing, and the rest for training.
Loss 0.001: 250 epochs
Loss 0.0001: 30 epochs

Train Loss = 0.0021, Acc = 0.9687 (quite satisfactory)
But:
Valid Loss = 0.7251, Acc = 0.4862 (below 50%)

After saving the .pth, I copied a few images from train and valid into Test and tested 13 images in total; 7 were correct. How can I improve this and raise the accuracy further?

Why do some predicted outputs have missing characters?

Hello, my labels have four characters, but after training I found that some predictions contain only 3 characters.
