Comments (5)
2021-09-28 10:39:47,071 - log/train.log - INFO - iteration:26 step:25/163, NER loss: 0.226979
2021-09-28 10:41:23,881 - log/train.log - INFO - iteration:26 step:125/163, NER loss: 0.257177
2021-09-28 10:41:53,836 - log/train.log - INFO - evaluate:dev
2021-09-28 10:42:05,035 - log/train.log - INFO - processed 106932 tokens with 3661 phrases; found: 3673 phrases; correct: 3471.
2021-09-28 10:42:05,035 - log/train.log - INFO - accuracy: 99.27%; precision: 94.50%; recall: 94.81%; FB1: 94.66
2021-09-28 10:42:05,035 - log/train.log - INFO - : precision: 0.00%; recall: 0.00%; FB1: 0.00 3
2021-09-28 10:42:05,036 - log/train.log - INFO - LOC: precision: 94.48%; recall: 96.11%; FB1: 95.29 1830
2021-09-28 10:42:05,036 - log/train.log - INFO - ORG: precision: 91.92%; recall: 89.75%; FB1: 90.82 953
2021-09-28 10:42:05,036 - log/train.log - INFO - PER: precision: 97.63%; recall: 98.19%; FB1: 97.91 887
2021-09-28 10:42:05,038 - log/train.log - INFO - evaluate:test
2021-09-28 10:42:25,667 - log/train.log - INFO - processed 214621 tokens with 7456 phrases; found: 7474 phrases; correct: 7033.
2021-09-28 10:42:25,667 - log/train.log - INFO - accuracy: 99.23%; precision: 94.10%; recall: 94.33%; FB1: 94.21
2021-09-28 10:42:25,668 - log/train.log - INFO - : precision: 0.00%; recall: 0.00%; FB1: 0.00 2
2021-09-28 10:42:25,668 - log/train.log - INFO - LOC: precision: 94.03%; recall: 95.03%; FB1: 94.53 3501
2021-09-28 10:42:25,668 - log/train.log - INFO - ORG: precision: 91.49%; recall: 90.81%; FB1: 91.15 2150
2021-09-28 10:42:25,668 - log/train.log - INFO - PER: precision: 97.42%; recall: 97.47%; FB1: 97.45 1821
2021-09-28 10:42:25,960 - log/train.log - INFO - new best test f1 score:94.210
For a single iteration, is the printed FB1 value the score during training? Why is it identical to the "new best test f1 score"? Is that best score computed on the test set, and how is it defined: is a prediction counted as correct per entity or per label? I'd appreciate an answer. Also, iteration 26 above has two blocks of evaluation metrics; what does each of them represent?
from bertner.
@zsh123123 I'd suggest learning the basics first: the dev set is used to select the model, and the test set is used to measure generalization.
So the earlier evaluation is on the dev set, which selects the best model, and the later evaluation is on the test set, which measures the model's generalization; on the training set only the loss is tracked, and no precision/recall/F1 is computed. One more question: I hadn't seen FB1 before; is it just F1?
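For reference, FB1 is the F-measure with β = 1 (the harmonic mean of precision and recall) as reported by the CoNLL `conlleval` script, whose output format the log lines above match. The counts in the dev evaluation reproduce the logged numbers exactly:

```python
# Entity-level counts copied from the dev evaluation in the log above:
# "processed 106932 tokens with 3661 phrases; found: 3673 phrases; correct: 3471."
gold = 3661     # gold-standard entities in the dev set
found = 3673    # entities the model predicted
correct = 3471  # predicted entities whose boundary and type both match

precision = correct / found
recall = correct / gold
fb1 = 2 * precision * recall / (precision + recall)  # F-measure with beta = 1

print(f"precision: {precision:.2%}; recall: {recall:.2%}; FB1: {fb1 * 100:.2f}")
# → precision: 94.50%; recall: 94.81%; FB1: 94.66  (matches the log line)
```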
@zsh123123 Yes, they are the same.
One more question: is accuracy counted per correctly predicted entity or per label?
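On the per-entity vs per-label question: in conlleval-style scoring (which produces the "processed ... phrases; found: ...; correct: ..." lines above), precision/recall/FB1 are per entity, where an entity counts as correct only if its boundary and its type both match the gold span, while "accuracy" is per token. A minimal sketch, with an illustrative `extract_spans` helper that is not code from this repo:

```python
def extract_spans(tags):
    """Collect (start, end, type) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:            # close the span that was open
                spans.append((start, i, etype))
                start, etype = None, None
            if tag.startswith("B-"):         # a "B-" also opens a new span
                start, etype = i, tag[2:]
        # an "I-" tag simply continues the current span
    return set(spans)

gold = extract_spans(["B-PER", "I-PER", "O", "B-LOC"])
pred = extract_spans(["B-PER", "I-PER", "O", "B-ORG"])
correct = len(gold & pred)
# correct == 1: only the PER span matches, because the last entity has the
# wrong type -- even though token-level accuracy here would still be 3/4.
```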
Related Issues (20)
- Project citation format HOT 1
- Trained model gives inconsistent predictions for the same sentence; roughly 1 run in 5 differs HOT 10
- ModuleNotFoundError: No module named 'tensorflow.contrib' HOT 1
- What is the use of the .dev file in data? HOT 3
- Problem with the ner_predict.utf8 prediction results HOT 2
- ValueError: Couldn't find 'checkpoint' file or checkpoints in given directory chinese_L-12_H-768_A-12/bert_model.ckpt HOT 1
- Problem at step 4 (training) when reproducing the code HOT 3
- Questions about training results
- GPU is not being used
- How to add early stopping: via a hook? HOT 1
- config_file HOT 1
- lstm_outputs = self.biLSTM_layer(lstm_inputs, self.lstm_dim, self.lengths): why is the shape of lstm_outputs (?, ?, 786), and why are the question marks not concrete numbers? HOT 4
- Running on another dataset with different labels raises "KeyError: 'B-Project'"; how can this be fixed? HOT 2
- How was the BIO data under data/ generated? How do I do BIO tagging?
- No logs HOT 1
- Question about iteration / step / epoch HOT 1
- A question HOT 1
- Why are precision and recall zero?
- Testing issues HOT 1