darkface_eval_tools's Issues
Could you please provide a more detailed input/output file format instruction?
When I run:
octave df_eval.m YOUR_ALGORITHM_NAME ./data/gt/ /root/UG2/Sub_challenge2_1/output/userid/output/submission_#/
the code needs the ./output directory to be created beforehand.
Could you also specify the gt and submission txt data format, e.g.:
x1 y1 x2 y2 conf
x1 y1 x2 y2 conf
(x and y are ints, conf is a float)
or give more detailed instructions on how to run the code with Octave?
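Assuming the per-line layout the asker describes (`x1 y1 x2 y2 conf`, corner coordinates plus a confidence), writing and parsing such a file is straightforward. This is a sketch of that assumed format, not a format confirmed by the repository; the file name `1.txt` follows the gt-file naming mentioned elsewhere in these issues.

```python
# Hypothetical detections in the assumed "x1 y1 x2 y2 conf" format:
# integer corner coordinates and a float confidence per line.
detections = [
    (12, 34, 56, 78, 0.91),
    (100, 120, 180, 200, 0.45),
]

# Write one .txt file per image, e.g. 1.txt for image 1.
with open("1.txt", "w") as f:
    for x1, y1, x2, y2, conf in detections:
        f.write(f"{x1} {y1} {x2} {y2} {conf:.6f}\n")

# Read it back: integer corners, float confidence.
parsed = []
with open("1.txt") as f:
    for line in f:
        x1, y1, x2, y2, conf = line.split()
        parsed.append((int(x1), int(y1), int(x2), int(y2), float(conf)))

print(parsed)
```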
How to use
Hi,
When I use the recommended command 'docker run --rm -it ....', I have some questions.
'''
the second one is the folder which contains ground_truth
the third one is the folder which contains your submission
'''
- Is the gt folder (./data/gt) inside the docker image? It shows 'Can not find the gt file ./data/gt/1.txt' when I run the command.
- About my prediction files: I only have 100 test images, not the complete set, and I don't know what the prediction file should contain.
Please help, thanks!
Broken google drive link for sample testing images
Got a 404 for sample testing images. Can you please update the link?
Thank you!
Why is the predicted confidence normalized?
As seen in the function norm_score: why not use the original confidence scores to calculate the AP?
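Based on the function name and the linked code (this is an assumption, not a verbatim port), norm_score appears to apply min-max rescaling. One observation worth noting: min-max normalization is monotonic, so the ranking of detections by score, which is what AP depends on, is unchanged.

```python
# Sketch of min-max score normalization (assumed behaviour of norm_score).
scores = [0.2, 0.9, 0.5, 0.7]

lo, hi = min(scores), max(scores)
normed = [(s - lo) / (hi - lo) for s in scores]

# The sort order by score is identical before and after normalization,
# because the rescaling is strictly increasing.
order_before = sorted(range(len(scores)), key=lambda i: scores[i])
order_after = sorted(range(len(normed)), key=lambda i: normed[i])
assert order_before == order_after
print(normed)  # scores rescaled into [0, 1]
```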
Having issues in running docker
Hi Guys,
There are some issues when I try to run docker. Specifics below.
This is the command I run:
sudo docker run --rm -it -v /home/ayesha/user_output_path2/:/tools/data -v /home/ayesha/:/tools/output scaffrey/eval_tools myTester ./data/gt/ /tools/data/
-
It reads all prediction files (in folder user_output_path2) and at the end displays the correct AP for DSFD, PyramidBox, etc. However, for my algorithm (myTester) it gives an AP equal to NaN. I am using your sample results file as predictions (the ones provided by the UG2 challenge); I have plotted them on the 100 images and they look reasonable, so the NaN AP must be a technical issue.
-
It also says, when I run the above command, "Can not find the gt file ./data/gt/54.txt".
-
Why does your code also say "Norming prediction" when the predictions in the sample txt files are already normalized between 0 and 1?
Thanks anyway
AP = 0 for perfect submission (i.e. predict=ground truth)
I tried setting predict = ground truth box with score = 1.
This gives AP = 0.
The YOUR_ALGORITHM_NAME.mat contains:
.. Created by Octave 6.0.0, Thu May 09 10:21:16 2019 UTC root@ff58cad34ac8
.. name: pr_curve
.. type: matrix
.. rows: 1000
.. columns: 2
NAN 0
NAN 0
NAN 0
...
I also note that if I set predict = ground truth box with score = random number between 0.9 and 1.0,
this gives AP = 0.84.
I would expect AP = 1.00 in both cases.
Please note that the confidence is required to be between 0 and 1,
and the thresh_num used is 1000:
https://github.com/Ir1d/DARKFACE_eval_tools/blob/master/evaluation.m#L11
https://github.com/Ir1d/DARKFACE_eval_tools/blob/master/evaluation.m#L70
Sorry for the inconvenience.
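The NaN rows in the pr_curve matrix are consistent with a min-max normalization dividing by zero: when every prediction has the same confidence (e.g. all 1.0), max minus min is 0, and 0/0 evaluates to NaN in Octave. This sketch mimics that arithmetic in Python (where bare 0.0/0.0 would instead raise, so the NaN is produced explicitly); it is an illustration of the suspected failure mode, not the repository's actual code.

```python
import math

def norm_scores(scores):
    # Assumed min-max normalization, as in the eval tool's norm_score.
    lo, hi = min(scores), max(scores)
    span = hi - lo
    out = []
    for s in scores:
        if span == 0.0:
            out.append(float("nan"))  # what 0/0 evaluates to in Octave
        else:
            out.append((s - lo) / span)
    return out

# Every prediction has score = 1, as in this issue: all outputs are NaN,
# which then propagates into the PR curve and yields AP = 0.
normed = norm_scores([1.0, 1.0, 1.0])
assert all(math.isnan(v) for v in normed)
```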
Cannot reach the reported mAP using the pre-trained DSFD
Hi,
I tested DarkFace images using code from https://github.com/yxlijun/DSFD.pytorch (VGG backbone), but only got an mAP of 32.8%, whereas your paper reports an mAP of 51.6%. I am not sure why. Could you provide more testing details (such as input image size, NMS threshold, etc.)?
Is there a detailed tutorial for this? I don't know how to use it.
Why was the output format changed?
cd8a93d#diff-8ea6e0136b52bee6f4d87396ffacc0acL53
The output format seems to be changed from (x,y,w,h,conf) to (xmin,ymin,xmax,ymax,conf).
Which format should we use?
Thank you.
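The two formats in this issue differ only in the last two values: (x, y, w, h) stores width and height, while (xmin, ymin, xmax, ymax) stores the opposite corner. Whichever format the tool finally expects, converting between them is trivial; this sketch assumes pixel coordinates.

```python
def xywh_to_corners(x, y, w, h):
    # (x, y, w, h) -> (xmin, ymin, xmax, ymax)
    return (x, y, x + w, y + h)

def corners_to_xywh(xmin, ymin, xmax, ymax):
    # (xmin, ymin, xmax, ymax) -> (x, y, w, h)
    return (xmin, ymin, xmax - xmin, ymax - ymin)

box = (10, 20, 30, 40)                   # x, y, w, h
corners = xywh_to_corners(*box)          # -> (10, 20, 40, 60)
assert corners_to_xywh(*corners) == box  # round-trips exactly
```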
Dry Run ranking
Hi Guys,
When will you upload the rankings for the dry run of track 2.2? It is already late.
Thanks
The mAP is 0.479 for DSFD using DSFD.mat
Hi,
I calculated mAP (area under the PR-curve) using mAP = np.trapz(DSFD[:, 0], DSFD[:, 1]), but got 0.479 instead of the reported 0.516 from the paper.
I am not sure why.
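For reference, np.trapz integrates its first argument over its second, so with a pr_curve matrix whose first column is precision and second is recall (an assumed layout, matching the call in this issue), the expression np.trapz(pr[:, 0], pr[:, 1]) is the trapezoidal area under the PR curve. The numbers below are synthetic, purely to illustrate the argument order; they say nothing about the source of the 0.479 vs 0.516 discrepancy.

```python
import numpy as np

# Synthetic PR curve; assumed column order [precision, recall].
pr_curve = np.array([
    [1.00, 0.0],
    [0.90, 0.3],
    [0.75, 0.6],
    [0.60, 0.9],
])

precision, recall = pr_curve[:, 0], pr_curve[:, 1]
ap = np.trapz(precision, recall)  # integrate precision over recall
print(round(float(ap), 4))
```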
Challenge ranking
Hi,
Why is the challenge ranking still not available?
There might be a division by zero with only one predicted result
With only one predicted face, the max score equals the min score, because
max_score = max(max_score, max(score_list)); min_score = min(min_score, min(score_list));
and the following line can then divide by zero:
norm_score_list = (score_list - min_score)/(max_score - min_score);
https://github.com/Ir1d/DARKFACE_eval_tools/blob/master/norm_score.m#L27
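One possible guard for the single-prediction case the issue describes is to special-case a zero span before dividing. This is a sketch of one defensive choice (mapping identical scores to 1.0), not the repository's fix.

```python
def safe_norm(score_list, min_score, max_score):
    # Guarded version of (score - min) / (max - min).
    span = max_score - min_score
    if span == 0:
        # All scores identical (e.g. a single prediction): return 1.0
        # instead of NaN so the detection still counts at every threshold.
        return [1.0 for _ in score_list]
    return [(s - min_score) / span for s in score_list]

print(safe_norm([0.8], 0.8, 0.8))        # single prediction -> [1.0]
print(safe_norm([0.2, 0.8], 0.2, 0.8))   # normal case -> [0.0, 1.0]
```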
docker start failed
Hi,
When I use the recommended command 'docker run --rm -it ....', docker run fails:
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
error: textscan: invalid stream number = -1
error: called from
df_eval at line 15 column 11
We have found a bug and are working on a fix. Sorry for the inconvenience.