pterhoer / FaceImageQuality
Code and information for face image quality assessment with SER-FIQ
Hi, I ran your method on a PC (Intel Core i7-4790K @ 4 GHz, 16 GB RAM, GeForce GTX 1080) to test its runtime and found that it needs about 2-3 s per image. Isn't that a bit slow?
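For reference, a minimal timing sketch (my own illustration, not the authors' benchmark). It uses the SER_FIQ / get_score(T=100) API that appears in other issues on this page and assumes an already-aligned 112x112 RGB crop, so detection and model loading are excluded from the timed region:

import time
import cv2
import numpy as np
from face_image_quality import SER_FIQ

ser_fiq = SER_FIQ(gpu=0)                   # load models once, outside the timing
img = cv2.cvtColor(cv2.imread("./data/test_img.jpeg"), cv2.COLOR_BGR2RGB)
aligned = np.transpose(img, (2, 0, 1))     # HWC -> CHW, as in other issues here

start = time.perf_counter()
score = ser_fiq.get_score(aligned, T=100)  # the T stochastic passes dominate runtime
print(f"score={score:.4f}, {time.perf_counter() - start:.2f} s per image")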
Hi, when will you release the SER-FIQ on FaceNet code? Thanks.
I tried running serfiq_example.py and got results for the two test images you provided in ./data/: the score of test_img.jpeg is 0.89, and the score of test_img2.jpeg is 0.87. Are these results correct? I also tested other images, such as profile and frontal faces, but their scores are indistinguishable. Is there something wrong with my usage? Thank you for your reply!
Greetings. I tried to reproduce the result of your solution, running the code as-is without changing anything.
Environment: AWS MXNet 1.8 (+ Keras 2) with Python 3 (CUDA and Intel MKL-DNN)
from face_image_quality import InsightFace, SERFIQ, get_embedding_quality
import cv2

insightface_model = InsightFace()
ser_fiq = SERFIQ()
test_img = cv2.imread("./data/test_img2.jpeg")
embedding, score = get_embedding_quality(test_img, insightface_model, ser_fiq)
print("SER-FIQ quality score of image 1 is", score)

Output: SER-FIQ quality score of image 1 is 0.9999999850988388
At the same time, sending almost any image, I get a score of about 0.9999.
Please tell me, what could be wrong?
Hi pterhoer,
I am confused about the threshold setting in your paper. In Section 4 it reads: "The face recognition performance is reported in terms of EER and FNMR at a FMR threshold of 0.01. The FNMR is also reported at 0.001 FMR threshold". In other papers, as far as I know, a threshold T is set, and if the cosine/Euclidean distance between two images is smaller than T, the pair is claimed to be a match (a false match when the pair is actually an impostor pair). Does the threshold in your paper mean the same thing?
Thanks!
Li
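For what it's worth, here is a minimal sketch of the usual convention (my own illustration, not the authors' evaluation code): the threshold T is chosen on the impostor distances so that the false match rate hits the target (0.01 or 0.001), and the FNMR is then measured on the genuine distances at that same T.

import numpy as np

def fnmr_at_fmr(genuine_dist, impostor_dist, target_fmr=0.01):
    # Distances: smaller = more similar. Choose T so that a fraction
    # target_fmr of impostor pairs falls below T (false matches), then
    # report the fraction of genuine pairs at or above T (false non-matches).
    t = np.quantile(np.asarray(impostor_dist), target_fmr)
    fnmr = np.mean(np.asarray(genuine_dist) >= t)
    return fnmr, t

# Hypothetical distance distributions, for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.2, 10_000)
impostor = rng.normal(1.4, 0.2, 100_000)
print(fnmr_at_fmr(genuine, impostor, target_fmr=0.001))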
@pterhoer @jankolf Thanks for such a wonderful paper.
I have some queries related to the implementation part.
Firstly, as I understand it, there is no specific training for the SER-FIQ model. Basically, we only need a pretrained MTCNN and a pretrained face recognition model, say ArcFace+ResNet100, which could also be trained from scratch on my own dataset, and pass these as parameters to your serfiq_example.py file. Am I correct?
Or, in addition to those two, do we also need pretrained SER-FIQ model weights? What does ./data/pre_fc1_bias.npy store? How can I train this on my own dataset?
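My reading (an assumption from the repo layout, not confirmed by the authors): the .npy files under ./data/ hold the parameters of the recognition model's pre_fc1 embedding layer, exported once from the pretrained insightface checkpoint so the Keras on-top model can reuse them without any SER-FIQ-specific training; a model trained on your own dataset would need its corresponding layer exported the same way. A sketch, where pre_fc1_weights.npy is my guess at the companion file name:

import numpy as np

# Assumed contents: the dense pre_fc1 layer of the MXNet recognition
# model, i.e. a (512, 25088) weight matrix and a (512,) bias vector.
weights = np.load("./data/pre_fc1_weights.npy")  # hypothetical file name
bias = np.load("./data/pre_fc1_bias.npy")
print(weights.shape, bias.shape)
# Keras Dense layers store kernels as (in_dim, out_dim), so an MXNet
# (out, in) matrix would need a transpose: dense.set_weights([weights.T, bias])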
100/100 [00:04<00:00, 23.29pass/s]
Test platform: RTX 3090
Hi pterhoer,
Thanks for your excellent work! I'm reproducing your work; could you please release your code for the evaluation of the error-versus-reject curves?
Best,
Li
Hello,
I'm trying to reproduce the paper result (error-versus-reject curve for LFW, ArcFace). Do you compute FMR 0.001 over every genuine and impostor pair combination of LFW, or do you use the official pairs protocol?
FaceImageQuality/face_image_quality.py, line 262 in d9eff50
"Since Gal et al. [12] proved that applying dropout repetitively on a network approximates the uncertainty of a Gaussian process [33], the Euclidean distance is a suitable choice for d(xi, xj)."
FaceImageQuality/face_image_quality.py, lines 187 to 188 in d9eff50
Nice work in this repo.
I have read through other closed issues, and it seems normal that two models have very different quality score ranges under the SER-FIQ method. Normalization is recommended to map the numbers to the same range purely for readability, since only the ranking matters while the actual values do not (see the rescaling sketch below).
I do have some thoughts here. Suppose two models are observed to have different quality values and range widths on the same dataset, e.g. Model 1 has range [0.2, 0.6], i.e. 0.4 ± 0.2, and Model 2 has range [0.8, 0.82], i.e. 0.81 ± 0.01.
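To make that concrete, a minimal min-max rescaling sketch (my own illustration; any monotone rescaling preserves the ranking and hence the error-versus-reject behavior):

import numpy as np

def minmax_rescale(scores):
    # Map scores to [0, 1]; only the ordering of the values matters.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

print(minmax_rescale([0.2, 0.4, 0.6]))     # wide-range model   -> [0.  0.5 1. ]
print(minmax_rescale([0.80, 0.81, 0.82]))  # narrow-range model -> [0.  0.5 1. ]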
Any thoughts or discussions are welcome.
I ran the example many times; test img1 scores 0.55 and test img2 scores 0.54, but I think the quality of img1 is much higher than that of img2. Is something wrong?
SER-FIQ quality score of image 1 is 0.8926710573645206
SER-FIQ quality score of image 2 is 0.8782863740164669
Hi pterhoer,
I used the SER-FIQ (on-top model) algorithm in three scenarios: face recognition, person re-ID, and vehicle re-ID. However, it only performs well in face recognition.
In person re-ID and vehicle re-ID it performs worse than NIQE, BRISQUE, and PIQE. Do you have any idea why SER-FIQ performs badly in other scenarios?
Looking forward to your reply!
Thanks
Hi,
To generate the plots in the paper for LFW, do you drop pairs from LFW's pairs.txt?
If so, a method that drops more pairs will end up with the best FNMR vs. ratio-of-unconsidered-images plot and would hence look better.
How do you ensure the same number of pairs is evaluated even though different images are dropped for each IQA method?
For example, I was comparing your method against BRISQUE. In my experiments, BRISQUE dropped images in a way that reduced the total number of evaluation pairs from LFW's pairs.txt compared to your method. I suspect this is why BRISQUE outperforms SER-FIQ in my experiments.
If possible, could you please explain the evaluation procedure in detail?
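One way to keep such a comparison fair (my own sketch of a common protocol, not necessarily the authors'): fix the decision threshold and the candidate pair set once, then at each reject ratio drop the same fraction of lowest-quality pairs for every IQA method before recomputing the FNMR, so no method gains by discarding more pairs than the ratio allows.

import numpy as np

def error_vs_reject(pair_dist, pair_quality, is_genuine, threshold, ratios):
    # pair_quality: one value per pair, e.g. the min of the two images' scores.
    # At each ratio r, drop the r fraction of lowest-quality pairs and
    # recompute FNMR on the surviving genuine pairs at a fixed threshold.
    order = np.argsort(pair_quality)  # lowest quality first
    fnmrs = []
    for r in ratios:
        keep = order[int(r * len(order)):]
        genuine = keep[is_genuine[keep]]
        fnmrs.append(np.mean(pair_dist[genuine] >= threshold))
    return fnmrs

# Hypothetical pairs, for illustration only.
rng = np.random.default_rng(1)
n = 6000
is_genuine = rng.random(n) < 0.5
pair_dist = np.where(is_genuine, rng.normal(0.8, 0.2, n), rng.normal(1.4, 0.2, n))
print(error_vs_reject(pair_dist, rng.random(n), is_genuine, 1.1, [0.0, 0.1, 0.2]))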
Hello, when I tried to run the code I found that two modules referenced in face_image_quality.py, lines 73 and 74, are missing. I can download an MTCNN implementation in its MXNet version, but I don't know the rules of preprocess.py. Can you share the file, or share enough detail that I can reproduce it? Thank you.
Hi, I have two questions about the SER-FIQ quality score when I run serfiq_example.py with three different models (model-r34-amf-slim, model-r50-am-lfw, model-r100-ii).
The FNMR is reported at 0.001 FMR across ratios of unconsidered images, which are determined by quality. I am not sure whether impostor pairs are also filtered by quality when calculating FMR 0.1%. For example, LFW has genuine and impostor pairs. The genuine pairs are filtered according to the ratio of unconsidered images before computing the FNMR. But are the impostor pairs also filtered by quality when computing FMR 0.1%?
Thanks for your contribution. I can't find the mtcnn_detector and face_preprocess functions; where can I find them?
In your paper, in Section 4, on-top model preparation, you state that for networks trained without dropout you build a new network: "It consists of five layers with n_emb/128/512/n_emb/n_ids dimensions". So the two intermediate layers have 128 and 512 dimensions. What does n_emb mean? And why does this network depend on n_ids?
In your code, in class SERFIQ, you have:
from tensorflow.keras.layers import Input, Dropout, Dense, BatchNormalization, Lambda

inputs = Input(shape=(25088,))
x = inputs
x = Dropout(0.5)(x, training=True)  # training=True keeps dropout active at inference
x = Dense(512, name="dense", activation="linear")(x)
x = BatchNormalization()(x)
x = Lambda(euclid_normalize)(x)     # euclid_normalize: the repo's L2-normalization helper
output = x
So the network structure does not correspond to what you wrote in the paper!?
Why 25088?
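A plausible answer to the 25088 question (my assumption based on the common LResNet100E-IR insightface architecture, not confirmed by the authors here): it is the flattened output of the recognition network's last convolutional stage, a 7x7 feature map with 512 channels for 112x112 input, which the on-top model takes as its input.

# Assumption: the backbone's final conv stage yields 7x7x512 for 112x112 input.
h, w, c = 7, 7, 512
print(h * w * c)  # 25088: the flattened feature fed to the on-top model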
Hi pterhoer, nice work!
Dear authors, some dependencies of this repo are not specified, which prevents running the code out of the box.
Such as:
By the way, I'm not familiar with face toolkits.
Thanks for your time.
Hi, thanks for sharing the source code.
I have installed everything based on the requirements.txt file. Now I'm getting an "OSError: libcudnn.so.7: cannot open shared object file: No such file or directory" error when I run the serfiq_example.py file.
Could you please advise?
Thanks
RuntimeError: simple_bind error. Arguments:
data: (1, 3, 112, 112)
What should I do?
Hello, this is a very nice project, but I still have a problem with the installation of the required packages.
The README says that mxnet and tensorflow 1.14.0 should be installed. However, I found that if I first install tensorflow 1.14.0 and then mxnet-gpu, installing mxnet-gpu forces tensorflow to be upgraded to a higher version.
If I downgrade tensorflow back to 1.14.0, it says that some packages are not compatible. But if I install mxnet-gpu with --no-deps, it turns out some other packages are in conflict.
For some reason I cannot use conda envs, so I can only install them in the base environment. I'm not sure whether anyone else has run into the same problem? Thanks a lot.
ValueError: cannot reshape array of size 7618528 into shape (512,25088)
Hello Philipp,
I don't quite understand the calculation of the face quality. You define the face quality of an image as the sigmoid of the negative mean Euclidean distance d(xi, xj) between all stochastic embedding pairs, and you mention that Gal proved that applying dropout repetitively to a network approximates the uncertainty of a Gaussian process, making the Euclidean distance a suitable choice for d(xi, xj). However, Gal uses the variance as the uncertainty of the output, whereas you use the sigmoid of the negative mean Euclidean distance, not the variance. I really want to know the relationship between the negative mean Euclidean distance and the variance.
I would be very grateful if you could help me.
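For what it's worth, the mean pairwise distance is itself a dispersion measure: for i.i.d. embeddings with covariance S, E||xi - xj||^2 = 2 * trace(S), so low variance across the stochastic passes directly implies small pairwise distances. Below is a minimal sketch of the score as that sentence describes it (my own re-implementation of the stated formula, not the repo's exact get_score, whose scaling constants may differ):

import numpy as np
from itertools import combinations

def serfiq_style_score(stochastic_embs):
    # stochastic_embs: (m, d) L2-normalized embeddings from m forward
    # passes with different dropout masks. Tight clusters (low model
    # uncertainty) give small mean distances and scores close to 1.
    dists = [np.linalg.norm(a - b) for a, b in combinations(stochastic_embs, 2)]
    return 1.0 / (1.0 + np.exp(np.mean(dists)))  # sigmoid(-mean distance)

embs = np.random.default_rng(0).normal(size=(100, 512))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)
print(serfiq_style_score(embs))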
Hi,
I've installed everything based on the requirements.txt file, but I'm getting a "cannot import name 'ImageRecordInt8Iter' from 'mxnet.io.io'" error when I run the serfiq_example.py file.
Please advise.
Thanks.
I have passed the path to the Insightface repository to the InsightFace class in face_image_quality.py, but it still says:
Traceback (most recent call last):
  File "serfiq_example.py", line 9, in <module>
    insightface_model = InsightFace()
  File "/data/xxx/open_source_code/FaceImageQuality/face_image_quality.py", line 74, in __init__
    from face_preprocess import preprocess
ModuleNotFoundError: No module named 'face_preprocess'
Everyone expects this to return a numeric value describing the quality of the face, but that is not the case. It just checks whether the vectors can handle different situations.
Hi! Thank you for this method, but I have a problem: the results for all images are almost the same, so there is no good discrimination.
Looking forward to your answer!
Hi pterhoer,
Your code seems to implement the "on-top model" variant of SER-FIQ. I am trying to transfer it to the "same model" variant but failed. Could you please release the "same model" code?
Hi,
Which MXNet version should we install? I'm getting tons of errors regarding mxnet. My CUDA version is 9.1, so I installed mxnet-cu91. Then I got a "cannot import name 'ImageRecordInt8Iter' from 'mxnet.io.io'" error, and I changed some deprecated functions to fix it. Now I'm getting a "cannot import name 'multi_sum_sq' from 'mxnet.ndarray'" error and have no idea how to fix it, as I didn't find anything online about this error.
Please advise.
Thanks.
Hello, after installing everything and downloading the models, I get the following error when I run serfiq_example.py:
Traceback (most recent call last):
  File "serfiq_example.py", line 2, in <module>
    from face_image_quality import SER_FIQ
  File ".../FaceImageQuality/face_image_quality.py", line 22, in <module>
    import mxnet as mx
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/__init__.py", line 23, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/context.py", line 23, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/base.py", line 356, in <module>
    _LIB = _load_lib()
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/base.py", line 347, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File ".../anaconda3/envs/faceQuality/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnccl.so.2: cannot open shared object file: No such file or directory
Edit: I ran it on CPU because I didn't find a way to run it on CUDA 12.2.
I am testing the SER-FIQ score on several images; some are very blurry or have large pose angles, while others are high-quality faces. However, in my tests the low-quality images often get higher scores than the high-quality faces. How should I investigate this?
Hello, I don't know how to create them, and can they be used with every recognition model?
@pterhoer First of all, many thanks to the authors for this work!
A description of test results using data from real-world environments:
On complete faces it predicts correct results.
Occluded faces (self-occlusion, occlusion by objects) and incomplete faces should also be rated as low-quality faces, but in these cases the assessment is not correct; these cases are the main concern in practical applications.
May I ask whether there is a suitable way to handle this?
Great job and a novel idea! I'm trying to test it on our private dataset and am wondering if there is a hint (or code) for the error-versus-reject curves. Thanks!
I ran the example serfiq_example.py and got the following results:
SER-FIQ quality score of image 1 is 0.8926241124725216
SER-FIQ quality score of image 2 is 0.878391420280302
Is this result reasonable? I think the quality of picture 1 is significantly better than that of picture 2, but the two scores are almost equal.
How do you divide the images in a dataset into high quality and low quality?
Please provide a link. Thanks.
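In case it helps, one simple recipe (my own sketch, not an official protocol): since only the ranking of SER-FIQ scores is meaningful, split at a score quantile rather than at a fixed absolute value.

import numpy as np

def split_by_quality(paths, scores, low_quantile=0.25):
    # Label the bottom quantile as low quality, the rest as high quality.
    # The cut is relative to this dataset/model, since absolute ranges vary.
    t = np.quantile(scores, low_quantile)
    high = [p for p, s in zip(paths, scores) if s >= t]
    low = [p for p, s in zip(paths, scores) if s < t]
    return high, low

high, low = split_by_quality(["a.jpg", "b.jpg", "c.jpg", "d.jpg"], [0.9, 0.2, 0.6, 0.5])
print(high, low)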
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
Hi, thanks for your work!
I found an interesting thing about the SER-FIQ model.
I used data/test_img.jpeg for testing.
First, I cropped the output of this line and saved it as a JPG file, naming it 1.jpg.
Second, I used the ffmpeg command ffmpeg -i 1.jpg -q:v 10 2.jpg to decrease the quality of the image and saved the result as 2.jpg.
Third, I used my own face detection model (UltraFace) and landmark detection model (FAN) to detect and align the face, and saved the face image as 3.jpg.
Fourth, I tested the three images with the SER-FIQ model using the following code:
import glob
import os
import cv2
import numpy as np
from face_image_quality import SER_FIQ

ser_fiq = SER_FIQ(gpu=1)
test_img_folder = './data'
face_imgs = glob.glob(os.path.join(test_img_folder, '*.jpg'))
for face_img in face_imgs:
    test_img_ori = cv2.imread(face_img)
    test_img = cv2.cvtColor(test_img_ori, cv2.COLOR_BGR2RGB)
    aligned_img = np.transpose(test_img, (2, 0, 1))  # HWC -> CHW
    score = ser_fiq.get_score(aligned_img, T=100)
    new_path = os.path.join('outputs', str(score) + '_' + os.path.basename(face_img))
    cv2.imwrite(new_path, test_img_ori)
    print("SER-FIQ quality score of image is", score)
And the results are:
1.jpg: 0.8465793745298666
2.jpg: 0.8412792795675421
3.jpg: 0.05755140244918808
As you can see, the SER-FIQ model is robust to decreasing image quality (or image size); however, for images aligned by other models (not MTCNN), the score drops dramatically.
Have you encountered this problem, and do you know why it happens?
Thanks!