
Face Image Quality Assessment

15.05.2020 SER-FIQ (CVPR2020) was added.

18.05.2020 Bias in FIQ (IJCB2020) was added.

13.08.2021 The implementation now outputs normalized quality values.

30.11.2021 Related works section was added.

SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020

Abstract

Face image quality is an important factor to enable high-performance face recognition systems. Face quality assessment aims at estimating the suitability of a face image for recognition. Previous works proposed supervised solutions that require artificially or human-labelled quality values. However, both labelling mechanisms are error-prone as they do not rely on a clear definition of quality and may not know the best characteristics for the utilized face recognition system. Avoiding the use of inaccurate quality labels, we propose a novel concept to measure face quality based on an arbitrary face recognition model. By determining the embedding variations generated from random subnetworks of a face model, the robustness of a sample representation and thus, its quality is estimated. The experiments are conducted in a cross-database evaluation setting on three publicly available databases. We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry. The results show that our unsupervised solution outperforms all other approaches in the majority of the investigated scenarios. In contrast to previous works, the proposed solution shows a stable performance over all scenarios. Utilizing the deployed face recognition model for our face quality assessment methodology avoids the training phase completely and further outperforms all baseline approaches by a large margin. Our solution can be easily integrated into current face recognition systems and can be modified to other tasks beyond face recognition.

Key Points

  • Quality assessment with SER-FIQ is most effective when the quality measure is based on the deployed face recognition network, meaning that the quality estimation and the recognition should be performed on the same network. This way the quality estimation captures the same decision patterns as the face recognition system. If you use this model from this GitHub for your research, please make sure to label it as "SER-FIQ (on ArcFace)" since this is the underlying recognition model.
  • To get accurate quality estimations, the underlying face recognition network for SER-FIQ should be trained with dropout. This is suggested since our solution utilizes the robustness against dropout variations as a quality indicator.
  • The provided code is only a demonstration on how SER-FIQ can be utilized. The main contribution of SER-FIQ is the novel concept of measuring face image quality.
  • If the last layer contains dropout, it is sufficient to repeat the stochastic forward passes only on this layer (see the sketch below). This reduces the computation time to roughly that of a single face template generation: on ResNet-100, creating an embedding takes 24.2 GFLOPS, and creating an embedding plus estimating its quality takes only 26.8 GFLOPS (+10%).
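
Below is a hedged, minimal sketch of this scoring step (not the exact repository code). It assumes a hypothetical callable stochastic_pass that applies the dropout layer and the remaining layer(s) of the deployed network with dropout active (e.g. a Keras sub-model called with training=True); the quality formula matches the snippet quoted in the issues further below.

import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.preprocessing import normalize

def ser_fiq_score(stochastic_pass, features, T=100):
    # features: activations fed into the dropout layer for one face image
    # stochastic_pass: hypothetical helper running one stochastic forward pass
    embeddings = np.stack([stochastic_pass(features) for _ in range(T)])
    norm = normalize(embeddings, axis=1)
    # mean pairwise Euclidean distance between the T stochastic embeddings
    eucl_dist = euclidean_distances(norm, norm)[np.triu_indices(T, k=1)]
    # small spread (robust representation) maps to a quality close to 1
    return 2 * (1 / (1 + np.exp(np.mean(eucl_dist))))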

Results

Face image quality assessment results are shown below on LFW (left) and Adience (right). SER-FIQ (same model) is based on ArcFace and shown in red. The plots show the FNMR at 10^-3 FMR, as recommended by the best practice guidelines of the European Border Guard Agency Frontex. For more details and results, please take a look at the paper.
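
For reference only, here is a hedged sketch (not the authors' evaluation code) of how such an FNMR-at-fixed-FMR error-versus-reject curve could be computed from comparison scores and per-image quality values; the pair format and function names are illustrative assumptions.

import numpy as np

def fnmr_at_fmr(genuine_scores, impostor_scores, fmr=1e-3):
    # Higher score = more similar. Pick the threshold at which the impostor
    # acceptance rate equals the target FMR, then measure missed genuine pairs.
    threshold = np.quantile(impostor_scores, 1.0 - fmr)
    return float(np.mean(genuine_scores <= threshold))

def error_vs_reject(pairs, scores, qualities, reject_ratios, fmr=1e-3):
    # pairs: list of (idx_a, idx_b, is_genuine) image-index pairs
    # scores: comparison score per pair; qualities: quality value per image
    order = np.argsort(qualities)  # lowest-quality images first
    curve = []
    for ratio in reject_ratios:
        rejected = set(order[: int(ratio * len(qualities))].tolist())
        keep = [i for i, (a, b, _) in enumerate(pairs)
                if a not in rejected and b not in rejected]
        gen = np.array([scores[i] for i in keep if pairs[i][2]])
        imp = np.array([scores[i] for i in keep if not pairs[i][2]])
        curve.append(fnmr_at_fmr(gen, imp, fmr))
    return curve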

Installation

We recommend using a virtual environment with Python 3.7 or 3.8. To install the required packages, execute

pip install -r requirements.txt

or you can install them manually with the following command:

pip install mxnet-cuXYZ scikit-image scikit-learn opencv-python

Please replace mxnet-cuXYZ with the package matching your installed CUDA version (for example, mxnet-cu101 for CUDA 10.1). After the required packages have been installed, download the model files and place them in the

insightface/model

folder.

After extracting the model files, verify that your installation is working by executing serfiq_example.py. The scores of both example images should be printed.

The implementation for SER-FIQ based on ArcFace can be found here: Implementation.
In the paper, this is referred to as SER-FIQ (same model) based on ArcFace.
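
A minimal usage sketch, assuming the SER_FIQ class and get_score method that appear in the issue reports further below; the constructor argument and the input path are assumptions, and the input is expected to be an already MTCNN-aligned face crop (see serfiq_example.py for the authoritative version).

import cv2
import numpy as np
from face_image_quality import SER_FIQ

ser_fiq = SER_FIQ(gpu=0)  # assumption: run on GPU 0

# hypothetical path to an MTCNN-aligned face crop
aligned_bgr = cv2.imread("aligned_face.jpg")
aligned_rgb = cv2.cvtColor(aligned_bgr, cv2.COLOR_BGR2RGB)
chw = np.transpose(aligned_rgb, (2, 0, 1))  # HWC -> CHW, as in the issue code

score = ser_fiq.get_score(chw, T=100)  # T = number of stochastic forward passes
print("SER-FIQ quality score:", score)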

Bias in Face Quality Assessment

The best face quality assessment performance is achieved when the quality assessment solution builds on the templates of the deployed face recognition system. In our work "Face Quality Estimation and Its Correlation to Demographic and Non-Demographic Bias in Face Recognition", we showed that this leads to a bias transfer from the face recognition system to the quality assessment solution. For all investigated quality assessment approaches, we observed performance differences depending on demographic and non-demographic attributes of the face images.

Related Works

You might also be interested in some of our follow-up works:

Citing

If you use this code, please cite the following papers.

@inproceedings{DBLP:conf/cvpr/TerhorstKDKK20,
  author    = {Philipp Terh{\"{o}}rst and
               Jan Niklas Kolf and
               Naser Damer and
               Florian Kirchbuchner and
               Arjan Kuijper},
  title     = {{SER-FIQ:} Unsupervised Estimation of Face Image Quality Based on
               Stochastic Embedding Robustness},
  booktitle = {2020 {IEEE/CVF} Conference on Computer Vision and Pattern Recognition,
               {CVPR} 2020, Seattle, WA, USA, June 13-19, 2020},
  pages     = {5650--5659},
  publisher = {{IEEE}},
  year      = {2020},
  url       = {https://doi.org/10.1109/CVPR42600.2020.00569},
  doi       = {10.1109/CVPR42600.2020.00569},
  timestamp = {Tue, 11 Aug 2020 16:59:49 +0200},
  biburl    = {https://dblp.org/rec/conf/cvpr/TerhorstKDKK20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DBLP:conf/icb/TerhorstKDKK20,
  author    = {Philipp Terh{\"{o}}rst and
               Jan Niklas Kolf and
               Naser Damer and
               Florian Kirchbuchner and
               Arjan Kuijper},
  title     = {Face Quality Estimation and Its Correlation to Demographic and Non-Demographic
               Bias in Face Recognition},
  booktitle = {2020 {IEEE} International Joint Conference on Biometrics, {IJCB} 2020,
               Houston, TX, USA, September 28 - October 1, 2020},
  pages     = {1--11},
  publisher = {{IEEE}},
  year      = {2020},
  url       = {https://doi.org/10.1109/IJCB48548.2020.9304865},
  doi       = {10.1109/IJCB48548.2020.9304865},
  timestamp = {Thu, 14 Jan 2021 15:14:18 +0100},
  biburl    = {https://dblp.org/rec/conf/icb/TerhorstKDKK20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}


If you make use of our SER-FIQ implementation based on ArcFace, please additionally cite the original ArcFace module.

Acknowledgement

This research work has been funded by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.

License

This project is licensed under the terms of the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. Copyright (c) 2020 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt

Contributors

jankolf, pterhoer


Issues

The ser-fiq score is not right

I am testing the SER-FIQ score on several images; some of them are very blurry or show large pose angles, and some are high-quality faces. However, in my tests, low-quality images often receive higher scores than high-quality faces. How should I check this?

SERFIQ is not working correctly

Greetings. I tried to reproduce the results of your solution. I decided to run the code as-is, without changing anything.
Environment: AWS MX 1.8 (+Keras2) with Python3 (CUDA + Intel MKL-DNN)

from face_image_quality import InsightFace, SERFIQ, get_embedding_quality
import cv2

insightface_model = InsightFace()
ser_fiq = SERFIQ()
test_img = cv2.imread("./data/test_img2.jpeg")
embedding, score = get_embedding_quality(test_img, insightface_model, ser_fiq)
print("SER-FIQ quality score of image 1 is", score)

Output: SER-FIQ quality score of image 1 is 0.9999999850988388

At the same time, almost any image I send gets a score of about 0.9999.
Please tell me what could be wrong?

scores of lfw dataset are all around 0.89

Hello Philipp,

I am trying to reproduce your experiment, but encountered a strange phenomenon. I calculated the scores of all images in the LFW database and plotted their distribution. The scores are all around 0.89. Is this result normal?

[attached: score distribution plot]

I would appreciate it if you could help me.

Best,
Sitadivon

about same model and on-top model

Hi pterhoer:
Your code seems to implement the "on-top model" variant of SER-FIQ. I tried to adapt it to the "same model" variant but failed. Could you please release the "same model" code?

serfiq_example.py result?

I ran the example serfiq_example.py and got the following results:
SER-FIQ quality score of image 1 is 0.8926241124725216
SER-FIQ quality score of image 2 is 0.878391420280302
Are these results reasonable? I think the quality of picture 1 is significantly better than that of picture 2, but the two scores are almost equal.

preprocess

Hello, when I tried to run the code I found that two modules imported in face_image_quality.py at lines 73 and 74 are missing. I can download an MTCNN implementation in its MXNet version, but I don't know the preprocessing rules used in preprocess.py. Can you share the file, or some details, so that I can reproduce it? Thank you.

the relationship between the negative mean euclidean distance and variation?

Hello Philipp.
I don't quite understand the calculation of the face quality. You define the face quality of an image as the sigmoid of the negative mean Euclidean distance d(xi, xj) between all stochastic embedding pairs. You also mention that, since Gal proved that applying dropout repetitively on a network approximates the uncertainty of a Gaussian process, the Euclidean distance is a suitable choice for d(xi, xj). However, Gal uses the variance as the uncertainty of the output, whereas you use the sigmoid of the negative mean Euclidean distance rather than the variance. I really want to know the relationship between the negative mean Euclidean distance and the variance.
I would appreciate it if you could help me.

about equation 1 in paper

[attached: screenshot of Equation 1 from the paper]
Why is the sum divided by m^2? Shouldn't it be divided by the number of pairs C(m, 2) = m(m-1)/2, as in the code in this repository:

eucl_dist = euclidean_distances(norm, norm)[np.triu_indices(T, k=1)]   
return embedding, 2*(1/(1+np.exp(np.mean(eucl_dist))))

SER-FIQ eval

Hello,

I'm trying to reproduce the paper results (error-versus-reject curve for LFW with ArcFace). Are you computing the FMR of 0.001 over every possible genuine and impostor pair combination of LFW, or are you using the official pairs protocol?

environment error: simultaneous installation of mxnet-gpu and tensorflow1.14.0

Hello, this is a very nice project, but I have a problem with the installation of the required packages.
The README says that mxnet and tensorflow 1.14.0 should be installed. However, I found that if I first install tensorflow 1.14.0 and then mxnet-gpu, the installation of mxnet-gpu forces tensorflow to be upgraded to a higher version.
If I downgrade tensorflow back to 1.14.0, it says that some packages are not compatible. But if I install mxnet-gpu with --no-deps, some other packages turn out to be in conflict.
For some reason I cannot use conda envs, so I can only install everything in the base environment. Has anyone else run into the same problem? Thanks a lot.

Training from Scratch

@pterhoer @jankolf Thanks for such a wonderful paper.

I have some queries related to the implementation part.

Firstly, as I understand it, there is no specific training for the SER-FIQ model. Basically, we need a pre-trained MTCNN and a pre-trained face recognition model, say ArcFace+ResNet-100, which could also be trained from scratch on my own dataset. We just need to pass these as parameters to your serfiq_example.py file. Am I correct?

Or, in addition to the above two models, do we also need pre-trained SER-FIQ weights? What does ./data/pre_fc1_bias.npy store? How can I train this on my own dataset?

environment.yml file broken

Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • scikit-image==0.15.0=py36h6538335_0
  • numpy==1.14.2=py36h5c71026_0
  • protobuf==3.11.4=py36h33f27b4_0
  • tornado==6.0.4=py36hfa6e2cd_0
  • cffi==1.14.0=py36h7a1dbc1_0
  • requests==2.18.4=py36h4371aae_1
  • urllib3==1.22=py36h276f60a_0
  • cryptography==2.9.2=py36h7a1dbc1_0
  • py-mxnet==1.2.1=py36hcd68555_0
  • libpng==1.6.37=h2a8f88b_0
  • openssl==1.1.1g=he774522_0
  • grpcio==1.27.2=py36h351948d_0
  • win_inet_pton==1.1.0=py36_0
  • pyreadline==2.1=py36_1
  • scikit-learn==0.20.1=py36hb854c30_0
  • sqlite==3.31.1=h2a8f88b_1
  • pillow==7.1.2=py36hcc1f983_0
  • pywavelets==1.0.3=py36h452e1ab_1
  • python==3.6.9=h5500b2f_0
  • cudatoolkit==9.0=1
  • zstd==1.3.7=h508b16e_0
  • freetype==2.10.2=hd328e21_0
  • libtiff==4.1.0=h56a325e_0
  • icc_rt==2019.0.0=h0cc432a_1
  • wincertstore==0.2=py36h7fe50ca_0
  • wrapt==1.12.1=py36he774522_1
  • libprotobuf==3.11.4=h7bd577a_0
  • jpeg==9b=hb83a4c4_2
  • zlib==1.2.11=h62dcd97_4
  • vc==14.1=h0510ff6_4
  • kiwisolver==1.2.0=py36h246c5b5_0
  • libopencv==3.4.1=h875b8b8_3
  • cytoolz==0.10.1=py36hfa6e2cd_0
  • yaml==0.1.7=hc54c509_2
  • h5py==2.10.0=py36h5e291fa_0
  • opencv==3.3.1=py36h20b85fd_1
  • tensorflow==1.14.0=eigen_py36hf4fd08c_0
  • pyyaml==5.3.1=py36he774522_0
  • tensorflow-base==1.14.0=eigen_py36hdbc3f0e_0
  • libmxnet==1.2.1=gpu_mkl_hdf6cc24_1
  • vs2015_runtime==14.16.27012=hf0eaf9b_1
  • hdf5==1.10.4=h7ebc959_0
  • matplotlib-base==3.1.0=py36h3e3dc42_0
  • tensorboard==1.14.0=py36he3c9ec2_0
  • xz==5.2.5=h62dcd97_0
  • tk==8.6.10=hfa6e2cd_0
  • scipy==1.1.0=py36hc28095f_0

runtime

Hi, I ran your method on a PC (Intel Core i7-4790K @ 4 GHz, 16 GB RAM, GeForce GTX 1080) to test its runtime. I find it needs about 2-3 s per image. Isn't that a bit long?

Why use batch normalization and L2 normalization in SERFIQ?

  • First, I know InsightFace uses cosine distance as the similarity measure (so normalization is needed there), but SER-FIQ uses Euclidean distance. If you normalize here, it is equivalent to cosine distance!
    norm = normalize(X, axis=1)

Since Gal et al. [12] proofed that applying dropout repetitively on a network approximates the uncertainty of a Gaussian process [33], the euclidean distance is a suitable choice for d(xi, xj ).
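
As an illustration (not part of the repository): for L2-normalized embeddings, the squared Euclidean distance is a monotone function of the cosine similarity, ||a - b||^2 = 2 - 2 cos(a, b), so both distances induce the same ranking.

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=512), rng.normal(size=512)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)  # L2-normalize

lhs = np.sum((a - b) ** 2)   # squared Euclidean distance
rhs = 2 - 2 * np.dot(a, b)   # 2 - 2 * cosine similarity
assert np.isclose(lhs, rhs)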

Plots in the Paper

Hi,

To generate the plots in the paper for LFW, do you drop pairs from LFW's pairs.txt?
If this is the case, then a method that drops more pairs will end up having the best FNMR vs Ratio of Unconsidered Images Plot and would hence be better.
How do you ensure the same number of pairs are evaluated even if the different images are dropped for each IQA?

For example,
I was comparing your method against BRISQUE. In my experiments, BRISQUE dropped images in a way that reduced the total number of evaluation pairs from LFW's pairs.txt compared to your method. I feel this is why, in my experiments, BRISQUE outperforms SER-FIQ.

If possible, can you please explain the evaluation procedure in detail?

About the generalization of SER-FIQ algorithm

Hi, pterhoer:
I used the SER-FIQ (on-top model) algorithm in three scenarios: face recognition, person re-identification and vehicle re-identification. However, it only performs well for face recognition.
For person and vehicle re-identification it performs worse than NIQE, BRISQUE and PIQE. Do you have any idea why SER-FIQ performs badly in these other scenarios?
Looking forward to your reply!
Thanks

about the result

I ran the file 'serfiq_example.py' and got results for the two test images you provided in './data/'. The score of 'test_img.jpeg' is 0.89 and the score of 'test_img2.jpeg' is 0.87. Are these results correct? I also tested on other images, such as profile and frontal faces, but their results are indistinguishable. Is there something wrong with my usage? Thank you for your reply!

About the quality score ranges

Nice work in this repo.
I have read through the other closed issues, and it seems normal that two models have very different quality score ranges with the SER-FIQ method. Normalization is recommended to bring the numbers into the same range purely for readability, since only the ranking is important while the actual values are not.

I do have some thoughts here. Suppose two models are observed to have different quality values and range widths on the same dataset. E.g. Model 1 has range [0.2,0.6], i.e. 0.4+-0.2, and Model 2 has range [0.8,0.82], i.e. 0.81+-0.01.

  1. Is the quality value difference here (0.4 vs 0.81) an indicator of something? Is the range width difference (0.4 vs 0.02) also an indicator of something?
  2. Even with normalization, are the quality values comparable across the two models? What kind of normalization is required to make the quality values comparable between the two models?

Any thoughts or discussions are welcome.

error versus reject curves

Great job and a novel idea! I'm trying to test it on our private dataset and wondering if there is any hint (or code) for the error-versus-reject curves. Thanks!

The model works ONLY on MTCNN aligned faces

Hi, Thanks for your work!

I found an interesting thing about the SER-FIQ model.
I used data/test_img.jpeg for testing.

First, I cropped the output of this line and saved it as a jpg file, naming it 1.jpg.

Second, I used the ffmpeg command ffmpeg -i 1.jpg -q:v 10 2.jpg to decrease the quality of the image, and saved it as 2.jpg.

Third, I used my own face detection model (UltraFace) and landmark detection model (FAN) to detect and align the face, and saved the face image as 3.jpg.

Fourth, I tested the three images with the SER-FIQ model using the following code:

import glob
import os

import cv2
import numpy as np
from face_image_quality import SER_FIQ

ser_fiq = SER_FIQ(gpu=1)

test_img_folder = './data'

face_imgs = glob.glob(os.path.join(test_img_folder, '*.jpg'))
for face_img in face_imgs:
    test_img_ori = cv2.imread(face_img)
    test_img = cv2.cvtColor(test_img_ori, cv2.COLOR_BGR2RGB)
    aligned_img = np.transpose(test_img, (2, 0, 1))

    score = ser_fiq.get_score(aligned_img, T=100)
    new_path = os.path.join('outputs', str(score) + '_' + os.path.basename(face_img))

    cv2.imwrite(new_path, test_img_ori)

    print("SER-FIQ quality score of image is", score)

And the results are:

1.jpg: 0.8465793745298666
2.jpg: 0.8412792795675421
3.jpg: 0.05755140244918808

As you can see, the SER-FIQ score is robust to the decrease in image quality (or image size); however, for images aligned with other models (not MTCNN), the score drops dramatically.

Have you encountered this problem, and do you know why it happens?

Thanks!

Implemention of evaluation problem

The FNMR is reported at 0.001 FMR over ratios of unconsidered images, which are determined by quality. I am not sure whether impostor pairs are also filtered by quality when calculating the FMR of 0.1%. For example, the LFW dataset has genuine and impostor pairs. The genuine pairs are filtered according to the ratio of unconsidered images before computing the FNMR. But are the impostor pairs also filtered by quality before computing the FMR of 0.1%?

The value is not distinguishable

Hi! Thank you for this method, but I have a problem: the scores of all images are almost the same, and there is no good discrimination between them.
Looking forward to your answer!

There are some dependencies missing.

Dear authors, some dependencies of this repo are not specified, which prevents running the code directly, such as:

  1. mtcnn_detector?
  2. insightface model version?

By the way, I'm not familiar with face toolkits.
Thanks for your time.

quality score not always the same, and which model should I use

Hi, I have two questions about the SER-FIQ quality score when running serfiq_example.py with three different models (model-r34-amf-slim, model-r50-am-lfw, model-r100-ii).

  1. When one of the three models above is specified (e.g. model-r50-am-lfw), the quality score varies each time I run the code. For example, test_img1.jpg sometimes gets 0.551925 and sometimes 0.549972.
  2. The three models produce different results. The quality score of test_img1 is always higher than that of test_img2 when I use model-r50-am-lfw or model-r100-ii, while test_img1 scores lower than test_img2 when model-r34-amf-slim is used.
    Are these results reasonable? And which model should I use? Maybe I missed some detail?

About the FNMR, ERR code

Hi pterhoer:
Thanks for your excellent work! I'm reproducing your results; could you please release your code for the evaluation of the error-versus-reject curves?

Best
Li

About the threshold setting

Hi pterhoer:
I am confused about the threshold setting in your paper. In Section 4 it reads: "The face recognition performance is reported in terms of EER and FNMR at a FMR threshold of 0.01. The FNMR is also reported at 0.001 FMR threshold". In other papers, as far as I know, a threshold T is set: if the cosine/Euclidean distance between two images is smaller than T, it is claimed to be a false match. So, does the threshold in your paper mean the same thing?

Thanks!
Li

In the SER_FIQ(on top) model what does nemb signify?

In your paper, in Section 4 (On-top model preparation), you state that for networks trained without dropout a new network is used: "It consists of five layers with nemb/128/512/nemb/nids dimensions". The two intermediate layers have 128 and 512 dimensions. What does nemb mean? Why does this network depend on nids?

In your code, in class SERFIQ, you have

# imports added for readability; depending on the setup they come from keras or tensorflow.keras
from keras.layers import Input, Dropout, Dense, BatchNormalization, Lambda

# euclid_normalize is an L2-normalization helper defined in the repository code
inputs = Input(shape=(25088,))
x = inputs
x = Dropout(0.5)(x, training=True)
x = Dense(512, name="dense", activation="linear")(x)
x = BatchNormalization()(x)
x = Lambda(euclid_normalize)(x)
output = x

So the network structure does not correspond to what you have written in the paper!?

Why 25088?

Which Mxnet version should we install?

Hi,
Which MXNet version should we install? I'm getting tons of errors regarding mxnet. My CUDA version is 9.1, so I installed mxnet-cu91. Then I got the error "cannot import name 'ImageRecordInt8Iter' from 'mxnet.io.io'", and I changed some deprecated functions to fix it. Now I'm getting "cannot import name 'multi_sum_sq' from 'mxnet.ndarray'" and have no idea how to fix it, as I didn't find anything online about that error.
Please advise.
Thanks.

Some feedback from practical use

@pterhoer First of all, many thanks to the authors for this work!
A short description of tests with data from real-world environments:
On complete faces, the method predicts correct results.
For occluded faces (self-occlusion, occlusion by objects) and incomplete faces, which should also be rated as low-quality faces, the assessment is not correct in these cases. These cases are the main focus in practical applications.

May I ask whether there is a suitable way to handle them?

No module named 'face_preprocess'

I have passed the path to the Insightface repository to the InsightFace class in face_image_quality.py, but it still says:
Traceback (most recent call last):
  File "serfiq_example.py", line 9, in <module>
    insightface_model = InsightFace()
  File "/data/xxx/open_source_code/FaceImageQuality/face_image_quality.py", line 74, in __init__
    from face_preprocess import preprocess
ModuleNotFoundError: No module named 'face_preprocess'

Error running example

Hello, after installing and downloading models I get the following error when I run the serfiq_example.py:

Traceback (most recent call last):
  File "serfiq_example.py", line 2, in <module>
    from face_image_quality import SER_FIQ
  File ".../FaceImageQuality/face_image_quality.py", line 22, in <module>
    import mxnet as mx
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/__init__.py", line 23, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/context.py", line 23, in <module>
    from .base import classproperty, with_metaclass, _MXClassPropertyMetaClass
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/base.py", line 356, in <module>
    _LIB = _load_lib()
  File ".../anaconda3/envs/faceQuality/lib/python3.8/site-packages/mxnet/base.py", line 347, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File ".../anaconda3/envs/faceQuality/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnccl.so.2: cannot open shared object file: No such file or directory

Edit: I ran it on CPU because I didn't find a way to run it with CUDA 12.2.

Are the results ok?

SER-FIQ quality score of image 1 is 0.8926710573645206

SER-FIQ quality score of image 2 is 0.8782863740164669
