
REALY Benchmark

This is the official repository for 3D face reconstruction evaluation on the Region-aware benchmark based on the LYHM Benchmark (REALY). The REALY benchmark aims to introduce a region-aware evaluation pipeline to measure the fine-grained normalized mean square error (NMSE) of 3D face reconstruction methods from under-controlled image sets.

Evaluation Metric

Given a mesh reconstructed by a specific method from a 2D image in REALY, the benchmark measures the similarity between the ground-truth scan and the predicted mesh on four facial regions (nose, mouth, forehead, cheek). The detailed evaluation pipeline is described in the REALY paper.
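
For intuition only, here is a minimal sketch of a per-region error in the spirit of this pipeline (not the official implementation; REALY additionally performs global and region-wise alignment and non-rigid deformation before measuring errors, as described in the paper):

# Minimal sketch (NOT the official REALY code): nearest-neighbour error between
# an already-aligned predicted mesh and one ground-truth region.
import numpy as np
from scipy.spatial import cKDTree

def region_error(pred_vertices, gt_region_vertices):
    """Mean distance from each ground-truth region vertex to its nearest predicted vertex."""
    tree = cKDTree(pred_vertices)
    dists, _ = tree.query(gt_region_vertices)  # one nearest neighbour per ground-truth vertex
    return float(np.mean(dists))

# Hypothetical usage, assuming per-region ground-truth vertex arrays:
# errors = {name: region_error(pred, gt_regions[name]) for name in ("nose", "mouth", "forehead", "cheek")}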

REALY: Rethinking the Evaluation of 3D Face Reconstruction.
Zenghao Chai*, Haoxian Zhang*, Jing Ren, Di Kang, Zhengzhuo Xu, Xuefei Zhe, Chun Yuan, and Linchao Bao (* Equal contribution)
ECCV 2022
Project Page: https://www.realy3dface.com/
arXiv: https://arxiv.org/abs/2203.09729

Requirements

This evaluation implementation is tested under Windows, macOS, and Ubuntu environments. NO GPU is required.

Installation

Clone the repository and set up a conda environment with all dependencies as follows:

git clone https://github.com/czh-98/REALY
cd REALY
conda env create -f environment.yaml
conda activate REALY

  • NOTE: for Windows, you need to install scikit-sparse according to the guideline here.
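
After activating the environment, a quick sanity check that the key dependencies resolved (this snippet is only a convenience and not part of the repository; it assumes numpy and scipy are listed in environment.yaml, and uses sksparse, the import name of scikit-sparse):

# Hypothetical sanity check: these imports fail fast if the environment is incomplete.
import numpy
import scipy
from sksparse.cholmod import cholesky  # scikit-sparse; needs a manual install step on Windows
print("environment looks OK")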

Evaluation

1. Data Preparation

  • We have merged our benchmark into the Headspace dataset. Please sign the Agreement and state your intended usage of the REALY benchmark according to their guidelines; you will then be granted permission to download the benchmark data.

  • Download and unzip the benchmark file; it contains three folders. Put the "REALY_HIFI3D_keypoints/" and "REALY_scan_region/" folders into "REALY/data/".

  • Use the images in the "REALY_image/" folder to reconstruct 3D meshes with your method(s). Two versions (512x512) are provided: the cropped images and the original images with depth maps. Use them according to your needs.

  • [Important] Please save meshes as "*.obj", where "*" matches the name of the corresponding input image. NOTE: REALY only supports meshes that all share one fixed topology, so make sure the saved meshes have exactly the same topology as your template mesh (e.g., if you use Trimesh to save meshes, check that you have set "process=False"; see the sketch below).
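
For reference, a minimal way to export a prediction with Trimesh while keeping the template topology intact (the geometry below is a placeholder; in practice use your method's predicted vertices and your template's face indices):

# Sketch: export a reconstructed mesh without letting Trimesh merge or reorder vertices.
import numpy as np
import trimesh

# Placeholder geometry; replace with your predicted vertices and template faces.
pred_vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
template_faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

# process=False keeps the vertex order and count identical to the template topology.
mesh = trimesh.Trimesh(vertices=pred_vertices, faces=template_faces, process=False)
mesh.export("image_name.obj")  # the file stem must match the input image name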

2. Keypoints Preparation

  • [Important] To enable more accurate alignment, we extend the standard 68 keypoints to 85 by adding keypoints on the cheeks. Prepare the 85-keypoint barycentric file for your template topology; an example for the HIFI3D topology is given by "REALY/data/HIFI3D.obj" with the corresponding barycentric coordinates in "REALY/data/HIFI3D.txt" (a sketch for mapping such keypoints back to 3D positions is given after this list).

  • [Optional] NOTE: If you use the same template as one of the methods compared in the paper (e.g., BFM09, Deep3D, 3DDFA_v2, FLAME, HIFI3D, etc.), you can directly use the predefined keypoints in the "REALY/data/" folder. If you do not know how to export the barycentric file, you may skip this step and send one template mesh (".obj" file) to Zenghao Chai; the barycentric file will then be sent back to you.

  • Put your template mesh "*.obj" and the corresponding barycentric coordinate file "*.txt" into "REALY/data/".
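
For reference, barycentric keypoints can be mapped back to 3D positions on a template roughly as follows. This sketch assumes each line of the "*.txt" file stores a triangle index followed by three barycentric weights; verify the actual layout against "REALY/data/HIFI3D.txt" before relying on it:

# Sketch (format assumption: one keypoint per line as "face_index w0 w1 w2").
import numpy as np
import trimesh

mesh = trimesh.load("data/HIFI3D.obj", process=False)
bary = np.loadtxt("data/HIFI3D.txt")                  # (85, 4) under this assumption

face_idx = bary[:, 0].astype(int)                     # triangle index per keypoint
weights = bary[:, 1:4]                                # barycentric weights per keypoint
tri_vertices = mesh.vertices[mesh.faces[face_idx]]    # (85, 3, 3) triangle corners
keypoints_3d = np.einsum("kj,kjd->kd", weights, tri_vertices)  # (85, 3) keypoint positions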

3. Evaluation

  • To evaluate the results on the frontal/multi-view image sets, run
python main.py --REALY_HIFI3D_keypoints ./data/REALY_HIFI3D_keypoints/ --REALY_scan_region ./data/REALY_scan_region --prediction <PREDICTION_PATH> --template_topology <TEMPLATE_NAME> --scale_path ./data/metrical_scale.txt --save <SAVE_PATH>
  • Wait for the evaluation to finish; the NMSE of each region will be saved to "<SAVE_PATH>/REALY_error.txt", and the globally aligned mesh, the regionally aligned SP*, the deformed SH*, and the error maps will be saved to "<SAVE_PATH>/region_align_save/".

  • [Optional] If you want your method(s) listed on REALY, please send the reconstructed meshes and barycentric coordinate files to us; we will re-evaluate and verify the results and then update the project page accordingly.

HIFI3D++

If you want to use the 3DMM introduced in this paper, please refer to the instructions and demos.
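
As a rough illustration of loading the released 3DMM file in Python: the "EVs" key and its (526, 1) shape are taken from the issue further below, while everything else here is a guess, so please follow the official instructions and demos for actual usage:

# Sketch: inspect the HIFI3D++ 3DMM file. Only the "EVs" (eigenvalue) key is
# confirmed by the issue below; treat all other details as placeholders.
import numpy as np
import scipy.io

basis3dmm = scipy.io.loadmat("3DMM/files/HIFI3D++.mat")
print(sorted(k for k in basis3dmm if not k.startswith("__")))  # list the available fields

ev = basis3dmm["EVs"]                    # eigenvalues, shape (526, 1) according to the issue
sigma_shape = np.sqrt(ev / np.sum(ev))   # normalization used in the issue's workaround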

Contact

If you have any questions, please contact Zenghao Chai or Linchao Bao.

Citation

If you use the code or REALY evaluation pipeline or results in your research, please cite:

@inproceedings{REALY,
  title={REALY: Rethinking the Evaluation of 3D Face Reconstruction},
  author={Chai, Zenghao and Zhang, Haoxian and Ren, Jing and Kang, Di and Xu, Zhengzhuo and Zhe, Xuefei and Yuan, Chun and Bao, Linchao},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2022}
}

realy's People

Contributors

czh-98


realy's Issues

A small question about resolution

Here is the situation: the ground-truth model h and the predicted model p can differ greatly in resolution; the GT model is usually obtained with a scanner, so its vertex count is far larger than that of the predicted template model. From the paper, bICP also searches for one-to-one point pairs, i.e., the number of pairs finally used equals the vertex count of the predicted template p, which is a very reasonable way to evaluate. My question: if gICP is used in this setting, shouldn't the p->h direction reflect the true error better than the h->p direction? My understanding is that in the h->p direction many points on the GT model h would be matched to the same point on the predicted model p. (A minimal sketch of the two directions follows.)
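
A minimal sketch of the two matching directions discussed above, with h and p as plain vertex arrays (this only illustrates the asymmetry and is not the benchmark's bICP/gICP code):

# Sketch: one-directional nearest-neighbour errors between a dense GT scan h
# and a sparser predicted mesh p.
import numpy as np
from scipy.spatial import cKDTree

def nn_error(src, dst):
    """Mean distance from each source point to its nearest destination point."""
    dists, _ = cKDTree(dst).query(src)
    return float(np.mean(dists))

# Hypothetical usage with vertex arrays h (scan) and p (prediction):
# err_p_to_h = nn_error(p, h)  # one match per predicted vertex
# err_h_to_p = nn_error(h, p)  # many scan vertices may map to the same predicted vertex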

Face region mask

Thank you for releasing your awesome project.

I noticed you released the HIFI3D++ 3DMM file. Because I am not a university employee, I can't complete the dataset user agreement form.

Could you also provide the face region mask file for one example case?

Will the training code be released?

By unifying the topology, about 2000 face meshes covering different genders, ages, and ethnicities were obtained, and on this basis a full-head face 3DMM named HIFI3D++ was constructed. Will all of these data preprocessing scripts be released?

Shape's jaw fits abnormally

This work is wonderful!
I also followed your previous related work,
https://github.com/tencent-ailab/hifi3dface,
and directly replaced the original 3DMM file under hifi3dface/3DMM/files/ with HIFI3D++.mat.
The following error occurred when running:
f_sigma0 = (paras) / sigma  # (500, 1)
ValueError: operands could not be broadcast together with shapes (526,1) (500,1)
Then I worked around the error with the following changes.

In optimization/rgbd/step3_prefit_shape.py:

# Before (AI-NEXT-Shape): eigenvalues loaded from shape_ev.mat
datas = h5py.File(os.path.join(modle_base, "shape_ev.mat"), "r")
ev_f = np.asarray(datas.get("ev_f")).reshape(-1, 1)
sigma_shape = np.sqrt(ev_f / np.sum(ev_f))

# After (HIFI3D++): eigenvalues taken from the 'EVs' field of the 3DMM
sigma_shape = np.sqrt(basis3dmm['EVs'] / np.sum(basis3dmm['EVs']))  # (526, 1)

In hifi3dface/optimization/rgbd/RGBD_utils/AddHeadTool.py:

# Before (AI-NEXT-Shape)
ev_f = pca_info_h["ev_f"]
sigma_shape = np.sqrt(ev_f / np.sum(ev_f))

# After (HIFI3D++)
basis3dmm = scipy.io.loadmat("3DMM/files/HIFI3D++.mat")
sigma_shape = np.sqrt(basis3dmm['EVs'] / np.sum(basis3dmm['EVs']))

But the result is much worse (the result image was attached to the issue). Am I using HIFI3D++.mat incorrectly?

DECA evaluation with REALY

Hi, and thanks for sharing this great work!
I am YJHong, and I am currently developing my own 3D face reconstruction method based on DECA.

After reading the evaluation protocol in the README, I have the following questions about evaluating my model on REALY.

  • Keypoint preparation:
    • DECA uses FLAME, but the FLAME.obj uploaded at data/FLAME.obj is quite different from the FLAME template used in DECA. Should I use the DECA template file instead of the FLAME obj uploaded in this repo?
      • The uploaded FLAME.obj differs in shape and global position, while the FLAME template used in DECA is positioned around the origin (0, 0, 0).
    • Since the barycentric txt file contains a face index and barycentric coordinates, it seems it can be used with whichever FLAME template is loaded.
  • Evaluation:
    • What do the scales in the metrical_scale file mean? Are they used in the same way across 3DMM models?
    • Currently, the template_mask argument is set to None, but FLAME has a vertex mask representing the face region. Did you use this mask when evaluating DECA?

Thank you.
YJHong.

Containerize the project

This project could be containerized with Docker and run in an isolated environment.
