

AGORA: Avatars in Geography Optimized for Regression Analysis


This page contains information about how to use the code to evaluate on AGORA validation/test images.

If you are interested in other sections please see the following pages:

  1. Project joints and vertices: Project the joints and vertices using the SMPL/SMPL-X parameter file and camera information.
  2. Find corresponding masks: Find corresponding masks for images.
  3. Check prediction file format: Check the format of the prediction files before submitting for evaluation.
  4. Evaluation metric and protocol: Details about the evaluation metric and protocols.
  5. How to use kid model: Details on how to use the kid model with SMPL-X.
  6. AGORA GT processing

Update

  • Released the AGORA ground truth data processing code here. Processed files can be obtained from AGORA.

Evaluation on AGORA

If you want to evaluate the results of your 3D human pose and shape estimation method on AGORA validation images, follow the steps below. It is highly recommended to run the evaluation on the validation images before submitting results for the test images to the evaluation server.

Prerequisites

Create and activate a Python 3.8 virtual environment:

python3.8 -m venv path_to_virtual_env
source path_to_virtual_env/bin/activate

Installation

First, clone the smplx code:

$ git clone https://github.com/vchoutas/smplx.git

and install both packages (this repository and smplx) with pip:

$ pip install .
$ pip install ./smplx

Downloads

SMPL-X/SMPL model download

For SMPL-X evaluation:

Download the SMPL-X model and place the model files in demo/model/smplx. Rename the model files to SMPLX_MALE.npz, SMPLX_FEMALE.npz and SMPLX_NEUTRAL.npz if needed.

For SMPL evaluation:

Download the SMPL model and place it in demo/model/smpl. Rename the models to SMPL_MALE.pkl, SMPL_FEMALE.pkl and SMPL_NEUTRAL.pkl if needed.
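
After these steps the model folder should look like this:

demo/model
|-- smplx
|-- |-- SMPLX_MALE.npz
|-- |-- SMPLX_FEMALE.npz
|-- |-- SMPLX_NEUTRAL.npz
|-- smpl
|-- |-- SMPL_MALE.pkl
|-- |-- SMPL_FEMALE.pkl
|-- |-- SMPL_NEUTRAL.pkl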

SMPL-X/SMPL vertex indices download

Download the SMPL-X vertex indices from the "MANO and FLAME vertex indices" section and place them in utils.

SMPL-X/SMPL kid template model download

Download the kid template vertices from the SMIL/SMIL-X template section of AGORA Downloads and place them in utils.

Ground truth dataframe download

The ground truth dataframe contains all the information corresponding to the images in the dataset, e.g. camera parameters, joints, vertices, and ground truth fit paths. Download the validation Camera dataframe (with SMPL joints and vertices or with SMPL-X joints and vertices) from AGORA Downloads and extract all the .pkl files into the demo/gt_dataframe folder. See Ground Truth dataframe for more details about the different dataframes and how to use them.
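
A minimal sketch for inspecting one of the downloaded dataframes (the file name is an example; the exact set of columns varies between dataframe versions):

import pickle

# Each dataframe is a pickled pandas DataFrame with one row per image.
with open('demo/gt_dataframe/validation_0_withjv.pkl', 'rb') as f:
    df = pickle.load(f)

print(df.columns)                       # available annotation fields
print(df.iloc[0]['imgPath'])            # image file name of the first row
print(len(df.iloc[0]['gt_joints_2d']))  # number of annotated persons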

Preparation of prediction data for evaluation

In short, for each predicted person in an image, a dictionary should be generated and stored as a pickle file in the demo/predictions folder. Please check Prediction format for details about the required format of the prediction files, and go through that page carefully: if the format is not correct, the evaluation pipeline will fail.
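
A hedged sketch of writing one such pickle; every key name and the file naming convention below are illustrative placeholders, not the authoritative schema, which is defined on the Prediction format page:

import pickle
import numpy as np

# One pickle per predicted person. Key names here are placeholders only;
# consult the Prediction format page for the real schema.
prediction = {
    'joints': np.zeros((24, 2), dtype=np.float32),  # projected 2D joints
    'pose': np.zeros(72, dtype=np.float32),         # hypothetical pose vector
    'betas': np.zeros(10, dtype=np.float32),        # hypothetical shape vector
}

# The file name below is also illustrative.
with open('demo/predictions/example_image_personId_0.pkl', 'wb') as f:
    pickle.dump(prediction, f)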

Evaluate predictions

To run the evaluation for SMPL-X results, use the evaluate_agora executable:

$ evaluate_agora --pred_path demo/predictions/ --result_savePath demo/results/ --imgFolder demo/images/ --loadPrecomputed demo/gt_dataframe/  --baseline demo_model --modeltype SMPLX --indices_path utils --kid_template_path utils/smplx_kid_template.npy --modelFolder demo/model/ --onlybfh

To run the evaluation for SMPL results, use the evaluate_agora executable:

$ evaluate_agora --pred_path demo/predictions_smpl/ --result_savePath demo/results/ --imgFolder demo/images/ --loadPrecomputed demo/gt_dataframe_smpl/  --baseline demo_model --modeltype SMPL --indices_path utils --kid_template_path utils/smpl_kid_template.npy  --modelFolder demo/model/

To run the evaluation on the 1280x720 version, provide --imgWidth 1280 and --imgHeight 720 as additional parameters to the evaluate_agora executable, as in the example below. Note that --imgFolder and --loadPrecomputed should also point to the 1280x720 versions.
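
For example (the 1280x720 folder names below are placeholders for wherever you extracted the 1280x720 images and dataframes):

$ evaluate_agora --pred_path demo/predictions/ --result_savePath demo/results/ --imgFolder demo/images_1280x720/ --loadPrecomputed demo/gt_dataframe_1280x720/ --baseline demo_model --modeltype SMPLX --indices_path utils --kid_template_path utils/smplx_kid_template.npy --modelFolder demo/model/ --onlybfh --imgWidth 1280 --imgHeight 720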

If you need to debug the projection of the ground truth and prediction keypoints in the image, provide the '--debug' boolean flag and '--debug_path' as the path where the output images should be written. This generates images in debug_path whose left half shows the overlaid prediction keypoints and whose right half shows the overlaid ground truth keypoints.

$ evaluate_agora --pred_path demo/predictions/ --result_savePath demo/results/ --imgFolder demo/images/ --loadPrecomputed demo/gt_dataframe/  --baseline demo_model --modeltype SMPLX --indices_path utils --kid_template_path utils/smplx_kid_template.npy --modelFolder demo/model/ --onlybfh --debug --debug_path demo/debug

Citation

If you use this code, please cite:

@inproceedings{Patel:CVPR:2021,
  title = {{AGORA}: Avatars in Geography Optimized for Regression Analysis}, 
  author = {Patel, Priyanka and Huang, Chun-Hao P. and Tesch, Joachim and Hoffmann, David T. and Tripathi, Shashank and Black, Michael J.}, 
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition ({CVPR})}, 
  month = jun,
  year = {2021},
  month_numeric = {6}
}

References

Here are some great resources that we used:

SMPL-X

SMPL

SMIL

Acknowledgement

Special thanks to Vassilis Choutas for sharing the PyTorch code used in fitting the SMPL-X model to the scans.

Contact

For questions, please contact [email protected].

For commercial licensing (and all related questions for business applications), please contact [email protected].



agora_evaluation's Issues

Wrong occlusion score

Hello, thanks for the great work!

I noticed inconsistent occlusion scores for many samples. I get an occlusion score of 99.41 for the green character, which is right in front of the camera. I am using the smplx dataframe. Do you have the correct occlusion scores somewhere?

sample_id : ag_trainset_renderpeople_bfh_brushifygrasslands_5_15_00561.png


Wrong occlusion annotation in validation dataset

Hi,
I found that in the validation dataframe file validation_0_withjv.pkl, all of the occlusions are annotated as 100, while the actual occlusions observed in the images are not 100.
For example, you can see the problem by looking at the dataframe entries for validation/ag_validationset_renderpeople_bfh_construction_5_15_00214.png.
Can you check this problem in the validation set? Thanks a lot!
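
A minimal sketch to reproduce this kind of check, assuming the per-person occlusion values are stored in an 'occlusion' column (the column name is an assumption, not verified here):

import pickle

with open('validation_0_withjv.pkl', 'rb') as f:
    df = pickle.load(f)

# Look up the row for the image mentioned above and print its occlusion values.
row = df[df['imgPath'] == 'ag_validationset_renderpeople_bfh_construction_5_15_00214.png']
print(row.iloc[0]['occlusion'])  # expect varied per-person values, not all 100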

Equation 4 in the paper looks wrong?

Eq. 4 in the paper doesn't look intuitive. Why is the probability square-rooted and inside the robustifier?
Is it just a typo, or is there an explanation for this?

OpenPose command

Hi, could you share the OpenPose parameters you used to detect persons, so that the detector is not the reason for low performance?

I am using the following parameters on the validation set. However, they only detect 3505 persons, whereas the number of persons according to the validation ground truth dataframes is 7800+. So if I evaluate on the validation set, the recall is really bad.

--face --hand --net_resolution "272x-1" --scale_number 4 --scale_gap 0.25 --hand_scale_number 6 --hand_scale_range 0.4 --render_pose 0 --number_people_max 15

Error while calculating average error and generating plots

Hello,

I would really appreciate your help with this issue. I ran the command:

evaluate_agora --pred_path extract_zip/predictions/ --result_savePath demo/results/ --imgFolder demo/images/ --loadPrecomputed demo/gt_dataframe_smpl/ --baseline demo_model --modeltype SMPL --indices_path utils --kid_template_path utils/smpl_kid_template.npy --modelFolder demo/model/ --onlybfh --debug --debug_path demo/debug

And got the following error:

INFO:root:Calculating Average Error and Generating plots
WARNING:matplotlib.legend:No handles with labels found to put in legend.
WARNING:matplotlib.legend:No handles with labels found to put in legend.
Traceback (most recent call last):
  File "/home/mkhalid/anaconda3/envs/agora/bin/evaluate_agora", line 8, in <module>
    sys.exit(evaluate_agora())
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/cli.py", line 30, in evaluate_agora
    run_evaluation(sys.argv[1:])
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/evaluate_agora.py", line 296, in run_evaluation
    compute_avg_error(args, error_df)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/compute_average_error.py", line 470, in compute_avg_error
    plot_x_error(
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/agora_evaluation/compute_average_error.py", line 108, in plot_x_error
    ax.set_xticklabels(['0-10',
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/seaborn/axisgrid.py", line 923, in set_xticklabels
    ax.set_xticklabels(labels, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 63, in wrapper
    return get_method(self)(*args, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 451, in wrapper
    return func(*args, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axis.py", line 1796, in _set_ticklabels
    return self.set_ticklabels(labels, minor=minor, **kwargs)
  File "/home/mkhalid/anaconda3/envs/agora/lib/python3.8/site-packages/matplotlib/axis.py", line 1717, in set_ticklabels
    raise ValueError(
ValueError: The number of FixedLocator locations (6), usually from a call to set_ticks, does not match the number of ticklabels (10).

However, when I run python agora_evaluation/check_pred_format.py --predZip pred.zip --extractZipFolder extract_zip --modeltype SMPL, it prints "If you reach here then your zip folder is ready to submit".
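
For reference, this ValueError comes from newer matplotlib versions enforcing that set_xticklabels receives as many labels as there are tick locations. A possible local workaround (not an official fix for this repository) is to set the tick locations before the labels; alternatively, pinning an older matplotlib may avoid the check:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
labels = ['0-10', '10-20', '20-30', '30-40', '40-50',
          '50-60', '60-70', '70-80', '80-90', '90-100']
# Setting the tick locations first keeps the FixedLocator count equal to
# the number of labels, which newer matplotlib enforces.
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels, rotation=45)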

Wrong Adult Betas

Hi, I noticed that in the folder smplx_gt/trainset_movedrenderpeople_adults_bfh, the ground truth SMPL-X fittings all have betas of shape 11 in the pickle files. However, adult SMPL-X betas should have shape 10; shape 11 belongs to the kid model.
Besides, in the corresponding dataframes, the kid labels for those fittings are False. Can you check whether the fittings are kids or adults, and correct the labels or betas accordingly? Thanks a lot!

correspondence between SMPLX and SMPL annotations

Hi,
I'd like to ask about the correspondence between the SMPLX and SMPL annotations.

import pickle

with open('SMPL/train_0_withjv.pkl', 'rb') as f:
    data_smpl = pickle.load(f, encoding='latin1')
    data_smpl = {k: list(v) for k, v in data_smpl.items()}

with open('SMPLX/train_0_withjv.pkl', 'rb') as f:
    data_smplx = pickle.load(f, encoding='latin1')
    data_smplx = {k: list(v) for k, v in data_smplx.items()}

Does data_smpl['gt_joints_2d'][image_id][person_id] always represent the same person as data_smplx['gt_joints_2d'][image_id][person_id]?
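
A quick spot check one could run, under the assumption that corresponding persons project to nearly identical 2D root joints (so a small distance suggests the indices match):

import numpy as np

# Reuses data_smpl and data_smplx loaded in the snippet above.
image_id, person_id = 0, 0
j_smpl = np.asarray(data_smpl['gt_joints_2d'][image_id][person_id])
j_smplx = np.asarray(data_smplx['gt_joints_2d'][image_id][person_id])
# Distance between the first (root) joints; a small value suggests the
# same person sits at the same index in both dataframes.
print(np.linalg.norm(j_smpl[0] - j_smplx[0]))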

Any instructions on how to render SMPL bodies in Unreal?

Hi, thank you very much for providing the dataset, parameters, and these evaluation instructions. I would like to use your parameters to generate the bodies myself using Blender and Unreal Engine.

Do you have any additional instructions / scripts on how to do it more automatically, especially in Unreal where Python scripting is not that straightforward?

So far, I've figured out that I could use the SMPL-X plugin and Python scripting in Blender to generate the SMPL-X bodies following your parameters; however, the Unreal part is more complicated, especially as I have zero experience using it. I guess that using C++ is more convenient than Python in Unreal, so I also have to figure out how to make it all work together.

I would greatly appreciate any help or brief advice! Thank you!

incompatible gender annotations

The gender information provided in the .pkl files in the 'Cam' folder is incompatible with the information provided in the SMPLX 'GT_fits' folder.

For example, in demo/Cam/train_0.pkl, in the image named ag_trainset_renderpeople_body_hdri_50mm_5_10_00013_1280x720.png, the 8th person is annotated as "female" in Cam but as "male" in the SMPLX GT_fits folder.

You can check it with the following code:

import os
import pickle as pk

with open('data/AGORA/annotations/Cam/train_0.pkl', 'rb') as f:
    df = pk.load(f)

for i in range(df.shape[0]):
    img_path = df.iloc[i]['imgPath']
    if img_path == 'ag_trainset_renderpeople_body_hdri_50mm_5_10_00013.png':
        model_data_path_raw = df.iloc[i]['gt_path_smplx'][8]
        print(model_data_path_raw)
        model_data_path = model_data_path_raw.split('.')[0] + '.pkl'  # switch to the pkl file
        model_data_path = os.path.join('data/AGORA/annotations', model_data_path)

        print(df.iloc[i]['gender'][8])
        with open(model_data_path, 'rb') as f:
            model_data = pk.load(f)

        print(model_data['gender'])

How to get 2D projections from SPIN-ft output?

Hi, thank you for your work on AGORA.
I am trying to evaluate the provided SPIN-ft checkpoint on the AGORA validation set, but the results look like the following. Note that I am using the ground truth bounding boxes from the validation annotations as input to SPIN. Does this look reasonable to you?

{'precision': 0.78, 'recall': 0.78, 'f1': 0.78, 
'body-MPJPE': 156.7, 'kid-body-MPJPE': nan, 
'body-NMJE': 200.9, 'kid-body-NMJE': 'nan', 
'body-MVE': 154.4, 'kid-body-MVE': nan, 
'body-NMVE': 197.9, 'kid-body-NMVE': 'nan'}

It looks like the precision of SPIN-ft using the OpenPose-detected bounding boxes is 0.91.
If we use the ground truth bounding boxes, we shouldn't get worse precision than that, right?

Calibration parameters

I wonder how to get the calibration parameters (rotation, translation, focal length, principal point) for each train/validation image.
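
Not an authoritative answer, but for a standard pinhole setup the intrinsics follow from the lens focal length (in mm) and the sensor width; a sketch with illustrative values (the actual per-scene values come with the Cam dataframes and the "Project joints and vertices" page):

import numpy as np

# Pinhole intrinsics from focal length (mm) and sensor width (mm).
# All numeric values below are illustrative, not AGORA's actual parameters.
img_w, img_h = 3840, 2160
focal_mm, sensor_w_mm = 50.0, 36.0
fx = fy = img_w * focal_mm / sensor_w_mm   # focal length in pixels
cx, cy = img_w / 2.0, img_h / 2.0          # principal point at the image center
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
print(K)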

No objects to concatenate

Hi! I attempted to run the following:

evaluate_agora --pred_path ~/data/EHF/meshes_aligned/ --result_savePath results/ --imgFolder ~/data/EHF/images/ --modelFolder ./data/models/smplx/

However, this runs into the following error:

Traceback (most recent call last):
  File "/home/joon_local/miniconda3/envs/pytorch/bin/evaluate_agora", line 33, in <module>
    sys.exit(load_entry_point('agora-evaluation', 'console_scripts', 'evaluate_agora')())
  File "/home/joon_local/agora_evaluation/agora_evaluation/cli.py", line 30, in evaluate_agora
    run_evaluation(sys.argv[1:])
  File "/home/joon_local/agora_evaluation/agora_evaluation/evaluate_agora.py", line 287, in run_evaluation
    error_df = pandas.concat(error_df_list)
  File "/home/joon_local/miniconda3/envs/pytorch/lib/python3.9/site-packages/pandas/core/reshape/concat.py", line 372, in concat
    op = _Concatenator(
  File "/home/joon_local/miniconda3/envs/pytorch/lib/python3.9/site-packages/pandas/core/reshape/concat.py", line 429, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

Any ideas on why this is happening?

Thanks in advance.

Questions about evaluation on 3DPW

Thank you for your great work! I have a few questions about evaluation on 3DPW.

  1. In table 4 of your paper, why does SPIN-ft-EFT perform worse (PA-MPJPE: 59.3 -> 59.7) after being fine-tuned on the EFT dataset? In the EFT paper, they get PA-MPJPE 58.1 by training SPIN on COCO-All-EFT.

  2. In table 2, on the AGORA test set, fine-tuning on AGORA beats EFT, but in table 5 of your supplementary material, on the 3DPW test set, EFT is better. It seems methods that perform better on AGORA may not perform better on real scenes. Am I right?

Thank you!

Evaluation on test set

Hello,

Thanks a lot for your great work. I would like to test my framework on the test set. However, I could not find the camera information for the test set to project the estimated 3D joints to 2D when preparing the submission files.

Do we actually need to project the predictions, or can we just use the estimated 2D keypoints from OpenPose? (In the submission file the shape of the 2D keypoints is (24, 2), which probably means we must project the predictions to get the 2D joints?)

Using clothed SMPL-X model in other methods

I am using PIXIE to infer pose information from images. However, PIXIE transfers the pose information onto an undressed SMPL-X model, and I want the PIXIE output on a clothed SMPL-X model. Hence, I was searching for work that uses clothed SMPL-X models and came across this project.
What would be the easiest way for me to get the PIXIE output on a clothed SMPL-X model? It suffices if the clothing, skin color, etc. are fixed/constant, i.e. they don't have to match the clothing in the source image.

Missing Mask

Hi,
I noticed that a mask is missing from the dataset: ag_trainset_3dpeople_body_archviz/ag_trainset_3dpeople_body_archviz_5_10_mask_cam00_000045_00004.png in the 3840x2160 training masks.
Can you check whether there are other missing masks in the dataset? Thanks a lot!

Leaderboard

Hi,

I uploaded my results to the evaluation server, but I can't make them public after receiving this mail:

This submission has an unexpected score and it is automatically forwarded for an additional check by a website administrator. 
The submission process at the moment is suspended until one of the administrators approves this submission for publishing. 
You will be additionally notified when this happens through a separate email. Thank you for your understanding.

My submission email address is [email protected]
Could you check?
Thanks

Many images are missing in the 3840x2160 training split

Hi, I found that most of the images are missing from the 3840x2160 training split.
On the other hand, all images can be found in the 1280x720 training split.
Here is a code snippet to check for the missing images.
You can run it in the following directory layout:
$(PWD)
|-- Cam
|-- |-- train_0.pkl
|-- |-- train_1.pkl
|-- |-- train_2.pkl
|-- |-- train_3.pkl
|-- |-- train_4.pkl
|-- |-- train_5.pkl
|-- |-- train_6.pkl
|-- |-- train_7.pkl
|-- |-- train_8.pkl
|-- |-- train_9.pkl
|-- train_0 (a folder that contains 3840x2160 images)
|-- train_1 (a folder that contains 3840x2160 images)
|-- train_2 (a folder that contains 3840x2160 images)
|-- train_3 (a folder that contains 3840x2160 images)
|-- train_4 (a folder that contains 3840x2160 images)
|-- train_5 (a folder that contains 3840x2160 images)
|-- train_6 (a folder that contains 3840x2160 images)
|-- train_7 (a folder that contains 3840x2160 images)
|-- train_8 (a folder that contains 3840x2160 images)
|-- train_9 (a folder that contains 3840x2160 images)

import os.path as osp
from glob import glob
import pickle

data_path_list = glob(osp.join('Cam', 'train*'))
for data_path in data_path_list:
    with open(data_path, 'rb') as f:
        data = pickle.load(f, encoding='latin1')
        data = {k: list(v) for k,v in data.items()}
    folder_name = data_path.split('/')[-1][:-4] # train_0

    for i in range(len(data['imgPath'])):
        img_path_gt = data['imgPath'][i]
        img_path = osp.join(folder_name, img_path_gt)
        if not osp.isfile(img_path):
            print(img_path)

Problems using smplx_gt

Hi, thanks for the nice dataset!
I wonder why there are only around 3k samples of SMPL-X annotations in the smplx_gt folder.

How to use kid model?

Hi, I want to work with the kid model provided in AGORA, but I don't know how to use the smplx_kid_template.npy file.
As described in the docs, I tried this:

import smplx

def draw_vertex(vertex_tensor, name):
    vertex_array = vertex_tensor.cpu().squeeze(0).numpy()
    obj_file = f'./output/{name}.obj'
    with open(obj_file, 'w') as f:
        for vertex in vertex_array:
            f.write(f"v {vertex[0]} {vertex[1]} {vertex[2]}\n")

modelFolder = './models/smplx/SMPLX_NEUTRAL.pkl'
smplx_kid_template_path = './assets/smplx_kid_template.npy'

template = smplx.create(modelFolder, model_type='smplx', age='kid', kid_template_path=smplx_kid_template_path, ext='pkl')

draw_vertex(template.v_template, 'smplx_kids')

But this code outputs a .obj file that is identical to the usual SMPL-X template. Could you please help me? Thanks a lot!
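
For what it's worth: in the smplx package, age='kid' appends the kid template as an extra shape blend direction rather than replacing v_template, so v_template alone still looks adult. The kid shape should only appear once the extra (11th) beta is nonzero. A sketch reusing template and draw_vertex from the snippet above:

import torch

# The kid template acts as an additional shape direction, so betas has one
# extra entry; setting it to 1.0 blends fully toward the kid template.
betas = torch.zeros(1, 11)
betas[:, 10] = 1.0
output = template(betas=betas)
draw_vertex(output.vertices.detach(), 'smplx_kid')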

The scale with the RenderPeople dataset does not match.


I opened the RenderPeople scan data and the AGORA SMPL mesh file (.obj) and confirmed that the scale difference between these two meshes is very large.

How can I match the scale and centroid between the two?

keypoints_3d meaning

Hi,
I noticed a field named keypoints_3d in smplx_gt/datasetxxxx/modelxxxx.pkl with shape [25, 3], for which I can't find a counterpart in the model pkl files released by smplx (such as SMPLX_NEUTRAL.pkl). May I ask the meaning of this field? Thanks a lot!

mask path corresponding to the specific smplx label

Thank you for offering the ground-truth labels in camera coordinates.
https://github.com/pixelite1201/agora_evaluation/tree/master/agora_processing_code

I'd like to get the mask path corresponding to a specific smplx label. How can I get this?

How can I get the 'pnum' information from agora-body.npz?
https://github.com/pixelite1201/agora_evaluation/blob/8c9dece5fc89c8b2acc24203bf9e8f8d5cd0b8b1/agora_evaluation/find_corresponding_masks.py#L66C1-L67C1

In a single image, the number of mask image files and the number of SMPL-X labels are not the same.

get occluded person mask

Hi,
I was trying to get the single-person occluded mask in an image. I hoped to do this by computing the intersection of the full mask and the individual mask. However, I noticed that although the colors of the same person in the full mask and the individual mask look the same to the eye, the RGB values of the two masks are slightly different when read with OpenCV. Because of this discrepancy, I cannot get the single-person occluded mask conveniently and instead have to do something like finding the closest color (see the sketch below).
May I ask why you don't use the same color to mark a person in both the full mask and the individual mask? Thanks a lot!
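
A sketch of the closest-color matching described above, assuming flat per-person colors in both masks; the file paths and the tolerance are hypothetical:

import cv2
import numpy as np

full = cv2.imread('full_mask.png')          # hypothetical paths
indiv = cv2.imread('individual_mask.png')

# Estimate the person's color from the individual mask, ignoring the black
# background; the median is robust to antialiased edge pixels.
pixels = indiv.reshape(-1, 3)
pixels = pixels[np.any(pixels != 0, axis=1)]
person_color = np.median(pixels, axis=0)

# Match full-mask pixels to that color within a tolerance instead of
# requiring exact equality; the threshold is a tunable assumption.
dist = np.linalg.norm(full.astype(np.float32) - person_color, axis=2)
visible = dist < 10.0
print(visible.sum(), 'pixels of this person are visible in the full mask')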
