
3ddfa's Introduction

3ddfa's People

Contributors

ammarali32, beiller, cleardusk, iamlychee, kant, liguohao96, shirakia, timgates42


3ddfa's Issues

generate 3D mesh pics

Hi jianzhu,
I am a student from Zhejiang University. I have successfully run your code, but I can't generate images with the 3D mesh like imgs/vertex_3d.jpg by running visualize.py.
It would be greatly appreciated if you could tell me where the code for offscreen 3D rendering that generates imgs/vertex_3d.jpg is!
Thank you very much!

Best wishes,
Minda.

Problem about rendering

Thank you very much for your work!
I ran your code successfully; the structure of the code is extremely clear.
However, when I use MeshLab to visualize the .obj and .ply files, I find the rendering result confusing:
[screenshots of the MeshLab renderings]
It seems that the 3D model (.ply file) is only partially normal.
Did I do something wrong, or should I try another visualization tool?
By the way, don't you think the 3D models (.obj files) look similar?
[screenshot of the .obj models]

Thank you very much!

The rendering result from render_demo.m and MeshLab

First, thanks for your great work on 3D face alignment. I have a problem with the rendered 3D face results. For example, I ran render_demo.m in the 'visualize' folder, with 'test1_0.mat' and 'test1_1.mat' created by running the command 'python3 main.py -f samples/test1.jpg'. The rendering result is mirrored about the x-axis compared with the provided 'test1_0.jpg' and 'test1_1.jpg', i.e. the face is upside down. I get the correct result if I replace the line vertex = load('test1_0'); with vertex = load('image00427');, which uses the 'image00427.mat' you provided. Why do I get this problem? I look forward to your reply.

Implementation is different from the paper?

Hi, thank you for sharing your work.
I read the TPAMI paper, but I didn't find the two-stream architecture in this repository. The paper proposes Pose Adaptive Convolution and the Projected Normalized Coordinate Code, which are not in this repository either.
From main.py, it looks like the code simply crops the face region in the image and feeds it to MobileNet. I wonder why such a simple regression strategy can produce results as promising as the ones you provide. Does data augmentation matter?

about roi_img

Hi, jianzhu,

I found that if I do not use the dlib landmarks but another rectangle, the 3DDFA result is not as good.
Is it that an roi_img merely containing the face is not sufficient?
What additional requirements should roi_img meet?

Thank you for your reply!

pose of the face (frontalization)

Thank you for sharing the work. Is there a way to get the pose of the face from the vertices or something similar? I want to frontalize the face. Thank you.
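A minimal sketch of one way to do this, assuming the 62-d parameter layout (12 camera-matrix entries followed by shape and expression coefficients) used by this repo; the function name is hypothetical and the parameters are assumed to be de-normalized first:

import numpy as np

def pose_from_param(param):
    """Rough sketch: recover scale, Euler angles and translation from the first
    12 entries of a (denormalized) 62-d parameter vector, assuming the 3x4
    layout P = [s*R | t3d]. The yaw/pitch/roll naming is only indicative."""
    P = param[:12].reshape(3, 4)
    R_scaled, t3d = P[:, :3], P[:, 3]
    s = np.linalg.norm(R_scaled, axis=1).mean()   # rows of s*R carry the scale
    R = R_scaled / s
    yaw = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return s, R, (yaw, pitch, roll), t3d

Once R is known, applying its inverse (approximately R.T, since the regressed matrix is only roughly orthogonal) to the reconstructed vertices gives a roughly frontalized shape.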

error happens when I run benchmark.py

Thank you for sharing your code, but I got an error like this:

map_location = {f'cuda:{i}': 'cuda:0' for i in range(8)}
^
SyntaxError: invalid syntax

Would you please tell me what I should do? Thank you.
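That line uses an f-string (f'cuda:{i}'), which is only valid syntax on Python 3.6 and later, so older interpreters report exactly this SyntaxError. If upgrading Python is not an option, an equivalent without f-strings might look like this (a sketch, not a patch from the authors):

# f-strings need Python >= 3.6; this .format() equivalent also works on older versions
map_location = {'cuda:{}'.format(i): 'cuda:0' for i in range(8)}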

the training input data, and way to generate FACE PROFILING

Dear Sir,
Thank you for sharing your code!

1. As I read your paper, the input data include not only the raw image but also the PNCC and PAC data. My question is: will you update the scripts to generate the PNCC and PAC data? If not, can you give me some tips on generating them (see also the sketch below)?
2. You also did some face profiling. Can you give me some tips on how to do that?
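Not an answer from the authors, but per the TPAMI paper the PNCC is built from the Normalized Coordinate Code: the mean 3DMM shape is rescaled to [0, 1] per axis and those values are rendered as RGB vertex colors of the fitted mesh with a z-buffer. A minimal sketch of the NCC part (variable names are illustrative):

import numpy as np

def normalized_coordinate_code(mean_shape):
    """NCC as described in the 3DDFA paper: scale the 3DMM mean shape to [0, 1]
    per axis. mean_shape is (3, N). Rendering these values as vertex colors of
    the fitted mesh (with z-buffering) yields the PNCC image."""
    mn = mean_shape.min(axis=1, keepdims=True)
    mx = mean_shape.max(axis=1, keepdims=True)
    return (mean_shape - mn) / (mx - mn)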

can't import mesh_core_cython

After building the Cython module, I get the error message "ImportError: cannot import name 'mesh_core_cython'".
What can I do?

Training params

Can you describe the parameter format? I noticed that the param_all_norm.pkl file contains 62 extracted parameters for each image in your dataset. With the Face Profiling code (http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/HPEN/main.htm) I generated a dataset containing my own profiles, but I don't understand how to create the pkl file from the Face Profiling output variables.
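For reference, the 62 parameters appear to be 12 camera-matrix entries followed by 40 shape and 10 expression coefficients, whitened with a stored mean/std. A hedged sketch of packing your own parameters into a comparable pkl; param_mean and param_std are assumed to be the whitening statistics shipped with the repo's training configs (the exact file name may differ):

import pickle
import numpy as np

def build_param_pkl(pose_12, shape_40, exp_10, param_mean, param_std, out_path):
    """Sketch: pack (N, 12) pose, (N, 40) shape and (N, 10) expression arrays
    into whitened 62-d vectors and dump them like param_all_norm.pkl."""
    params = np.concatenate([pose_12, shape_40, exp_10], axis=1).astype(np.float32)
    params_norm = (params - param_mean) / param_std
    with open(out_path, 'wb') as f:
        pickle.dump(params_norm, f)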

eyes landmark do not blink

Hello author,
I tried your code for 3D face landmarks on a live camera, using MTCNN for face detection and your face crop method, and I find that the eye landmarks do not blink (even though I am blinking).
Is this correct?

Possible error in "matrix2angle" function?

Your code:

def matrix2angle(R):
    ''' compute three Euler angles from a Rotation Matrix. Ref: http://www.gregslabaugh.net/publications/euler.pdf
    Args:
        R: (3,3). rotation matrix
    Returns:
        x: yaw
        y: pitch
        z: roll
    '''
    # assert(isRotationMatrix(R))

    if R[2, 0] != 1 or R[2, 0] != -1:
        x = asin(R[2, 0])
        y = atan2(R[2, 1] / cos(x), R[2, 2] / cos(x))
        z = atan2(R[1, 0] / cos(x), R[0, 0] / cos(x))

    else:  # Gimbal lock
        z = 0  # can be anything
        if R[2, 0] == -1:
            x = np.pi / 2
            y = z + atan2(R[0, 1], R[0, 2])
        else:
            x = -np.pi / 2
            y = -z + atan2(-R[0, 1], -R[0, 2])

    return x, y, z

Reference:

[screenshot of the Euler-angle equations from the reference linked below]

Hi, I checked your estimate_pose.py code against the reference http://www.gregslabaugh.net/publications/euler.pdf, and it seems that x is supposed to be x = -asin(R[2, 0]). Is this a bug, or is there a special reason to remove the negative sign? If so, can you explain why? Thanks!
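For comparison, a minimal version that follows the Slabaugh reference literally (note the -asin and the gimbal-lock test using ==); whether the sign flip in the repo is intentional is exactly the question above:

from math import asin, atan2, cos, pi

def matrix2angle_ref(R):
    """Euler angles per Slabaugh's note: theta = -asin(R[2, 0])."""
    if R[2, 0] not in (1, -1):
        theta = -asin(R[2, 0])
        psi = atan2(R[2, 1] / cos(theta), R[2, 2] / cos(theta))
        phi = atan2(R[1, 0] / cos(theta), R[0, 0] / cos(theta))
    else:  # gimbal lock
        phi = 0.0  # can be anything
        if R[2, 0] == -1:
            theta = pi / 2
            psi = phi + atan2(R[0, 1], R[0, 2])
        else:
            theta = -pi / 2
            psi = -phi + atan2(-R[0, 1], -R[0, 2])
    return theta, psi, phi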

question about reproduce

Thanks for your wonderful work! Would it work for animal (e.g. dog or cat) face alignment? I mean, changing the training data and training with the same method. What's your advice?

Pose Information

How can I obtain the estimated pose information from the Python code? I noticed that some of the .mat files inside the 300W-3D folder have a Pose_Param variable, which contains the pose information, but the .mat files generated by visualize.py do not provide this information.
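Not an official answer, but one workaround is to decompose the first 12 (denormalized) parameters into the 3x4 camera matrix and store it next to the vertices when dumping the .mat file; the Euler angles can then be recovered from it as in the pose sketch earlier. A hedged sketch with illustrative names:

import scipy.io as sio

def save_mesh_with_pose(vertex, param, out_path):
    """Sketch: store the dense mesh together with its 3x4 camera matrix.
    vertex is (3, N); param is the denormalized 62-d vector."""
    P = param[:12].reshape(3, 4)              # [s*R | t3d]
    sio.savemat(out_path, {'vertex': vertex, 'camera_matrix': P})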

adding texture to 3d model

Hi,

In your documentation, under Applications -> 2. Face Reconstruction, there is a 3D model and then texture is added to it. Is this also covered by your code?
What I understood from the .ply file is that it only contains the shape, not texture and lighting. Is that right?

Thanks again for your wonderful work!
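On the texture question: as far as I can tell the exported files carry geometry only. A rough, unofficial way to get per-vertex color is to sample the input image at the projected vertex positions (nearest-neighbour, with no occlusion or lighting handling); a sketch:

import numpy as np

def sample_vertex_colors(img, vertex):
    """Sketch: per-vertex RGB by nearest-neighbour lookup in the input image.
    img is (H, W, 3); vertex is (3, N) already in image coordinates (a vertical
    flip may be needed depending on how the vertices were exported)."""
    h, w = img.shape[:2]
    x = np.clip(np.round(vertex[0]).astype(int), 0, w - 1)
    y = np.clip(np.round(vertex[1]).astype(int), 0, h - 1)
    return img[y, x]   # (N, 3)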

Rendered mesh

Can you explain how the .mat files generated by the visualize.py code are laid out? I noticed that the meshes are 3x53215. I tried to plot those meshes in MATLAB, but I didn't see face models.
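The 3x53215 arrays are presumably just the x, y, z rows of the dense vertices; the .mat files store no triangle indices, which is why they do not show up as a surface out of the box. A quick point-cloud sanity check in Python (file name and variable key assumed):

import scipy.io as sio
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib)

mat = sio.loadmat('test1_0.mat')       # file and key name assumed
vertex = mat['vertex']                 # (3, 53215): rows are x, y, z
ax = plt.figure().add_subplot(111, projection='3d')
# subsample for speed; negate y because image coordinates grow downwards
ax.scatter(vertex[0, ::10], -vertex[1, ::10], vertex[2, ::10], s=0.5)
plt.show()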

download issue

Hello, thank you for this work. Is there a way to also publish the neck-removed version of the BFM data on Google Drive? It is quite hard for me to download it from pan.baidu.

Thank you

Some detail question

Dear author, regarding 3D face reconstruction: I see that your main.py only generates the vertex list. Could it also generate a texture list?

How to generate cropped image?

Hi cleardusk:

I found that the cropped AFLW images are not resized to 120x120 directly from the original images, so how did you generate them?

Thank you very much!

Forest

how to generate two styles in obama_three_styles.gif

Hey @cleardusk
I have a question regarding the GIF image obama_three_styles.gif.
Can you please explain how you generated the two style overlays in this GIF? I understand the dlib facial points, but I am rather new to this topic and couldn't grasp how you generate the first two styles. Are they the depth map and PNCC rendered in different colors?

Thanks in advance

how to prepare augmented training dataset

Dear Sir:

Currently, the training data and benchmark data sets are cropped based on labeled facial landmarks.

During inference, if we don't have landmarks, we need a face bounding box to crop the face. So, to apply the same preprocessing at both training and inference time, the training data need to be regenerated according to the face detection results. Is it possible to share the current training data generation scripts?

Thank you very much.

comparison with FAN method

Hi,
thank you for your contribution on this topic.
I am curious to understand how this improved implementation performs w.r.t. the FAN approach from the work "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)".

thank you.

Have you tested the trained model on real-world images?

Hi cleardusk, I wrote a script that uses dlib to detect the face and then feeds the cropped face into the trained network. The results are quite poor and I cannot find the reason; maybe I missed some key steps. Could you kindly write a demo that can be tested on real-world videos? Thanks.
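A hedged end-to-end sketch of such a pipeline (dlib detect, square crop, resize to 120x120, normalize, forward pass). The crop heuristics only approximate the training-time crops, which is likely why raw detector boxes give worse results; the normalization constants and names are assumptions, and model loading is elided:

import cv2
import dlib
import numpy as np
import torch

# Hypothetical minimal pipeline; `model` is assumed to be the repo's MobileNet
# with weights already loaded.
detector = dlib.get_frontal_face_detector()

def infer_params(model, img, scale=1.58, shift=0.14):
    rect = detector(img, 1)[0]                     # first detected face
    cx = (rect.left() + rect.right()) / 2.0
    cy = (rect.top() + rect.bottom()) / 2.0 + rect.height() * shift  # shift towards the chin
    size = max(rect.width(), rect.height()) * scale
    x0, y0 = int(cx - size / 2), int(cy - size / 2)
    x1, y1 = int(cx + size / 2), int(cy + size / 2)
    roi = img[max(y0, 0):y1, max(x0, 0):x1]
    roi = cv2.resize(roi, (120, 120))
    inp = (roi.astype(np.float32) - 127.5) / 128.0  # normalization assumed (mean 127.5, std 128)
    inp = torch.from_numpy(inp.transpose(2, 0, 1))[None]
    with torch.no_grad():
        return model(inp)[0].numpy()                # normalized 62-d parameters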

Have you released the data pre-processing code?

Hello @cleardusk, thanks for your fantastic reproduction.
I am going to compare my own method with 3DDFA on another dataset. However, I'm not sure about the data pre-processing, especially the pose and shape parameters. For example, the official AFLW data are unnormalized, and the parameter vector is longer than yours. Could you briefly introduce the normalization method and how you chose the parameter dimensionality?
Thanks!

run main.py error

python main.py -f samples/test1.jpg

Traceback (most recent call last):
File "/home/ww/3DDFA/main.py", line 205, in
main(args)
File "/home/ww/3DDFA/main.py", line 45, in main
model.load_state_dict(model_dict)
File "/home/ww/anaconda3/envs/Stacked_Hourglass_Network_Keras/lib/python3.6/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MobileNet:
Unexpected key(s) in state_dict: "bn1.num_batches_tracked", "dw2_1.bn_dw.num_batches_tracked", "dw2_1.bn_sep.num_batches_tracked", "dw2_2.bn_dw.num_batches_tracked", "dw2_2.bn_sep.num_batches_tracked", "dw3_1.bn_dw.num_batches_tracked", "dw3_1.bn_sep.num_batches_tracked", "dw3_2.bn_dw.num_batches_tracked", "dw3_2.bn_sep.num_batches_tracked", "dw4_1.bn_dw.num_batches_tracked", "dw4_1.bn_sep.num_batches_tracked", "dw4_2.bn_dw.num_batches_tracked", "dw4_2.bn_sep.num_batches_tracked", "dw5_1.bn_dw.num_batches_tracked", "dw5_1.bn_sep.num_batches_tracked", "dw5_2.bn_dw.num_batches_tracked", "dw5_2.bn_sep.num_batches_tracked", "dw5_3.bn_dw.num_batches_tracked", "dw5_3.bn_sep.num_batches_tracked", "dw5_4.bn_dw.num_batches_tracked", "dw5_4.bn_sep.num_batches_tracked", "dw5_5.bn_dw.num_batches_tracked", "dw5_5.bn_sep.num_batches_tracked", "dw5_6.bn_dw.num_batches_tracked", "dw5_6.bn_sep.num_batches_tracked", "dw6.bn_dw.num_batches_tracked", "dw6.bn_sep.num_batches_tracked".

Can you give me some advice? Maybe 'phase1_wpdc_vdc_v2.pth.tar' is missing parameters?
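This looks like a PyTorch version mismatch rather than a broken checkpoint: BatchNorm layers gained num_batches_tracked buffers in PyTorch 0.4.1, so a checkpoint saved with a newer version carries keys that an older model definition does not expect. A commonly used workaround (not an official fix) is to drop those keys or load non-strictly:

# Drop the extra BatchNorm bookkeeping keys before loading
state_dict = {k: v for k, v in model_dict.items() if 'num_batches_tracked' not in k}
model.load_state_dict(state_dict)

# or, on PyTorch versions whose load_state_dict supports the strict flag:
# model.load_state_dict(model_dict, strict=False)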

Hi JianZhu, why is my AFLW result different from yours?

My command line is: python benchmark.py -c models/phase1_wpdc_vdc.pth.tar (I use Python 2.7 instead).
[screenshot of the benchmark output]

The result on AFLW2000 is similar to yours, but the other one is far from the result you presented.
So, what problem have I encountered?

Thanks JianZhu for your contribution!

isRotationMatrix returns False!

Hi,

I tried the following code to get the rotation matrix, check whether it is a rotation matrix, and if so, get the three rotation angles from it. Surprisingly, the isRotationMatrix function that you introduced in one of the issues returns False. Here's my code:

annotations = np.load('path/to/param_all_norm.pkl')
pose_params = annotations[0][-12:]
pose_params = pose_params.reshape((3, 4))
print(isRotationMatrix(pose_params[:, 0:3]))

Do you have any idea?

Thanks!
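Not the authors' answer, but two things probably explain the False: the values in param_all_norm.pkl are whitened (the stored mean/std must be applied first), and even then the 3x3 block of the camera matrix is a scaled rotation s*R, so its rows are not unit length; also, as far as I can tell, the 12 pose entries are the first 12 of the 62-d vector, not the last 12. A hedged sketch of the check after de-whitening and removing the scale (the mean/std variable names are assumptions):

import numpy as np

def check_rotation(param_norm, param_mean, param_std, atol=1e-2):
    """Sketch: de-whiten a 62-d parameter vector, strip the scale from the 3x3
    block of the camera matrix, then test (approximate) orthonormality."""
    param = param_norm * param_std + param_mean
    P = param[:12].reshape(3, 4)
    R_scaled = P[:, :3]
    s = np.linalg.norm(R_scaled, axis=1).mean()    # rows of s*R carry the scale
    R = R_scaled / s
    return np.allclose(R @ R.T, np.eye(3), atol=atol)

Even after removing the scale, the network's output is not constrained to be exactly orthogonal, so a loose tolerance is needed.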

About output

Hello
I am a newcomer to face alignment. I see that the output of the MobileNet-1 model in the code is 62-dimensional, and I notice this is the first-stage pre-trained model. I guess I am missing some crucial points, but what I want to know is: if I want to estimate the angles of a face image using this model, what should I focus on?
Thank you!

One question about parameter

Thanks for your amazing work again! One question: what is the meaning of '0.14' and '1.58' shown below? Are they determined by the dataset or by experience? Thank you~

3DDFA/utils/inference.py

Lines 86 to 87 in 25f7467

center_y = bottom - (bottom - top) / 2.0 + old_size * 0.14
size = int(old_size * 1.58)
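These look like empirical crop-padding constants rather than anything mandated by the dataset; only the authors can confirm. A small worked example of what the two quoted lines do, with made-up numbers:

# Hypothetical numbers: a landmark bounding box spanning y in [100, 220]
top, bottom, old_size = 100.0, 220.0, 120.0
center_y = bottom - (bottom - top) / 2.0 + old_size * 0.14   # 160 + 16.8 = 176.8
size = int(old_size * 1.58)                                   # 189
# i.e. the crop centre is pushed ~14% of the box size below the geometric centre
# (towards the chin), and the crop is enlarged by ~58% so the whole head fits
# the 120x120 network input after resizing.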

Another 3DMM

Dear Sir,

Thank you for this amazing work !

My question is:
If we have another 3DMM, how can we redo the work to create another 300W-style dataset with the new 3DMM parameters? Which steps should we take?

Thank you.

How to remove neck part

Hi cleardusk,
Thank you for sharing your code!
The 3D face shape often contains the neck part. How can I remove it? Thanks in advance.
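The repo apparently ships a neck-removed BFM variant (see the download issue above). If you only have the full mesh, a generic approach is to keep a list of non-neck vertex indices and remap the triangles; a hedged sketch (assumes 0-based triangle indices):

import numpy as np

def remove_vertices(vertex, triangles, keep_idx):
    """Sketch: keep only the vertices in keep_idx (e.g. a no-neck index list)
    and remap the triangle indices accordingly.
    vertex: (3, N); triangles: (M, 3) int, 0-based; keep_idx: (K,) int."""
    remap = -np.ones(vertex.shape[1], dtype=np.int64)
    remap[keep_idx] = np.arange(len(keep_idx))
    tri = remap[triangles]                     # triangles in the new indexing
    tri = tri[(tri >= 0).all(axis=1)]          # drop triangles touching removed vertices
    return vertex[:, keep_idx], tri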
