

2dasl's Issues

Unable to generate mtl file with writeobjs.m

Hi @XgTu, thanks for sharing the code.

I am able to run the project successfully, but the resulting .obj files are just white meshes; the material (.mtl) file for each .obj is not being generated.

  1. In the writeobj.m function, I am guessing the options argument has to be set properly.

  2. Should we write the texture .jpg into the .mtl file (i.e., is there any connection between options.nm_file and the texture .jpg)? This project does not generate any texture file, though.

Can you please tell me if I am wrong, and how to create the .mtl files?
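For reference, the .mtl format itself is simple; below is a minimal sketch of a writer, assuming a texture image already exists (the names face.mtl and texture.jpg are hypothetical, and since this repo produces no texture file, one would have to be generated separately, e.g. by sampling the input image at the projected vertex positions):

# Minimal sketch of writing an .mtl file that references a texture map.
# 'face.mtl' and 'texture.jpg' are hypothetical names, not repo outputs.
def write_mtl(mtl_path, texture_path):
    with open(mtl_path, 'w') as f:
        f.write('newmtl FaceMaterial\n')
        f.write('Ka 1.0 1.0 1.0\n')              # ambient color
        f.write('Kd 1.0 1.0 1.0\n')              # diffuse color
        f.write('map_Kd %s\n' % texture_path)    # diffuse texture image

write_mtl('face.mtl', 'texture.jpg')

# The .obj then needs two extra lines near its top to use the material:
#   mtllib face.mtl
#   usemtl FaceMaterial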

Thank you.

How to plot the 3D face in the picture

Good job!
I have a simple question: how can I plot the 3D face reconstruction onto the picture, just like the examples that have been shown? This has puzzled me for a long time; I'm looking forward to your reply.
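In case it helps, one common way to do this (not necessarily the authors' own visualization code) is to draw the projected vertices directly onto the image; a minimal sketch, assuming the model's dense output is a 3xN array already in image coordinates:

import cv2

# Sketch: scatter every `step`-th projected vertex onto the image.
# `vertices` is assumed to be a 3xN array in image coordinates
# (x right, y down), e.g. the dense output of the model.
def overlay_vertices(img, vertices, step=20):
    out = img.copy()
    for x, y in vertices[:2, ::step].T:
        cv2.circle(out, (int(round(x)), int(round(y))), 1, (0, 255, 0), -1)
    return out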

License

Please add the MIT license, since you use code from 3DDFA, which is itself MIT-licensed.

Question about backward pass.

Hi, I recently read your paper, but I have some confusion about the backward pass. In the paper you show that you backward-pass the predicted 2D landmarks to the output x^2d. Do you mean that you replace x2d with x^2d to generate the 2D FLMs, keep the input image unchanged, and restart the forward training? If I am understanding it the wrong way, could you please describe the backward pass in more detail?

3D NME

Hello, thanks for your open-source code!
I'm getting started following up on the work of PRN, but I cannot really reproduce their results yet.
I've read your paper on arXiv and noticed that you compared 2DASL with PRN in the experiments section.
So I hope you can help me with some questions about the NME metrics.

Did you reimplement PRN to get the results, or did you just evaluate their released model?
How do you calculate 3D NME, and do you have any idea how PRN calculates theirs?

(I evaluated the model provided in their GitHub repo on AFLW2000-3D.
The 2D NME is fine, but I ran into problems when calculating the 3D NME.

I use exactly the same normalization factor (bounding-box size) and compute the mean L2 distance between the ground truth and the predictions.
But the 68-keypoint 3D NME I get is around 6.0%, which is much higher than the 4.4% in the PRN paper and similar to the 6.12% in your paper.

I've tried referring to the evaluation code of 3DDFA, but it seems they didn't evaluate 3D coordinates.)
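For concreteness, here is the computation I'm using, as a sketch; the normalization convention (sqrt(w*h) vs. max(w, h) of the ground-truth box) is exactly the kind of detail that moves the number, so I may well have the wrong one:

import numpy as np

def nme_3d(pred, gt, bbox):
    # pred, gt: (68, 3) arrays of 3D keypoints; bbox: (x1, y1, x2, y2).
    # Normalizing by sqrt(w * h) is one common convention; other papers
    # use max(w, h), which gives a different number.
    w, h = bbox[2] - bbox[0], bbox[3] - bbox[1]
    dists = np.linalg.norm(pred - gt, axis=1)   # per-point L2 distance
    return dists.mean() / np.sqrt(w * h)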

training code

Could you please release the training code? We are not sure about training details such as how to obtain the mean and std of the parameters and how to set the hyperparameters of the network. Thanks a lot!
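On the mean/std question specifically: if the 62-d parameter vectors of the training set were available, the statistics would presumably just be per-dimension moments; a minimal sketch (the file name is hypothetical):

import numpy as np

# Hypothetical file: one 62-d 3DMM parameter vector per training sample.
params = np.load('train_params_62d.npy')   # shape (num_samples, 62)

param_mean = params.mean(axis=0)           # per-dimension mean
param_std = params.std(axis=0)             # per-dimension std

# Parameters would then be normalized as (p - param_mean) / param_std
# for regression, and de-normalized at test time.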

Where did you get the 68-landmark annotations for the AFLW dataset?

As far as I know, AFLW only has 21-landmark annotations. Where did you get the 68-landmark annotations in test_codes/test.configs/AFLW_GT_pts68.npy?

Moreover, how should AFLW_GT_pts68.npy and AFLW_GT_pts21.npy be used, given that they align with neither the original images nor the cropped ones?

By the way, AFLW has 21,123 images. Why does your test_codes/test.data/AFLW_GT_crop contain only 21,080 images? Did you remove some?

bad result!!!!!

I modified the test code following 3DDFA, but the result is very bad.

(Screenshot: visualization of the landmarks in the 4th input channel.)

This is the visualization of the 4th-channel landmark map; it looks correct, and the crop is aligned with 3DDFA's method.

(Screenshot: final vertex result.)

This is the final vertex result, where the white region is the vertices. Where could the problem be? Is the released model not the final version, or is there some other detail that differs from 3DDFA? Looking forward to a reply!

The test main file is as follows:
from utils import *

# Load the stage-2 checkpoint and strip the 'module.' prefix
# left by DataParallel, if present.
ckpt_path = 'models/2DASL_checkpoint_epoch_allParams_stage2.pth.tar'
ckpt = torch.load(ckpt_path, map_location=lambda storage, loc: storage)['res_state_dict']
state_dict = {}
for key, value in ckpt.items():
    if key.startswith('module'):
        state_dict[key[7:]] = value
    else:
        state_dict[key] = value

model = resnet50(pretrained=False, num_classes=62)
model.load_state_dict(state_dict)
model.eval()  # important: keeps BatchNorm in inference mode

transform = transforms.Compose([
    ToTensorGjz(),
    NormalizeGjz(mean=127.5, std=128)
])

# Detect 2D landmarks and crop the face region, following 3DDFA.
img_path = 'test.jpg'
img = cv2.imread(img_path, cv2.IMREAD_COLOR)
lmks = get_face_lmks(img)

roi_box = parse_roi_box_from_landmark(lmks.T.copy())
img_crop = crop_img(img, roi_box)
lmks_crop = crop_lmks(roi_box, lmks)

# Rescale landmarks and image to the 120x120 network input.
lmks_crop = fit_lmks(lmks_crop, img_crop.shape[:2])
lmks_crop[lmks_crop > 119] = 119
img_crop = cv2.resize(img_crop, dsize=(120, 120), interpolation=cv2.INTER_LINEAR)

# Build the 18-landmark map used as the 4th input channel.
lmks_map = get_18lmks_map(lmks_crop)
lmks_map = lmks_map[:, :, np.newaxis]
lmks_map = torch.from_numpy(lmks_map).unsqueeze(0).permute(0, 3, 1, 2)

inp = transform(img_crop).unsqueeze(0)
inp = torch.cat([inp, lmks_map], dim=1)

# Regress the 62 3DMM parameters and recover the dense vertices.
with torch.no_grad():
    param = model(inp)
param = param.squeeze().cpu().numpy().flatten().astype(np.float32)
dense = get_dense_from_param(param, roi_box)
print(dense.T)
show_lmks(img, dense.T)
cv2.imwrite('cache.png', img)

training code

Hi, thanks for your great work! Will you release your training code?

Error in downloading the the "visualize.rar" package

Hi @XgTu, I'm getting the error "Sorry, the file you have requested does not exist.
Make sure that you have the correct URL and the file exists." when I try to download the "visualize.rar" package from Google Drive, and I can't use Baidu either.

How to convert 53K vertices to 43K (for PRNet evaluation)?

Hi @XgTu, thank you for the source code you released. I'm wondering how you compared your method with PRNet, or more precisely, how you evaluated PRNet on the AFLW2000 dataset, since PRNet's outputs have 43K vertices while your model's outputs and the AFLW2000 ground truth have 53K vertices.
How did you reduce the ground-truth vertices to 43K?
I noticed there is an "exp_base_reduce.mat" file which can produce ground truth with 39K vertices. Where did you get this .mat file, and how can I reproduce it with 43K vertices?
Thank you in advance for your clarification.
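One generic way to build such a correspondence (I have no idea whether this is what you actually did) is nearest-neighbor matching between the two template meshes, which yields an index array for subsetting the 53K ground truth; a sketch, with hypothetical template file names:

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical templates: the two models' mean shapes, as (N, 3) arrays
# brought into the same coordinate frame beforehand.
bfm_template = np.load('bfm_template_53k.npy')   # (~53K, 3)
prn_template = np.load('prn_template_43k.npy')   # (~43K, 3)

# For each PRNet vertex, find the index of the closest BFM vertex.
_, idx = cKDTree(bfm_template).query(prn_template)

def reduce_to_43k(vertices_53k):
    # Subset a 53K-vertex ground truth to the 43K correspondence.
    return vertices_53k[idx]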

How can I use this code to generate a depth map?

I am writing a network for face anti-spoofing that uses depth maps for auxiliary supervision. I have seen some works use PRNet to generate the depth map, but I want to generate it in a 3DMM-based way. I found this excellent work and would like to know whether my idea is feasible.
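It seems feasible: the dense vertices the test code recovers are already in image coordinates with a z component, so a crude depth map could be splatted per vertex (a real renderer would rasterize triangles with a z-buffer); a sketch, assuming dense is the 3xN output of get_dense_from_param and that larger z means closer to the camera:

import numpy as np

def vertices_to_depth(dense, h, w):
    # Splat dense 3xN vertices (image-coordinate x, y, z) into an
    # h x w depth map, keeping the largest z per pixel. Per-vertex
    # splatting is a rough approximation that works because the
    # mesh is dense relative to the image resolution.
    depth = np.zeros((h, w), dtype=np.float32)
    x = np.clip(np.round(dense[0]).astype(int), 0, w - 1)
    y = np.clip(np.round(dense[1]).astype(int), 0, h - 1)
    z = dense[2] - dense[2].min()        # shift z to be non-negative
    np.maximum.at(depth, (y, x), z)      # keep the closest vertex per pixel
    if depth.max() > 0:
        depth /= depth.max()             # normalize to [0, 1]
    return depth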

run on live Webcam or single image?

Hi @XgTu, and thank you very much for making this code available!

How can I run it live on a webcam, or on a single image? Is there any similar code I can look at to get started?

Specifically, I need to return the 3D landmarks from an image or a video.

Like this GIF:

(animated landmark-tracking demo)

How can I replicate this result from a webcam?

Thanks!
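In case someone else lands here: I don't think the repo ships a webcam script, but the per-frame pipeline would be the same as single-image testing; a rough loop, where predict_vertices(frame) is a hypothetical helper wrapping the crop/normalize/forward steps from the test snippet in the "bad result" issue above:

import cv2

def run_webcam(predict_vertices):
    # predict_vertices: hypothetical helper returning a 3xN vertex
    # array in image coordinates for one BGR frame.
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        verts = predict_vertices(frame)
        for x, y in verts[:2, ::20].T:   # draw a sparse subset of points
            cv2.circle(frame, (int(x), int(y)), 1, (0, 255, 0), -1)
        cv2.imshow('2DASL', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()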

How to achieve dense face alignment?

Hello, I am very happy to see your work. I want to achieve dense face alignment, like the following:

(example image of dense face alignment)

But I didn't see how to implement this step in your code, and your code for it is based on MATLAB, not Python. Could you provide a Python implementation? It would help in understanding the meaning of each step. Thank you!

No visualize.rar

The visualize.rar file is no longer on Google Drive :(

Could you please upload it once more?

Testing code for predictions.

Hi,

I am kind of new to PyTorch. Does anyone know which script I should run to predict 2D or 3D landmarks for a given facial image? Or should I write new testing code and use a model such as "2DASL_checkpoint_epoch_allParams_stage2.pth.tar" for this task?

Thanks a lot!

How can I reproduce your smooth demo?

Thank you so much for sharing your code!
I'm confused about how to reproduce your demo (it's so smooth!). What makes the 3D face reconstruction so smooth?
Did you smooth the 3D landmarks from 3DDFA?
Thanks!
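I don't know what the authors did, but one standard trick for smooth video demos is temporal smoothing of the per-frame predictions, e.g. an exponential moving average over the landmark (or parameter) vectors:

import numpy as np

class LandmarkSmoother:
    # Exponential moving average over per-frame predictions.
    # alpha near 1.0 trusts the new frame more; smaller alpha smooths
    # harder at the cost of lag. A generic trick, not necessarily
    # what the 2DASL demo actually does.
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = None

    def update(self, lmks):
        lmks = np.asarray(lmks, dtype=np.float64)
        if self.state is None:
            self.state = lmks
        else:
            self.state = self.alpha * lmks + (1 - self.alpha) * self.state
        return self.state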

about training data

How do I get a file like 2DASL/test_codes/test.data/detect_testData_1998.txt during training? In other words, how is this file generated?
