landmark-detection's Introduction

Hi there 👋

landmark-detection's People

Contributors

d-x-y, v-goncharenko

landmark-detection's Issues

my own training dataset

Thanks for your excellent work! I'd like to know how to prepare my own dataset to train a custom SAN network. Should I just follow the 300W or AFLW format?

About the crop_pic.py file

I am puzzled when using the crop_pic.py file to crop images: which function in that file is mainly responsible for cropping the face from an image? Is it the __call__() method of PreCrop?

Error running san_eval.py

I am sorry to bother you again. The following problem occurred when I ran san_eval.py to test a new image:
Traceback (most recent call last):
  File "san_eval.py", line 78, in <module>
    evaluate(args)
  File "san_eval.py", line 55, in evaluate
    batch_heatmaps, batch_locs, batch_scos, _ = net(inputs)
  File "/share/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/share/home/SAN/lib/models/itn_cpm.py", line 136, in forward
    batch_location, batch_score = find_tensor_peak_batch(cpm_stage3[ibatch], self.config.argmax, self.downsample)
  File "/share/home/SAN/lib/models/basic_batch.py", line 43, in find_tensor_peak_batch
    X = MU.np2variable(torch.arange(-radius, radius+1), heatmap.is_cuda, False).view(1, 1, radius*2+1)
  File "/share/home/SAN/lib/models/model_utils.py", line 16, in np2variable
    raise Exception('Do not know this type : {}'.format( type(x) ))
Exception: Do not know this type : <class 'torch.Tensor'>
Does this error come from my PyTorch version?
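
For reference, this error typically shows up when the installed PyTorch is newer than the version the repository targets: on recent versions torch.arange returns a torch.Tensor, which np2variable apparently does not accept. A minimal workaround sketch, assuming np2variable is only meant to wrap its input for autograd and move it to the GPU when requested (a guess at the intended behaviour, not the official fix):

import numpy as np
import torch

def np2variable(x, is_cuda=True, requires_grad=True):
    # Sketch: accept torch.Tensor in addition to numpy.ndarray (assumption about
    # the original function's purpose; adjust to match the real signature).
    if isinstance(x, np.ndarray):
        x = torch.from_numpy(x)
    elif not isinstance(x, torch.Tensor):
        raise TypeError('Do not know this type : {}'.format(type(x)))
    x = x.float()
    if is_cuda:
        x = x.cuda()
    x.requires_grad_(requires_grad)
    return x

Matching the PyTorch version the repository was developed against avoids patching the code at all.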

Combine SAN with SBR

According to the SBR paper, the facial landmark detector can be replaced with other methods.
Have you tried combining SAN with SBR?
Thanks.

SBR Base Detector?

Which project are you using?

SBR

Hello,
Thanks for the great code! Unfortunately, it requires GPU processing, which is not applicable in my case. However, it seems to me that the output of the base detector is sufficient for my goal. Could you please give more details or reference some papers about the base detector? And does the base detector require GPU processing to run efficiently?

Thanks!

demo problem

I want to try SBR to visualize demo-sbr.mp4. I executed the demo_sbr.sh script, but it fails with an error saying it cannot find eval-start-eval-00-01.pth. What is eval-start-eval-00-01.pth and how do I obtain it?

[feature request] SBR on CPU?

Is it possible to run SBR on CPU without CUDA? If yes, could you please give some info on how to achieve that?

I tried to replace the CUDA-specific code, but I am stuck at line 51 in eval.py:
net = net.cuda()
I am not sure how to replace it!
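
For reference, a minimal, self-contained sketch of the device-agnostic pattern that can replace hard-coded .cuda() calls; net below is a stand-in module, not the actual SBR network, and the snapshot path is hypothetical:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = nn.Conv2d(3, 16, 3, padding=1)              # stand-in for the landmark network
net = net.to(device)                              # replaces: net = net.cuda()
inputs = torch.randn(1, 3, 128, 128).to(device)   # replaces: inputs = inputs.cuda()

with torch.no_grad():
    outputs = net(inputs)
print(outputs.shape, device)

# A checkpoint saved on a GPU machine also needs remapping when loaded on CPU:
# snapshot = torch.load('checkpoint.pth.tar', map_location=device)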

Can you also provide a test script?

Hi Xuanyi, thanks for your great work. I'm trying to test your network on our dataset for facial landmark detection. I followed your steps and finished the training and evaluation stages. I want to test on just a few subjects, so it would be great if you could also share a simple test script that, given a new image, detects the 68 landmarks. Thanks.

Cannot perform evaluation.

Hi,

I think your work is impressive, but I met some issues while reproducing it.

Now I just want to use your provided pretrained model to learn the code, so I did not download the two complete datasets and skipped the training part, going directly to evaluation. I also changed some of the code where necessary so that I can run it on my Mac, which does not have a GPU.

Evaluation on a single image: what I get is a series of output numbers (screenshot below). Can you tell me which files I can use for qualitative results, i.e. the 2D image with landmarks (like Figure 8 in the paper)?

Evaluation on 300-W or AFLW: I cannot find the 300W-EVAL.sh and AFLW_CYCLE_128.FULL-EVAL.sh files in the scripts folder. Can you tell me where I may have made a mistake?

(screenshot of the console output, 2019-01-18, omitted)

Best,
XG

[TS3] Execution Time on CPU

Hi,

Can you give details of the execution time per image when running on CPU? Can it be used for real-time landmark detection on videos?

PS method

Which project are you using?

SAN

Issue description

What specific Photoshop (PS) procedure was used to create the 300W-style and AFLW-style datasets proposed in the paper? If we want to apply the style-aggregated face generation module from the paper to our own dataset, how should we create the various styles?
Could you provide the PS procedure adopted in your paper? Thank you very much!

cannot import models

When I ran the command "sh scripts/300W/300W_Cluster.sh 0,1 GTB 3" (all previous steps had run successfully), it gave the following error:

File "cluster.py", line 23, in
import debug, models, options
File "...../SAN-master/lib/debug/init.py", line 1, in
from .debug_main import main_debug_save
File "....../SAN-master/lib/debug/debug_main.py", line 6, in
import models
ModuleNotFoundError: No module named 'models'

Can you please let me know if I am missing any module or how to fix this? Thanks!
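
One common cause, assuming the models package lives under lib/ as the traceback paths suggest, is that lib/ is not on the Python import path for the directory the script is launched from. A hedged diagnostic sketch:

import os
import sys

# Assumption: the script is launched from the repository root (SAN-master/),
# which contains lib/models/. Adjust lib_dir if your layout differs.
lib_dir = os.path.join(os.getcwd(), 'lib')
print('lib/models exists:', os.path.isdir(os.path.join(lib_dir, 'models')))

sys.path.insert(0, lib_dir)   # put lib/ on the import path before importing
import models                 # should now resolve if the layout matches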

Error in generate_300W.py

Hello, when I run generate_300W.py I get a TypeError. Could you please help me solve it?

Traceback (most recent call last):
  File "generate_300W.py", line 168, in <module>
    box_datas = load_all_300w(path_300w, style)
  File "generate_300W.py", line 43, in load_all_300w
    all_datas = load_mats(pairs)
  File "generate_300W.py", line 30, in load_mats
    cobjects = load_box(dataset[0], dataset[1])
  File "generate_300W.py", line 11, in load_box
    mat = loadmat(mat_path)
  File "/home/hy/miniconda3/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 141, in loadmat
    MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
  File "/home/hy/miniconda3/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 64, in mat_reader_factory
    byte_stream, file_opened = _open_file(file_name, appendmat)
TypeError: 'NoneType' object is not iterable
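
The TypeError surfaces inside scipy's loadmat, which usually means the .mat path handed to it could not be opened, i.e. the 300W directory layout or the path argument does not match what generate_300W.py expects. A hedged guard sketch (load_box_checked and its arguments are hypothetical, mirroring the load_box call in the traceback):

import os
from scipy.io import loadmat

def load_box_checked(mat_path, image_dir):
    # Fail with a clear message instead of letting loadmat choke on a bad path.
    if mat_path is None or not os.path.isfile(str(mat_path)):
        raise FileNotFoundError('bounding-box .mat file not found: {}'.format(mat_path))
    if image_dir is not None and not os.path.isdir(str(image_dir)):
        raise NotADirectoryError('image directory not found: {}'.format(image_dir))
    return loadmat(str(mat_path))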

SAN & SBR are slower on GPU Than CPU

Which project are you using?

SAN or SBR

I ran SAN and SBR on Google Colab with a Tesla K80 GPU and CUDA V10.0.130, but the execution time is always longer on GPU than on CPU.

SAN: GPU = 2.49453s, CPU = 1.21520s
SBR: GPU = 6.39389s, CPU = 1.90000s

Any idea what could cause this issue?

Thanks!
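
For reference, one common explanation (an assumption about how the times above were measured) is that the first CUDA call includes one-off initialization and that GPU kernels run asynchronously, so timing a single forward pass without synchronization mostly measures launch overhead. A minimal sketch of a fairer measurement, with a stand-in network:

import time
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).to(device).eval()
inputs = torch.randn(1, 3, 128, 128, device=device)

with torch.no_grad():
    for _ in range(5):                # warm-up: CUDA context + cuDNN autotuning
        net(inputs)
    if device.type == 'cuda':
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(20):
        net(inputs)
    if device.type == 'cuda':
        torch.cuda.synchronize()      # wait for the queued kernels to finish
    print('avg forward time: {:.4f}s'.format((time.time() - start) / 20))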

download the dataset

Hello @D-X-Y:
I am very grateful for your contribution, but when I download the dataset
I always run into problems, e.g. the download speed is low and the download fails.
I think the best solution may be to create the dataset myself with Photoshop (PS).
Can you tell me in detail how to create it with PS, or do you have any other solution?
Thank you very much!

evaluate other picture

When I run evaluation on another picture (one that belongs to 300W), I find the landmarks are not accurate. When I use my own dataset, it has the same problem. Can you help me solve it or give me any suggestions? Thank you.

Hi! I have a problem about the aggregated face.

Thank you for your code. I am confused about how the aggregated face is produced: how does CycleGAN learn unsupervisedly through aggregated features? Is K set arbitrarily, or is it fixed to 3?

Run test script error

Hi Xuanyi,
I downloaded the pretrained model from Google Drive, but when I run the test script 'CUDA_VISIBLE_DEVICES=1 python san_eval.py --image ./cache_data/cache/test_1.jpg --model ./snapshots/SAN_300W_GTB_itn_cpm_3_50_sigma4_128x128x8/checkpoint_49.pth.tar --face 819.27 432.15 971.70 575.87' I get the following error message:

The image is ./cache_data/cache/test_1.jpg
The model is ./snapshots/checkpoint_49.pth.tar
The face bounding box is [819.27, 432.15, 971.7, 575.87]
Traceback (most recent call last):
  File "san_eval.py", line 78, in <module>
    evaluate(args)
  File "san_eval.py", line 32, in evaluate
    snapshot = torch.load(snapshot)
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 304, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 460, in _load
    magic_number = pickle_module.load(f)
TypeError: argument must have 'read' and 'readline' attributes
Do you have any idea?
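
One plausible cause, though this is an assumption about the local setup: older torch.load only accepts a plain string path or an open file object, so anything else (for example a pathlib.Path-like object) falls through to pickle and raises exactly this TypeError. A hedged workaround sketch:

import torch

snapshot_path = './snapshots/SAN_300W_GTB_itn_cpm_3_50_sigma4_128x128x8/checkpoint_49.pth.tar'

# Open the file explicitly so torch.load always receives a file object it understands,
# and keep the tensors on CPU regardless of where they were saved.
with open(str(snapshot_path), 'rb') as f:
    snapshot = torch.load(f, map_location=lambda storage, loc: storage)
print(sorted(snapshot.keys()) if isinstance(snapshot, dict) else type(snapshot))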

train error

Hello @D-X-Y:
I am very sorry to disturb you, but when I run bash scripts/300W/300W_Cluster.sh 0,1 GTB 3, it shows:
RuntimeError: Found 0 files in subfolders of: ./cache_data/cache/AFLW
Is this because of the datasets? I only downloaded 300W and did not download AFLW.

Could you please share PS generation code?

I want to generate datasets from the original dataset. Your paper says that the datasets are PS-generated. Can you share the filters or the code you used in Photoshop? Thank you.

Some questions after running 300W_Cluster.sh

Hi, I have finished running 300W_Cluster.sh on your style datasets and it works well, but the cluster .lst files look a little strange. Here is my log file:
cluster_seed_1986_22-Aug-at-07-51-49.txt
I have three questions:
1. At line 174, cluster-00-03.lst is extremely small, containing only 44 pictures. Is that normal?
2. From your reply in another issue, when using those .lst files to train the network, which two files are needed for training CycleGAN? The paper says to "regard the group with the maximum element and the group with the minimum as two different style sets by default", and that these two groups are then used to train the style-unified face generation module. But in other issues you said the two largest files (not max and min) are needed. Which is correct?
3. Does your network support training with a custom number of landmarks, such as 106, 108, or even 1000 points? Is modifying the num_pts value enough, or does the normalization, which is defined as the distance between the 37th and the 46th points, also need to be changed? (See the sketch after this issue.)
Thanks! @D-X-Y
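
Regarding the normalization mentioned in question 3, here is a small sketch of a normalized mean error (NME) that takes the normalization indices as parameters, so a custom landmark layout can supply its own pair instead of the 68-point outer-eye-corner convention (points 37 and 46, i.e. zero-based indices 36 and 45). This is an illustrative helper, not code from the repository:

import numpy as np

def nme(pred, gt, norm_idx=(36, 45)):
    """pred, gt: arrays of shape (N, num_pts, 2) with landmark coordinates."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Per-image normalization distance (inter-ocular by default).
    norm_dist = np.linalg.norm(gt[:, norm_idx[0]] - gt[:, norm_idx[1]], axis=1)
    # Mean point-to-point error per image, divided by the normalization distance.
    per_image = np.linalg.norm(pred - gt, axis=2).mean(axis=1) / norm_dist
    return per_image.mean()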

datasets module

Hi,
Thank you so much for the contribution on this.
Seems like the "datasets" module is missing in the current repo.
Could you please provide it for easier usage? Thanks!

SAN

SAN
Issue description:
1. Have you measured how long the SAN model (ResNet-152) takes on CPU or on mobile devices?
2. Can the residual network be replaced with a shallower network? Would the performance drop? Have you tried this?
3. Does the augmented 300W data still use the original 68-point annotations? Have AFLW and 300W ever been trained together?

How to get the bounding box?

Hi,
Thanks for your work. Recently I tested your code on the 300W test set, including HELEN, AFLW, and iBUG. When I feed the test images with face bounding boxes produced by MTCNN, I cannot get high accuracy. Could you tell me which detector you adopted in this project? Thank you.
Siyuan

something wrong when using crop_pic.py

Hello! I generated the lists successfully, but when I crop faces using crop_pic.py I run into a problem.
For 300W, it cropped 3636 faces in total, not 3837.
For AFLW, it cropped 21123 faces in total, not 24386.
Could you please tell me why the numbers do not match?

[feature request] ICCV2019 paper

Which project are you using?

TS3

Issue description

Where can I find your ICCV 2019 paper "Teacher Supervises Students How to Learn from Partially Labeled Images for Facial Landmark Detection"?
Can you provide a PDF copy? Thanks.

The result is not normal

Hello D-X-Y,
I am trying to use your model to predict a facial landmark layout with more points, but after training, all of the predicted landmarks collapse onto the same point. What could be the reason for this problem?
Looking forward to your reply, thank you!

Performance question

I could not find any speed figures in the TS3 paper. How large is your trained model, and roughly how many milliseconds does it take per image on CPU and on GPU?

[feature request] Complete TS3 Project Code

Which project are you using?

TS3

Issue description

To my understanding, the files under the models directory are not all of the source code, right? Will you release the data-loading part and the training function later?

The checkpoint_49.pth.tar in SAN_300W_GTB_itn_cmp_3_50_sigma4_128_128_8 is broken

Which project are you using?

SAN

Issue description

The checkpoint_49.pth.tar in SAN_300W_GTB_itn_cmp_3_50_sigma4_128_128_8 is broken and cannot be decompressed. Could you provide a new checkpoint_49.pth.tar on Baidu Yun? That would be a big help for me. Thank you very much!

Can I train a model with SBR to detect any dataset?

Hello,
I understand the model is designed to detect landmarks for a video on which it has been trained. However, I am wondering whether I can train a model on a large-scale dataset and then run inference on any dataset, so that the model can do runtime inference.

May I ask some questions:

  1. Have you succeeded in training such a model?
  2. The backbone I use is a pretrained model that already outputs quite stable landmarks (with slight jitter), and I would like to further reduce this jitter. Would SBR help in this case, or can SBR only improve a poorly performing model trained with limited data?
  3. In your code, the batch is split within the LK operation and processed sequence by sequence. Could you please tell me why it is not designed to proceed by batch?

Thank you.

training is not normal

I trained the 68-point network on your style datasets, but things did not go as expected.
First, I ran 300W_Cluster.sh to cluster the 300W-original dataset into three groups; here is my running log:
cluster_seed_1986_22-Aug-at-07-51-49.txt
Since cluster-00-03.lst is very small, I chose cluster-01-03.lst and cluster-02-03.lst as cycle_a_lists and cycle_b_lists.
Then I ran 300W_CYCLE_128.sh, found the running log abnormal, and stopped it manually. Here is that running log:
seed-8335-26-Aug-at-02-18-08.log
In the log above, the D-A and D-B values are always around 0.15 and do not decrease. Any suggestions?
Thanks @D-X-Y

Questions about 'Evaluate on 300-W or AFLW'

I recently read your SAN paper. Thank you very much for open-sourcing the code for everyone to learn from. I hope you can answer some of my questions in your spare time. Your test scripts
bash scripts/300W/300W-EVAL.sh 0,1
bash scripts/AFLW/AFLW_CYCLE_128.FULL-EVAL.sh 0,1
generated some .pth.tar files. I want to know what information these files hold and what they are used for.
Additionally, I want to know whether there is a way to evaluate images in bulk and save the visualization results (with landmark points).
Thanks!

Are you going to release the trained SAN model on 300w?

Dear Xuanyi Dong,

First of all, I would like to congratulate you on your excellent work. I'm a PhD student in Spain, and my research is focused on face alignment. I have used your https://github.com/D-X-Y/SAN code and I would like to ask some questions:

  • Are the best trained 300W and AFLW models publicly available? I have downloaded the SAN_300W_GTB_itn_cpm_3_50_sigma4_128x128x8 model that you provide, but the results I obtain are far from those reported in the paper https://arxiv.org/abs/1803.04108:
 > Full:   NME: 6.053968616522241   AUC: 32.29837440048558   FR: 15.965166908563134
 > Helen:  NME: 4.968793431617647   AUC: 39.18139403496543   FR: 5.454545454545457
 > LFPW:   NME: 5.1568256324448525  AUC: 36.52686151341836   FR: 6.696428571428569
 > Common: NME: 5.044820891879911   AUC: 38.10815694365434   FR: 5.956678700361007
 > iBUG:   NME: 10.195211871721133  AUC: 8.350256928175838   FR: 57.03703703703704

We would like to reproduce the 3.98 NME reported on the 300W Full set. I look forward to your response.

Best regards,
Roberto Valle

Can't load the pre-trained model

Hi D-X-Y, I downloaded the pre-trained model vgg16-397923af.pth and passed it as the model parameter when running eval.py, but it failed at param = snapshot['args'], reporting KeyError: 'args'. So I retrained the detector as the documentation describes; that error disappears, though the retrained detector does not look as good as your demo.
Please give me some help: why doesn't the pre-trained model work? The retrained model does not work as well as your demo even with exactly the same code.
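
For reference, vgg16-397923af.pth is the torchvision ImageNet VGG-16 weights file, i.e. a bare backbone state_dict, so it presumably lacks the 'args' entry that eval.py reads from a full detector snapshot, which would explain the KeyError. A quick hedged sketch for inspecting what a checkpoint file actually contains before passing it to eval.py:

import torch

snapshot = torch.load('vgg16-397923af.pth', map_location='cpu')
if isinstance(snapshot, dict):
    # A full detector snapshot is expected to expose entries such as 'args' alongside
    # the weights; a bare backbone file only contains parameter tensors.
    print('top-level keys:', list(snapshot.keys())[:10])
else:
    print('loaded object of type:', type(snapshot))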

ModuleNotFoundError: No module named 'apt_pkg'

Using SAN project in a comparative review

"cluster.py" No module named 'models'

Hello, when I run sh scripts/300W/300W_Cluster.sh 0,1 GTB 3, it reports that there is no 'models' module. Is models.py missing? Thanks.

$ sh scripts/300W/300W_Cluster.sh 0,1 GTB 3
script name: scripts/300W/300W_Cluster.sh
3 arguments
Traceback (most recent call last):
  File "cluster.py", line 23, in <module>
    import debug, models, options
  File "/home/hy/exercise/semantic_segmentation/SAN/lib/debug/__init__.py", line 1, in <module>
    from .debug_main import main_debug_save
  File "/home/hy/exercise/semantic_segmentation/SAN/lib/debug/debug_main.py", line 6, in <module>
    import models
ModuleNotFoundError: No module named 'models'

about SBR pretrained model

Hey,
I used your pretrained SBR model, and the results are impressive, though I have one question: did you train the model only on the 300W dataset, i.e. does the training not include 300VW?
If so, and we train it on 300W and 300VW combined, should we expect a significant increase in accuracy?
Regards, Kadir

Error on running cluster.py

While running the command "sh scripts/300W/300W_Cluster.sh 0,1 GTB 3" I got the following error message:

Traceback (most recent call last):
  File "cluster.py", line 232, in <module>
    main()
  File "cluster.py", line 58, in main
    resnet = models.resnet152(True, num_classes=4)
AttributeError: module 'models' has no attribute 'resnet152'

Do you have any idea what I did wrong?

Hi! Some problems about data processing

This is about the SAN project. If I want to train the network on my own dataset, how should I use Photoshop (PS) to produce the three kinds of pictures (gray, sketch, light)? I have not used PS before; can you tell me how to process the pictures and which PS settings to use? Thank you!!!
