
lab's People

Contributors

e-sarkis, wywu


lab's Issues

Details of generating the boundary map

Hi @wywu,
Thanks for your impressive work.
I am trying my best to generate boundary maps the same as the ones in your paper. Unfortunately, there are large differences between my generated boundary maps and yours.
In my implementation I set the resolution to 256 and the line width to 20 for the binary map, and use distance type L2 with mask size 5 for the distance map. I would like to know the parameters in your setting. Thanks in advance.
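For reference, the usual boundary-heatmap pipeline (rasterize the boundary into a binary map, take an L2 distance transform, decay it with a Gaussian) can be sketched as follows. The resolution, line width, and sigma below are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_dilation

def boundary_heatmap(points, size=256, line_width=1, sigma=3.0):
    """Rasterize an ordered boundary polyline into a binary map, then turn
    its L2 distance transform into a Gaussian-decayed heatmap.
    `line_width` and `sigma` are illustrative, not the paper's values."""
    binary = np.zeros((size, size), dtype=bool)
    # naive rasterization: mark pixels along each segment
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        binary[np.clip(ys, 0, size - 1), np.clip(xs, 0, size - 1)] = True
    if line_width > 1:
        binary = binary_dilation(binary, iterations=line_width // 2)
    # distance from every pixel to the nearest boundary pixel (L2)
    dist = distance_transform_edt(~binary)
    heat = np.exp(-dist ** 2 / (2 * sigma ** 2))
    heat[dist > 3 * sigma] = 0.0  # truncate far-away responses
    return heat
```

Sweeping `line_width` and `sigma` while comparing against the paper's figures is one way to recover the unstated parameters.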

How to evaluate on new face data

@wywu thanks for sharing your great work!
I would like to run the pretrained model to get face landmarks on some new face data without annotations. Will you share code to evaluate on a new dataset?
Thanks!

Can't build /tools/alignment_tools.bin

Hello, can anybody help me please? I got an error message when building the modified Caffe for LAB. The error info is as follows:

.build_release/tools/alignment_tools.o: In function 'RunTestOnWFLW() [clone .omp_fn.0]':
alignment_tools.cpp:(.text+0x4f1): undefined reference to 'alignment_tools::ConvertImageToBGR(cv::Mat&)'
alignment_tools.cpp:(.text+0x881): undefined reference to 'alignment_tools::CalcAffineMatByPose(std::vector<cv::Point_, std::allocator<cv::Point_ > > const&, std::vector<cv::Point_, std::allocator<cv::Point_ > > const&)'
alignment_tools.cpp:(.text+0xe9e): undefined reference to 'alignment_tools::NormalizeImage(cv::Mat&)'
alignment_tools.cpp:(.text+0x1586): undefined reference to 'alignment_tools::InvAffinePose(cv::Mat_ const&, std::vector<cv::Point_, std::allocator<cv::Point_ > > const&)'
alignment_tools.cpp:(.text+0x16b7): undefined reference to 'alignment_tools::InvAffinePose(cv::Mat_ const&, std::vector<cv::Point_, std::allocator<cv::Point_ > > const&)'
alignment_tools.cpp:(.text+0x1cc8): undefined reference to 'alignment_tools::ConvertImageToGray(cv::Mat&)'
.build_release/tools/alignment_tools.o: In function 'RunTestOnWFLW()':
alignment_tools.cpp:(.text+0x4552): undefined reference to 'alignment_tools::ReadImageLabelList(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int const&, std::vector<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > >&, std::vector<std::vector<float, std::allocator >, std::allocator<std::vector<float, std::allocator > > >&)'
alignment_tools.cpp:(.text+0x4615): undefined reference to 'alignment_tools::ReadLabelList(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, int const&)'
alignment_tools.cpp:(.text+0x54d9): undefined reference to 'alignment_tools::WriteImageLabelList(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const&, std::vector<std::vector<float, std::allocator >, std::allocator<std::vector<float, std::allocator > > > const&)'
collect2: error: ld returned 1 exit status
Makefile:628: recipe for target '.build_release/tools/alignment_tools.bin' failed
make: *** [.build_release/tools/alignment_tools.bin] Error 1
make: *** Waiting for unfinished jobs....

I have googled this error but haven't found any helpful info. By the way, I build this Caffe in a different environment: CUDA 9.0, cuDNN 7.0, Python 3.5, OpenCV 3.3.1, MATLAB 2016b.
Thank you very much.

About pose

Hi,
Can this project estimate head pose?
This is very important to me.
Thank you very much!

standard normalised landmarks mean error

How can I calculate the standard normalised landmark mean error? I calculate it as
mean error = (mean_nor / distance).
Is "distance" = "eye_distance" correct?
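For reference, a minimal sketch of the normalised mean error, assuming the 68-point Multi-PIE indexing (outer eye corners at 36 and 45, pupils approximated by the mean of the 6 points around each eye); which reference distance the paper uses is exactly what the question asks:

```python
import numpy as np

def nme(pred, gt, norm="inter_ocular"):
    """Normalised mean error for one face: mean point-to-point L2 error
    divided by a reference distance. Indices assume the 68-point
    Multi-PIE scheme (36-41 left eye, 42-47 right eye)."""
    per_point = np.linalg.norm(pred - gt, axis=1).mean()
    if norm == "inter_ocular":          # outer eye-corner distance
        d = np.linalg.norm(gt[36] - gt[45])
    else:                                # inter-pupil: mean of 6 points per eye
        left = gt[36:42].mean(axis=0)
        right = gt[42:48].mean(axis=0)
        d = np.linalg.norm(left - right)
    return per_point / d
```

With a 3-4-5 offset on every point and a 100-pixel inter-ocular distance, the inter-ocular NME comes out as 0.05.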

make error

Hi @wywu
I'm really interested in your method and the great results presented in your paper.
I'd like to reproduce your code, but I get errors when making it.
What I have done is "make all -j4" in the Home directory.
What I get is
"src/caffe/alignment_tools/io.cpp: In function ‘void alignment_tools::ReadImageLabelList(const string&, const int&, std::vector<std::__cxx11::basic_string >&, std::vector<std::vector >&)’:
src/caffe/alignment_tools/io.cpp:52:27: warning: ignoring return value of ‘char* fgets(char*, int, FILE*)’, declared with attribute warn_unused_result [-Wunused-result]
fgets(buf, max_path, fp);
"
and
"Makefile:572: recipe for target '.build_release/lib/libcaffe.so.1.0.0' failed"
Although it may be a negligible problem, I hope you or anyone else can give me some help.
My system is Ubuntu 16.04, with gcc-5.4, cuda-8.0.
thx.

About installation

My PC runs Ubuntu 16.04.4 LTS with protobuf 3.5.1 installed (protobuf 3.6 is newer; versions below 3.4 are older).
When I run "make", it returns:

PROTOC src/caffe/proto/caffe.proto
CXX .build_release/src/caffe/proto/caffe.pb.cc
.build_release/src/caffe/proto/caffe.pb.cc: In member function ‘void caffe::BlobProtoVector::InternalSwap(caffe::BlobProtoVector*)’:
.build_release/src/caffe/proto/caffe.pb.cc:5329:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::BlobProto’
blobs_.InternalSwap(&other->blobs_);
^
.build_release/src/caffe/proto/caffe.pb.cc: In member function ‘void caffe::NetParameter::InternalSwap(caffe::NetParameter*)’:
.build_release/src/caffe/proto/caffe.pb.cc:6968:39: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::V1LayerParameter’
layers_.InternalSwap(&other->layers_);
^
.build_release/src/caffe/proto/caffe.pb.cc:6969:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrField<std::cxx11::basic_string >’
input
.InternalSwap(&other->input
);
^
.build_release/src/caffe/proto/caffe.pb.cc:6971:49: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::BlobShape’
input_shape_.InternalSwap(&other->input_shape_);
^
.build_release/src/caffe/proto/caffe.pb.cc:6972:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::LayerParameter’
layer_.InternalSwap(&other->layer_);
.build_release/src/caffe/proto/caffe.pb.cc:33636:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::BlobProto’
blobs_.InternalSwap(&other->blobs_);
^
.build_release/src/caffe/proto/caffe.pb.cc:33639:41: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::NetStateRule’
include_.InternalSwap(&other->include_);
^
.build_release/src/caffe/proto/caffe.pb.cc:33640:41: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::NetStateRule’
exclude_.InternalSwap(&other->exclude_);
^
.build_release/src/caffe/proto/caffe.pb.cc:33642:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrField<std::cxx11::basic_string >’
param
.InternalSwap(&other->param
);
^
.build_release/src/caffe/proto/caffe.pb.cc: In member function ‘void caffe::V0LayerParameter::InternalSwap(caffe::V0LayerParameter*)’:
.build_release/src/caffe/proto/caffe.pb.cc:35457:37: error: ‘google::protobuf::internal::RepeatedPtrFieldBase’ is an inaccessible base of ‘google::protobuf::RepeatedPtrFieldcaffe::BlobProto’
blobs_.InternalSwap(&other->blobs_);
^
Makefile:588: recipe for target '.build_release/src/caffe/proto/caffe.pb.o' failed
make: *** [.build_release/src/caffe/proto/caffe.pb.o] Error 1

The "inaccessible base of" error is also a problem with protobuf versions.
Can you tell me about your environment, i.e. which versions of g++, protoc, and so on you used?
Thank you very much!

Network forward pass speed

I have benchmarked the models with the built-in "caffe time" utility; do the numbers look sane? Also, I'm not sure why I get such a large CPU-time difference between the Mac and Ubuntu builds, and why the build with cuDNN is slower for some reason.

Mac OS

./build/tools/caffe time --model=./models/WFLW/WFLW_final/rel.prototxt
Average Forward pass: 2493.78 ms.
Average Backward pass: 375.324 ms.
Average Forward-Backward: 2870.7 ms

./build/tools/caffe time --model=./models/WFLW/WFLW_wo_mp/rel.prototxt
Average Forward pass: 694.657 ms.
Average Backward pass: 196.416 ms.
Average Forward-Backward: 891.54 ms

Ubuntu

CPU
./build/tools/caffe time --model=./models/WFLW/WFLW_final/rel.prototxt
Average Forward pass: 4468.37 ms.
Average Backward pass: 393.096 ms.
Average Forward-Backward: 4863.64 ms.

GeForce GTX TITAN X
./build/tools/caffe time --model=./models/WFLW/WFLW_final/rel.prototxt -gpu 0
Average Forward pass: 178.025 ms.
Average Backward pass: 133.243 ms.
Average Forward-Backward: 312.231 ms.

CPU
./build/tools/caffe time --model=./models/WFLW/WFLW_wo_mp/rel.prototxt
Average Forward pass: 806.207 ms.
Average Backward pass: 193.004 ms.
Average Forward-Backward: 1001.08 ms.

GeForce GTX TITAN X
./build/tools/caffe time --model=./models/WFLW/WFLW_wo_mp/rel.prototxt -gpu 0
Average Forward pass: 60.9703 ms.
Average Backward pass: 46.9149 ms.
Average Forward-Backward: 108.198 ms.

With CUDNN build:

 ./build/tools/caffe time --model=./models/WFLW/WFLW_final/rel.prototxt -gpu 0
I1124 18:21:10.306239 14327 caffe.cpp:408] Average Forward pass: 207.585 ms.
I1124 18:21:10.306246 14327 caffe.cpp:410] Average Backward pass: 137.963 ms.
I1124 18:21:10.306253 14327 caffe.cpp:412] Average Forward-Backward: 346.198 ms.

./build/tools/caffe time --model=./models/WFLW/WFLW_wo_mp/rel.prototxt -gpu 0
I1124 18:19:52.215446 14266 caffe.cpp:408] Average Forward pass: 74.862 ms.
I1124 18:19:52.215452 14266 caffe.cpp:410] Average Backward pass: 47.7963 ms.
I1124 18:19:52.215461 14266 caffe.cpp:412] Average Forward-Backward: 122.95 ms

Closing-eye data

Hi, does WFLW contain closed-eye face data? I trained a model and found that the eye landmarks are not accurate: when I close my eyes, the landmarks do not change...

300w dataset--bbox

Hi,

I am going to reproduce your results on the 300W and WFLW datasets.

I found that the face-alignment result is sensitive to the face bounding box. You mentioned in your paper that all training images are cropped and resized to 256x256 according to the provided bounding boxes. WFLW provides a bbox in the annotation file, but the 300W dataset provides two different bounding boxes: one detected by a face detector, and another determined by the 68 landmarks.

So which bounding box is used in your experiments? Or do you use some other face detector?

Thank you.

Regards,
Kiki

Which datasets have you used?

Hi, I would like to know which datasets you used to achieve the effect in the video demo, which not only handles large poses but also shows almost no jittering. How did you do that?

The program keeps waiting...

After the program loaded the whole network, the console stopped outputting anything new. I used 'nvidia-smi' to check the memory usage: only 207 MB? Is something wrong?

GPU memory usage

Hi, I'd like to know if there is something I can do to use this code with ~12GB of GPU memory

About the WFLW dataset

Dear author,
The WFLW dataset offers 98 landmarks. Are the 68 landmarks in 300W a subset of the 98 landmarks here? Can you provide the correspondence? Thanks a lot.

face landmark detection is frame by frame?

@wywu Thanks for sharing this great work. In the demo video the face landmarks seem to shake a little. Are the landmarks detected frame by frame? Is there any method to keep the landmarks stable when processing video?
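As a generic post-processing idea (the LAB demo does not document any smoothing, so this is only a sketch of one common trick, not the authors' method), per-frame detections can be stabilised with an exponential moving average:

```python
import numpy as np

class LandmarkSmoother:
    """Exponential moving average over per-frame landmark detections;
    a common post-hoc way to reduce jitter at the cost of some lag."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # 1.0 = no smoothing, small alpha = heavy smoothing
        self.state = None
    def __call__(self, landmarks):
        landmarks = np.asarray(landmarks, dtype=float)
        if self.state is None:
            self.state = landmarks          # first frame passes through
        else:
            self.state = self.alpha * landmarks + (1 - self.alpha) * self.state
        return self.state
```

Stronger alternatives (e.g. the One Euro filter or optical-flow tracking between detections) trade lag against jitter more gracefully.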

Is there any pretrained model for network training?

I have some doubts about LAB training:

  1. Do you use any pretrained models for training? Looking at the prototxt file, I found that the residual unit in the hourglass estimator network seems changed (input channels -> output channels/4 -> output channels/4 -> output channels), which differs from the original hourglass network for pose estimation.

  2. Can I train the two networks separately? From the paper, I noticed that the dfake op is the only reason the first estimator's training needs the next regressor's output.

  3. Finally, do you have any advice for reducing the size of the estimator network? I wrote the code in TensorFlow, and it takes a long time to load the network structure and do initialization; training is also slow.

Best regards.
Thank you for your help.

Question about the gap between IPN and ION: were IPN and ION measured with the same model?

Hello,
Thank you for providing such an excellent paper.
I see that in your paper the IPN on 300W is 4.12 and the ION is 3.49.
The ION and IPN formulas differ only in the normalization term, so IPN = (outer-eye-corner distance / inter-pupil distance) * ION. On the 300W test set, (outer-eye-corner distance / inter-pupil distance) ≈ 1.39. My own detection results and the numbers reported in the 3DDE, MSM, and AWing papers all follow this rule. Accordingly, an ION of 3.49 should correspond to an IPN of about 4.85. (Here the pupil coordinate is taken as the mean of the 6 points around each eye.)
So I would like to ask:
1. When measuring IPN and ION, did you use the same model? Could you provide the IPN evaluation code (I have already seen the algorithm you gave in another issue)?
2. How should training be done to improve the IPN? That is, if a model's ION is 3.49 (which should correspond to an IPN of about 4.85), how can it be trained to further improve the IPN?

Thank you for your answers.

Paper inconsistencies

Dear Wayne Wu,

First of all, I would like to congratulate you on your excellent work. I'm a PhD student in Spain. My research is focused on face alignment. I have used your WFLW trained model successfully and I have read your CVPR paper. I would like to ask some questions:

  • Training. I am not sure which training images you are using in your experiments on COFW-29 and AFLW. I understand that in the LAB result you trained using the 300W images to supply boundary information ... consequently it is not comparable with the literature. On the other hand, in the LAB w/o boundary result, how is it possible to train such a complex DCNN (res-18 architecture) using only 1345 training images in COFW? Are you using fine-tuning or training from scratch?

  • Testing. Your results in the 300W table are inconsistent between pupil and corner normalization. For example, according to the literature it is not possible to obtain, on the challenging subset, an NME of 6.98 (pupils) and 5.19 (corners).

Method Pupils Corners
LAB 6.98 5.19
SHN paper 7.00 4.9
DAN paper 7.57 5.24

Do you have an explanation for this normalization error? Could you update your fixed results or release the trained model in 300W?

I look forward to your response.

Best regards,
Roberto Valle

make problem

Hi @wywu
I'm really interested in your method and the great results presented in your paper.
I tried to run the pretrained model, but I get an error involving "alignment_tools.cpp".
When I try to check whether "alignment_tools.cpp" can run, it returns:
alignment_tools.cpp: line 20: //: Is a directory alignment_tools.cpp: line 27: using: command not found alignment_tools.cpp: line 28: using: command not found alignment_tools.cpp: line 29: using: command not found alignment_tools.cpp: line 30: using: command not found alignment_tools.cpp: line 33: syntax error near unexpected token thread_num,'
alignment_tools.cpp: line 33: DEFINE_int32(thread_num, 1, "thread_num");'
Because I run the model on my school's server, I have no permission to run the make command. Its system is CentOS Linux 7.5.

Question regarding your Inter-pupil normalization scheme

Hi, when I was evaluating my algorithm with inter-ocular normalisation, similar to the MATLAB code provided with the official 300W evaluation code, I got ~3.30%, which is better than the result you reported in your paper; but when I tried to evaluate with inter-pupil normalisation, I could only get 4.30%, which is worse than the result you reported. I think I might have something wrong with the inter-pupil normalisation. As I posted here, I calculated the inter-pupil distance for each face using the average position of the 6 points around each eye.

Could you please provide some details on how you computed the NME? It would be great if you could share a short piece of code as well.

Thank you so much.
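For reference, converting a corner-normalised NME into a pupil-normalised one per face can be sketched as below, with the pupil approximated as the mean of the 6 points around each eye (68-point Multi-PIE indices assumed; the geometry in the test is synthetic, so its ratio is 1.25 rather than the ~1.39 observed on 300W):

```python
import numpy as np

def ion_to_ipn(nme_ion, gt):
    """Rescale a corner-normalised NME (ION) into a pupil-normalised one
    (IPN) for a single face. Indices assume the 68-point scheme: outer
    eye corners at 36 and 45; pupil = mean of the 6 points per eye."""
    inter_ocular = np.linalg.norm(gt[36] - gt[45])
    left_pupil = gt[36:42].mean(axis=0)
    right_pupil = gt[42:48].mean(axis=0)
    inter_pupil = np.linalg.norm(left_pupil - right_pupil)
    return nme_ion * inter_ocular / inter_pupil
```

Averaging this per-face conversion over a test set is what produces the dataset-level ratio the issue above discusses.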

Model comparison

In the paper mean error is used for model comparison, but I wonder: is it common to use a statistical test to compare models? I.e., check whether the error distributions of models A and B are normal and then use a statistical test to compare the distributions?
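One concrete option (a sketch, not something the paper does): since per-image NME distributions are typically skewed and heavy-tailed rather than normal, a non-parametric paired test such as the Wilcoxon signed-rank test is a safer default than a paired t-test:

```python
import numpy as np
from scipy import stats

def compare_models(err_a, err_b, alpha=0.05):
    """Paired comparison of two models' per-image NMEs on the same test
    images. Uses the Wilcoxon signed-rank test, which does not assume
    the error differences are normally distributed."""
    stat, p = stats.wilcoxon(err_a, err_b)
    return p, bool(p < alpha)
```

A normality check (e.g. a Shapiro-Wilk test on the paired differences) can justify switching to a paired t-test when it does hold.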

Interpolating points into a boundary line

Hi,
I was wondering what kind of interpolation method you use to generate the boundary line. I have tried some simple interpolation methods in scipy.interpolate, but the generated boundary line is not always perfect! Could you give me some advice?
thanks a lot,
Mo
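For reference, one simple approach with scipy.interpolate is a parametric B-spline fit over the ordered landmark points; whether this matches the authors' method is unknown:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def interpolate_boundary(points, n=100, smooth=0.0):
    """Fit a parametric B-spline through ordered landmark points and
    resample it densely; cubic when there are enough points, lower
    order otherwise. A small positive `smooth` can help with noisy
    landmarks (smooth=0.0 interpolates exactly)."""
    points = np.asarray(points, dtype=float)
    k = min(3, len(points) - 1)          # spline order limited by point count
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smooth, k=k)
    xs, ys = splev(np.linspace(0.0, 1.0, n), tck)
    return np.stack([xs, ys], axis=1)
```

A parametric fit avoids the problem plain y = f(x) interpolation has with vertical or self-overlapping boundary segments (e.g. the jawline).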

Where can I find the .caffemodel file?

I want to find the .caffemodel file for the pretrained model weights, but there is only model.bin. Is it possible to read model.bin and convert it into another format, like .npy?

Check Failed, ReadProtoFromBinaryFile

Hello, I was trying your work, but I get an error while loading the network. The network initialization is done, but when I try to load the weights it crashes at CopyTrainedLayersFrom("models/model.bin"):

upgrade_proto.cpp:97 Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file: models/model.bin

Do you have any idea why this might be happening? I compiled your Caffe version.

Regards!

How to calculate inter-pupil distance?

I am having trouble calculating the inter-pupil distance, since the 68-landmark scheme does not have points on the pupils.
I think there could be two ways to do this:

  1. Using the midpoint of the eye corners as the center of each pupil
  2. Using the location averaged over all 6 points around each eye as the position of each pupil

Could you please provide some details on how you calculated the NME?

Thanks
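The two options above can be written down side by side; the indices assume the 68-point Multi-PIE scheme (36-41 left eye, 42-47 right eye), and which one the paper actually uses is the open question:

```python
import numpy as np

def pupil_centers_68(pts):
    """Two common pupil approximations for 68-point annotations.
    Returns (corner-midpoint estimate, 6-point-mean estimate), each a
    (left, right) pair. Indices: 36/39 = left eye corners, 42/45 = right."""
    corner_mid = ((pts[36] + pts[39]) / 2, (pts[42] + pts[45]) / 2)   # option 1
    six_mean = (pts[36:42].mean(axis=0), pts[42:48].mean(axis=0))     # option 2
    return corner_mid, six_mean
```

On roughly symmetric eyes the two estimates nearly coincide, so the resulting inter-pupil distances (and hence the NMEs) usually differ by well under a percent.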

WFLW Ground Truth landmark coordinates doubts

Hi @wywu, we noticed that some of the points for testing are strangely labeled. For example, in image 20_Family_Group_Family_Group_20_118.jpg, the landmark for point 88 (left inner lips) and 92 (right inner lips) are labelled at the same location.
In your ground truth, this corresponds to element 176,177 and element 184,185 which coordinates are:
'511.903015' '176.186005'
'512.805786' '176.284866'

Is this intentional?


This information is taken from list_98pt_test.txt

460.250183 119.457390 460.182925 127.510249 459.524832 135.534806 458.508017 143.523536 458.612313 151.561096 460.432317 159.393538 463.191511 166.958714 466.233100 174.416003 469.346006 181.843946 472.540975 189.236712 476.246950 196.381600 481.083341 202.804774 485.935830 209.201427 489.041375 216.627161 493.190537 223.450792 500.214228 227.244569 508.168759 228.303213 517.255823 227.299277 526.137446 225.097812 534.713523 221.906679 542.897046 217.811551 550.601759 212.874392 557.721266 207.126040 564.126325 200.592164 569.690194 193.328674 574.382879 185.472764 578.252795 177.179518 581.360719 168.570873 583.769538 159.740493 585.540197 150.759816 586.793757 141.691707 587.696223 132.581386 588.421997 123.455002 462.535004 108.564011 469.135010 103.190010 476.015015 102.980003 483.630035 102.798996 490.654999 105.049004 490.475006 109.459000 483.654999 108.198997 475.845001 107.144005 468.975037 107.108994 511.433044 101.483009 525.554016 99.107010 538.833984 99.279808 553.114014 102.077003 565.017029 108.835999 552.888000 107.602989 539.021973 103.716003 525.254028 106.476006 511.147003 108.483002 501.889008 120.973999 498.866633 131.209360 496.331946 141.560868 491.558282 150.787730 486.713013 158.186005 493.663311 159.610807 500.429043 161.314491 510.028485 158.233025 519.361938 155.489105 469.398621 121.840652 474.647341 118.304444 480.800146 116.981446 487.872301 118.523926 493.822998 122.711998 487.552767 125.460749 480.735669 125.807974 474.892448 124.341421 521.302002 122.459999 527.409961 117.157154 535.035023 114.650516 543.701384 116.420714 551.577881 120.603951 543.624904 122.263489 535.553126 123.198412 528.392131 123.538798 481.094330 176.206665 488.334472 170.670319 497.429113 170.160229 504.112890 169.970744 508.685630 169.569725 525.700740 170.291797 542.072021 174.024002 533.540572 188.872838 517.744885 196.498387 500.706750 199.729979 491.416981 194.843728 485.423343 186.004672 511.903015 176.186005 486.262192 178.418047 503.790537 175.248848 
536.117056 175.362401 512.805786 176.284866 535.747143 182.982195 502.764955 190.421496 485.906791 176.158220 484.214724 121.413497 540.025767 119.944785 20--Family_Group/20_Family_Group_Family_Group_20_118.jpg

We also noticed similar behavior in other images such as:
2_Demonstration_Protesters_2_486.jpg
41_Swimming_Swimmer_41_358.jpg
12_Group_Team_Organized_Group_12_Group_Team_Organized_Group_12_328.jpg
13_Interview_Interview_Sequences_13_287.jpg
44_Aerobics_Aerobics_44_494.jpg
2_Demonstration_Protesters_2_586.jpg
and others

Difference between the .caffemodel and .bin file

Hi everyone,

In this project, the author provides the pretrained model file (model.bin). Would you mind telling me how the .caffemodel file and the .bin file differ?

Thanks for author's excellent work!

Thank you very much!

About training

After data augmentation, how should I change the released prototxt file to train? How long will training take if I use 1 GPU?

Question about running the test on WFLW with the final model

Dear author,
After running "run_test_on_wflw.sh final", the resulting file "pred_release.txt" is empty. I can't find where the problem is; does the setting of "alignment_tools.cpp" contain some mistake? Could you give me some suggestions? Thanks!

I1021 23:08:56.250052 3648 net.cpp:255] Network initialization done.
I1021 23:08:56.664265 3648 net.cpp:744] Ignoring source layer traindata_layer
I1021 23:08:56.664422 3648 net.cpp:744] Ignoring source layer data_scatter
I1021 23:08:56.664510 3648 net.cpp:744] Ignoring source layer data_scatter_data_scatter_0_split
I1021 23:08:56.762648 3648 net.cpp:744] Ignoring source layer data_scatter/concat
I1021 23:08:56.762770 3648 net.cpp:744] Ignoring source layer data_scatter/prod
I1021 23:08:56.762835 3648 net.cpp:744] Ignoring source layer data_scatter/prob/concat
I1021 23:08:56.786267 3648 net.cpp:744] Ignoring source layer res4_2/bn/gather
I1021 23:08:56.786377 3648 net.cpp:744] Ignoring source layer res4_2/bn/gather/dropout
I1021 23:08:56.798363 3648 net.cpp:744] Ignoring source layer loss_98pt
W1021 23:08:57.063319 3648 net.hpp:41] DEPRECATED: ForwardPrefilled() will be removed in a future version. Use Forward().
Killed
list 7 done
cat: ./dataset/WFLW/WFLW_annotations/list_98pt_test_largepose.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_largepose.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_expression.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_illumination.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_makeup.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_occlusion.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_blur.txt: No such file or directory
cat: ./evaluation/WFLW/WFLW_final_result/pred_98pt_test.txt: No such file or directory

Best,
hchc0704

mean error, failure rate

Thank you for your excellent work. Could you provide the formulas for calculating the mean error and failure rate on the WFLW dataset? Thanks.
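For reference, the commonly used definitions (not necessarily the paper's exact protocol) are: per-image NME = mean point-to-point error divided by the inter-ocular distance, and failure rate = fraction of images with NME above a threshold, usually 0.10. WFLW evaluation code commonly takes points 60 and 72 as the outer eye corners of the 98-point scheme — treat those indices as an assumption:

```python
import numpy as np

def wflw_metrics(preds, gts, threshold=0.10):
    """Dataset mean NME and failure rate for 98-point WFLW annotations.
    Inter-ocular distance uses points 60 and 72 (assumed outer eye
    corners); failure rate counts images whose NME exceeds `threshold`."""
    nmes = []
    for pred, gt in zip(preds, gts):
        d = np.linalg.norm(gt[60] - gt[72])
        nmes.append(np.linalg.norm(pred - gt, axis=1).mean() / d)
    nmes = np.asarray(nmes)
    return nmes.mean(), (nmes > threshold).mean()
```

The AUC of the cumulative error distribution up to the same threshold is the third metric usually reported alongside these two.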

Train/val/test split on 300W dataset?

Thank you for your excellent work. Could you provide the train/val/test split of the 300W dataset? In the paper you described which data are used for training and which for testing, but you didn't mention a validation set.

make problem

My system is Ubuntu 16.04 with Python 2.7 and Caffe (CPU-only). When I run make, it shows:

xyl@xyl-K501LB:~/文档/LAB/LAB-master$ make all -j4
CXX src/caffe/util/interp.cpp
CXX src/caffe/util/signal_handler.cpp
CXX src/caffe/util/io.cpp
CXX src/caffe/util/cudnn.cpp
CXX src/caffe/util/db.cpp
In file included from src/caffe/util/interp.cpp:4:0:
./include/caffe/util/interp.hpp:6:23: fatal error: cublas_v2.h: No such file or directory
compilation terminated.
Makefile:581: recipe for target '.build_release/src/caffe/util/interp.o' failed
make: *** [.build_release/src/caffe/util/interp.o] Error 1
make: *** Waiting for unfinished jobs....

can you please tell me where the problem is?
