
2d-and-3d-face-alignment's People

Contributors

1adrianb


2d-and-3d-face-alignment's Issues

Mapping from LS3D-W balanced to the LS3D-W images

Hi,

I am wondering whether you can provide the mapping from the LS3D-W balanced images to their original image paths in the full LS3D-W set. In other words, is there a way to identify which images from the original LS3D-W subsets (300VW-3D, 300W-Testset-3D, AFLW2000-3D-Reannotated and Menpo-3D) were sampled into the balanced set?

Thanks,

Can not run this demo on CPU

When I run this demo with the 2D-FAN model and set "-device cpu", it fails with the error "In 1 module of nn.Sequential: /home/v-doxu/torch/install/share/lua/5.1/cudnn/init.lua:171: assertion failed!"

Query regarding dataset annotation

Hey,
I want to build my own 3D face landmark annotation dataset for my internship project. How can I create one that I can also use to train models for commercial purposes?

Thanks.

datasets

I only found the LS3D-W dataset, not 300W-LP. Could you tell me where I can download it?

Scale factor in utils transform and get_normalisation

First of all, thanks for such a wonderful library.
The factor that determines the scale in utils.get_normalisation is 195: scale = (math.abs(maxX-minX)+math.abs(maxY-minY))/195
while the same factor in utils.transform is 200: h = 200*scale
Shouldn't both be the same, either 200 or 195?
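For what it's worth, the interaction between the two constants is easy to see in a quick Python sketch of the two Lua lines (function names are mine, not the repo's):

```python
def bbox_scale(min_x, min_y, max_x, max_y):
    # utils.get_normalisation: scale = (|maxX-minX| + |maxY-minY|) / 195
    return (abs(max_x - min_x) + abs(max_y - min_y)) / 195.0

def crop_window(scale):
    # utils.transform: h = 200 * scale (reference window size in pixels)
    return 200.0 * scale

# A box whose width + height sums to 195 px gets a 200 px reference
# window, i.e. the crop is slightly larger (200/195 ~ 1.026) than the
# landmark extent:
print(crop_window(bbox_scale(0, 0, 100, 95)))  # -> 200.0
```

So the pairing may be deliberate, adding a small margin around the detected extent, though only the authors can confirm that.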

"Segmentation fault (core dumped)"

I have followed all the steps mentioned in the readme file, but when I run the script:

th main.lua

The console says: 'Segmentation fault (core dumped)'

I have the models and the dataset in the same folder, and I am trying to follow the paths described in the 'opts.lua' file.

Is there anything more I have to do to get the program working?

"Found 0 images" message

I have the application apparently working, but it looks like it is not able to locate the images.
I deduced the folder structure by checking the 'opts.lua' file.

My folder structure is:
2D-and-3D-face-alignment
2D-and-3D-face-alignment/dataset
2D-and-3D-face-alignment/dataset/LS3D-W/
2D-and-3D-face-alignment/dataset/LS3D-W/*.t7 # All the LS3D-W dataset
2D-and-3D-face-alignment/dataset/LS3D-W/*.jpg # All the LS3D-W dataset
2D-and-3D-face-alignment/models
2D-and-3D-face-alignment/models/2D-FAN.t7 # A renamed version of 2D-FAN-generic.t7
2D-and-3D-face-alignment/LICENCE
2D-and-3D-face-alignment/main.lua
2D-and-3D-face-alignment/opts.lua
2D-and-3D-face-alignment/README.md
2D-and-3D-face-alignment/utils.lua

Is the folder structure wrong?

Question about caffe implementation

Hi,
I have a question about a Caffe implementation of this model:
table.insert(out, tmpOut)
I want to know the meaning of this line. I only use tmpOut4 as the output and the result is poor. Should I concatenate all of the tmpOut tensors into the output?
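For context, stacked-hourglass-style networks usually emit one heatmap set per stack: table.insert(out, tmpOut) collects them so that training can apply the loss to every stack ("intermediate supervision"), while inference typically keeps only the last stack's output rather than concatenating them. A minimal sketch of that pattern (names and criterion are illustrative, not the repo's code):

```python
def total_loss(outputs, target, criterion):
    # Intermediate supervision: sum the loss over every stack's output.
    return sum(criterion(out, target) for out in outputs)

def prediction(outputs):
    # Inference: only the final stack's output is used.
    return outputs[-1]

# Toy example with scalar "heatmaps" and a squared-error criterion:
sq_err = lambda o, t: (o - t) ** 2
outs = [1.0, 2.0, 3.0]                 # one output per stack
print(total_loss(outs, 3.0, sq_err))   # 4 + 1 + 0 -> 5.0
print(prediction(outs))                # -> 3.0
```

So if using tmpOut4 alone gives poor results, the problem is more likely in the port itself than in the choice of output.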

Building docker image fails when building dlib because of 'Boost python library not found'

I was trying to build the docker image using docker build -t face-alignment . and the process failed at step 12/13:

Step 12/13 : RUN pip install -r requirements.txt
...
Building wheels for collected packages: dlib, networkx, olefile
  Running setup.py bdist_wheel for dlib: started
  Running setup.py bdist_wheel for dlib: finished with status 'error'
  Complete output from command /opt/conda/envs/pytorch-py35/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-w8dr42qr/dlib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpcna1qvlhpip-wheel- --python-tag cp35:
  running bdist_wheel
  running build
  Detected Python architecture: 64bit
  Detected platform: linux
  Configuring cmake ...
  -- The C compiler identification is GNU 5.4.0
  -- The CXX compiler identification is GNU 5.4.0
  -- Check for working C compiler: /usr/bin/cc
  -- Check for working C compiler: /usr/bin/cc -- works
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Check for working CXX compiler: /usr/bin/c++
  -- Check for working CXX compiler: /usr/bin/c++ -- works
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  CMake Warning at /usr/share/cmake-3.5/Modules/FindBoost.cmake:725 (message):
    Imported targets not available for Boost version
  Call Stack (most recent call first):
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:1332 (_Boost_MISSING_DEPENDENCIES)
    /tmp/pip-build-w8dr42qr/dlib/dlib/cmake_utils/add_python_module:61 (FIND_PACKAGE)
    CMakeLists.txt:9 (include)
  -- Could NOT find Boost
  CMake Warning at /usr/share/cmake-3.5/Modules/FindBoost.cmake:725 (message):
    Imported targets not available for Boost version
  Call Stack (most recent call first):
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:1332 (_Boost_MISSING_DEPENDENCIES)
    /tmp/pip-build-w8dr42qr/dlib/dlib/cmake_utils/add_python_module:63 (FIND_PACKAGE)
    CMakeLists.txt:9 (include)
  -- Could NOT find Boost
  CMake Warning at /usr/share/cmake-3.5/Modules/FindBoost.cmake:725 (message):
    Imported targets not available for Boost version
  Call Stack (most recent call first):
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:1332 (_Boost_MISSING_DEPENDENCIES)
    /tmp/pip-build-w8dr42qr/dlib/dlib/cmake_utils/add_python_module:66 (FIND_PACKAGE)
    CMakeLists.txt:9 (include)
  -- Could NOT find Boost
  CMake Warning at /usr/share/cmake-3.5/Modules/FindBoost.cmake:725 (message):
    Imported targets not available for Boost version
  Call Stack (most recent call first):
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)
    /usr/share/cmake-3.5/Modules/FindBoost.cmake:1332 (_Boost_MISSING_DEPENDENCIES)
    /tmp/pip-build-w8dr42qr/dlib/dlib/cmake_utils/add_python_module:69 (FIND_PACKAGE)
    CMakeLists.txt:9 (include)
  -- Could NOT find Boost
  -- Found PythonLibs: /opt/conda/envs/pytorch-py35/lib/libpython3.5m.so (found suitable version "3.5.2", minimum required is "3.4")
  --  *****************************************************************************************************
  --  To compile Boost.Python yourself download boost from boost.org and then go into the boost root folder
  --  and run these commands:
  --     ./bootstrap.sh --with-libraries=python
  --     ./b2
  --     sudo ./b2 install
  --  *****************************************************************************************************
  CMake Error at /tmp/pip-build-w8dr42qr/dlib/dlib/cmake_utils/add_python_module:116 (message):
     Boost python library not found.
  Call Stack (most recent call first):
    CMakeLists.txt:9 (include)
  -- Configuring incomplete, errors occurred!
  See also "/tmp/pip-build-w8dr42qr/dlib/tools/python/build/CMakeFiles/CMakeOutput.log".
  error: cmake configuration failed!
  
  ----------------------------------------
  Failed building wheel for dlib
...
Command "/opt/conda/envs/pytorch-py35/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-w8dr42qr/dlib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-9qlwtoek-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-w8dr42qr/dlib/
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

From the message, I suppose I need to compile and install the Boost libraries first...

3D Alignment

Is the result reported for 3D face alignment in the paper the 2D error between the ground-truth points and the prediction? If so, why is it not the error over all three dimensions?

About the LS3D-W annotation

Hi, thanks a lot for your project!

I want to use your LS3D-W dataset as my training set. But when I access the .t7 annotation files, I get a 68x2 array instead of the 68x3 array (with a depth dimension) that I expected.
So this dataset contains 3D landmarks, right? Where can I get the depth dimension?

Thank you very much!

does it work with Torch7 with LuaJIT 2.1.0-beta1?

/home/mmf159/torch-cl/install/bin/luajit: /home/mmf159/torch-cl/install/share/lua/5.1/trepl/init.lua:384: 
/home/mmf159/torch-cl/install/share/lua/5.1/trepl/init.lua:384: attempt to index a string value
stack traceback:
	[C]: in function 'error'
	/home/mmf159/torch-cl/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
	main.lua:12: in main chunk
	[C]: in function 'dofile'
	...9/torch-cl/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00405e90

Is this a problem with LuaJIT 2.1.0-beta1?

I am trying to run https://github.com/AaronJackson/vrn

Hair of the 2d face

When I run the code, I am unable to see the hair over the face. Is there a way to render human hair on top of the face? In other words, does this model predict only the face, or can it also generate 3D parts such as hair and ears?

In some images I see a black region on top, which I assume comes from the hair colour. Can this be extended a bit so that it looks like the hair of a male person?

how to obtain bounding box files (.t7 or .pts)

Hi,

I am trying to run your demo, but I can't figure out your pipeline for obtaining the bounding-box info files required by utils.getFileLists(). I would appreciate any pointers on this.

In addition, what is the expected file format for the bounding box info?

Thanks
Ilker

Regarding licence of model file and dataset.

hi Adrian,
I have gone through the repository and found that the license used is the BSD 3-Clause "New" or "Revised" License.
1. Is the model file also covered under the BSD 3-Clause license?
2. To your knowledge, what is the licensing of the datasets you used?

Converting .t7 labels into other formats

I am trying to use this dataset on a Windows machine. After wrestling with Torch7 for a few days, I failed to get it working on Windows. My aim is to train a network from scratch with the dlib C++ library, so I first need to convert the .t7 labels (e.g. image00002.t7) into a different format.

Since I couldn't install Torch, I am trying to use this Torch deserialization tool for Python.

When I try to open the file using the following code, I get an empty array.

>>> torchfile.load('0001.t7')
array([], dtype=float64)
>>> torchfile.load('0011.t7')
array([], dtype=float64)
>>> torchfile.load('0111.t7')
array([], dtype=float64)
>>> torchfile.load('0110.t7')
array([], dtype=float64)

Can you tell me the exact format of the labels in the .t7 files? Or is there a utility you can point me to? For now I need the labels in plain text; I will decide later what format to use.
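If torchfile does manage to read the files, one way to dump landmarks to the plain-text .pts format is a small helper like the sketch below. It assumes each .t7 file deserialises to an Nx2 (or Nx3) float array, which matches what other users report for LS3D-W; an empty array from torchfile.load usually means the file was written in a Torch serialisation mode the library does not support.

```python
def landmarks_to_pts_text(landmarks):
    """Format an Nx2 (or Nx3) landmark array as a .pts file body."""
    lines = ["version: 1", "n_points: %d" % len(landmarks), "{"]
    for point in landmarks:
        lines.append(" ".join("%.3f" % v for v in point))
    lines.append("}")
    return "\n".join(lines) + "\n"

# Loading side (requires `pip install torchfile`; untested here):
#   import torchfile
#   arr = torchfile.load("image00002.t7")
#   open("image00002.pts", "w").write(landmarks_to_pts_text(arr))
```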

Install fb-python on windows?

I tried to install fb-python on Windows but got the following message:

E:\Work\Lib\2D-and-3D-face-alignment>luarocks search python

python - Search results for Lua 5.1:

Rockspecs and source rocks:

fbpython
0.1-2 (rockspec) - https://raw.githubusercontent.com/torch/rocks/master

python
scm-0 (rockspec) - https://raw.githubusercontent.com/torch/rocks/master

E:\Work\Lib\2D-and-3D-face-alignment>luarocks install fbpython
Installing https://raw.githubusercontent.com/torch/rocks/master/fbpython-0.1-2.rockspec

Error: This rockspec for fbpython does not support win32, windows platforms.

How to install fb-python on windows?

thank you

Bad result!!

The 2D-FAN-300W.t7 model gives a bad result! However, the PyTorch version is good. (screenshot attached)

Datasets compatibility?

When the model is trained on 300W-LP, the center and scale are predefined at these lines, but when I run your demo code I found that the center and scale are computed from the ground-truth landmarks at these lines. Also, I don't understand where parameters like 0.12 and 195 come from.
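For reference, the bounding-box-to-(center, scale) conversion used in this family of code looks roughly like the sketch below; the 0.12 offsets the box center vertically by 12% of the box height (detectors tend to box the head rather than the landmark extent), and 195 is the same empirical reference size as in get_normalisation. This is a paraphrase, not the repo's exact code:

```python
def bbox_to_center_scale(left, top, right, bottom):
    center_x = right - (right - left) / 2.0
    center_y = bottom - (bottom - top) / 2.0
    # Shift the center vertically by 12% of the box height to better
    # align detector boxes with the landmark-derived center.
    center_y -= (bottom - top) * 0.12
    # Same empirical reference size as utils.get_normalisation.
    scale = ((right - left) + (bottom - top)) / 195.0
    return (center_x, center_y), scale

center, scale = bbox_to_center_scale(0, 0, 100, 95)
```

With this convention, a box whose width + height equals 195 px gets scale 1.0, which then maps to the 200 px reference window in utils.transform.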

The AUC on LS3D-W balanced dataset is 17-18% lower than the result reported in the papers

LS3D-W Checksum

The dataset is large. Can you provide md5sum of LS3D-W.tar.gz?

Also some questions:

  • Are 2D-FAN and 3D-FAN trained on LS3D-W? Section 3.1 says 300-W-LP is used for training.
  • Does Figure 9 show the result for one global model trained on LS3D-W and tested on each of the (a)-(f) test sets?
  • Do you have a plan for a binarized depth network?

About .mat file

The .mat files in the '300W_LP' dataset folder and in the 'landmarks' folder are different. I want to know how to use the .mat file in the '300W_LP' dataset folder, and what its role is. Thanks.

Some errors when i run demo

I put the 2D-FAN model and the dataset in the demo folder, but when I run th main.lua the following error occurs (see the attached screenshot):
I tried to install npy4th, but failed. Have you ever met this error? Thanks.

fb-python cannot be installed successfully

Thanks for sharing this code, but I cannot install fb-python correctly. When I run luarocks make rockspec/fbpython-0.1-1.rockspec, the following error occurs:

cmake -E make_directory build &&
cd build &&
cmake -DROCKS_PREFIX=/home/hao/torch/install/lib/luarocks/rocks/fbpython/0.1-1
-DROCKS_LUADIR=/home/hao/torch/install/lib/luarocks/rocks/fbpython/0.1-1/lua
-DROCKS_LIBDIR=/home/hao/torch/install/lib/luarocks/rocks/fbpython/0.1-1/lib
.. &&
make

-- Found Torch7 in /home/hao/torch/install
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- thread
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
Could NOT find PythonLibs: Found unsuitable version "2.7.6", but required
is at least "2.7.13" (found /usr/lib/x86_64-linux-gnu/libpython2.7.so)
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:313 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-2.8/Modules/FindPythonLibs.cmake:208 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
CMakeLists.txt:47 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!
See also "/home/hao/fblualib/fblualib/python/build/CMakeFiles/CMakeOutput.log".

Thank you in advance!

Support for MacOS

Hello, I noticed that some of the Lua dependencies require CUDA, so I'm wondering whether this codebase can run on macOS? Thanks!

2D-3D landmarks output . how?

hello and thank you for the code
I'm trying to output the 3D landmarks of an input image and I'm a bit confused.
(I'm running the code on KDE Neon / Ubuntu 16.04.)

this is the command I'm using
th main.lua -detectFaces True -type 3D -outputFormat txt -model models/3D-FAN.t7 -input images/

images contains several .jpg images

First, I get this error:
Found 5 images
5 images require a face detector
Initialising python libs...
Initialising detector...
processing /shared/foss/2D-and-3D-face-alignment/images/Surprise_363_1.jpg
/shared/foss/torch-multi/install/bin/luajit: main.lua:90: Invalid numpy data type 9

According to AaronJackson/vrn#59, I changed this line in face_detection_dlib.lua:
local detections = py.reval('[np.asarray([d.left(), d.top(), d.right(), d.bottom()]) for i, d in enumerate(dets)]',{dets=dets})
with
local detections = py.reval('[np.asarray([d.left(), d.top(), d.right(), d.bottom()],dtype=float) for i, d in enumerate(dets)]',{dets=dets})

now I have this error :
/shared/foss/torch-multi/install/bin/luajit: bad argument #2 to '?' (index out of bound at /tmp/luarocks_torch-scm-1-2154/torch7/generic/Tensor.c:965)
stack traceback:
[C]: at 0x7f0fb5057b70
[C]: in function '__index'
main.lua:184: in main chunk
[C]: in function 'dofile'
...orch-multi/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

The first .txt file is written before the crash; it contains the following lines:

379
218
[torch.FloatTensor of size 2]
, 379
250
[torch.FloatTensor of size 2]

I tried replacing preds_img with preds_hm in the loop where the .txt file is written in main.lua (lines 172 to 187), and now it runs...
but I only get 2D landmarks.
I'm certainly missing something here...

thanks in advance
luc

Cannot achieve same result in paper

Dear Mr. Adrian Bulat,
I ran your code based on the nvidia-docker file you provided, but unfortunately I cannot achieve the same result as reported in the paper. I downloaded the model from the link you provided.

Could you review the code and models and check whether there are any problems?
Thank you so much and sorry for any inconvenience.

Here is my evaluation result
===300W-Testset-3D
600
('AUC: ', 0.6637857142857142)
===Menpo-3D
8955
('AUC: ', 0.42285235702321128)
===300VW-CatA
62643
('AUC: ', 0.52723026857407407)
===300VW-CatB
32872
('AUC: ', 0.55551620136981539)
===300VW-CatC
27245
('AUC: ', 0.26830873292609392)

Here is the link to download the result of 3D-FAN that I generate
3dFAN_result.zip
Here is my modified code.
3dFAN.zip

Some questions about the Dockerfile

I now have the GPU driver and nvidia-docker on my machine. Can I use the Dockerfile directly, or do I also need to install CUDA on my machine?

Dlib Face detection blocked

Hello Adrian,

Thank you very much for sharing your code and model.
I get an error when running the landmark detection on an image without a bounding box. It seems that the dlib detector doesn't work.

The error could be found here:

/home/wisimage/torch/install/bin/luajit: ./facedetection_dlib.lua:21: Python error: opaque ref: call
RuntimeError: Unsupported image type, must be 8bit gray or RGB image.

I've noticed that you parse the image into byte and HWC format. I'm not quite sure how fb.python handles Python variables in Lua code, but print(py_img) looks fine.

Any ideas and suggestions are appreciated. Thank you
