
facesynthetics's People

Contributors

errollw, friggog, microsoft-github-policy-service[bot], tadasbaltrusaitis, tjcashman


facesynthetics's Issues

Dataset downloading

Hi!

I'm trying to download the dataset using the separate links.
However, I get an error message 'connection reset by peer' for some links.
Could you additionally provide OneDrive sharing links for the dataset?

Render passes

Thank you for this dataset. Very cool project.

Did you capture render passes, or are project files available to rerender?

I'd love to be able to experiment with normals, curvature, depth, etc.

3D landmarks?

Is it so difficult to provide 3D landmarks?

Without 3D landmarks, we cannot estimate pitch/yaw/roll correctly.

Multi view rendered images

First of all, thank you for this great work. I was wondering if the dataset could be expanded to provide images rendered from multiple camera directions, similar to CO3D from Facebook. This would enable many additional research possibilities.

larger resolution?

Could you please render at 1024 or 2048 resolution? 512 is too small for modern tasks.

6d pose annotations

Hi!
First, I would also like to express my admiration for this work.

Please also consider my request to release the 6D pose information. I think it would be interesting to train 6D pose estimators on your dataset, since it has perfect ground truth, in contrast to training on 300W-LP.

Thank you

Michael

How can we get a bounding box?

Hello,
Thanks for your great work

I am currently studying this dataset.

I want to know how you obtained the 256 x 256 bounding box.
Is there a specific method or an additional detection network for it?
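
In the meantime, here is a rough sketch of one way to derive a square crop from the released 2D landmarks (the landmark file name and the "x y per line" format are assumptions; this is not necessarily the authors' method):

```python
import numpy as np

def square_crop_from_landmarks(ldmks_path, margin=0.3):
    """Derive a square face crop from 2D landmarks.

    Assumes a per-frame landmark file with one "x y" pair per line;
    check the dataset documentation for the real file name and format.
    """
    pts = np.loadtxt(ldmks_path)                          # (N, 2) pixel coordinates
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0             # landmark bounding-box centre
    half = max(x1 - x0, y1 - y0) * (1.0 + margin) / 2.0   # pad and make square
    # Clamp to image bounds before cropping, then resize the crop to 256 x 256.
    return int(cx - half), int(cy - half), int(cx + half), int(cy + half)
```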

dense landmarks?

Hi! Besides the 70 sparse landmarks, is it possible to get the ~700 dense landmarks?

Alpha background masks.

If you could release the Alpha masks shown in the paper, they would be really useful for training background matting models (for example PaddlePaddleSeg PP-Matting)

The segmentation images only contain a binary background class, so applying these as masks can result in a halo effect.
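
In the meantime, a minimal sketch of what can be recovered from the hard segmentation labels (the background id of 0 is an assumption; this only feathers the binary edge and is no substitute for true alpha mattes):

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

BACKGROUND = 0  # assumed integer id of the background class; check the label list

def soft_foreground_mask(seg_path, blur_sigma=1.0):
    """Approximate a soft matte by feathering the hard foreground/background edge."""
    seg = np.array(Image.open(seg_path))
    fg = (seg != BACKGROUND).astype(np.float32)   # hard 0/1 foreground mask
    return gaussian_filter(fg, sigma=blur_sigma)  # slight blur to soften the halo
```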

Failing to download the full dataset

Hello guys, thank you for the great work.
I've tried numerous times to download the full dataset (32 GB), using different internet connections (an AWS machine as well), but each time the download eventually failed.
Any suggestions? Maybe there is a mirror?

separate categories?

{frame_id}_seg.png # Segmentation image, where each pixel has an integer value mapping to the categories below

Then how can we get the SKIN label under GLASSES?

(screenshot attached)

Could you please re-render the whole dataset with separate categories?
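
For context, the segmentation map stores a single integer label per pixel, so pixels covered by glasses carry only the GLASSES id and the skin underneath is simply not present in the file. A short sketch (the SKIN and GLASSES ids below are placeholders; use the values from the category table):

```python
import numpy as np
from PIL import Image

SKIN, GLASSES = 1, 15  # placeholder ids; substitute the values from the README

seg = np.array(Image.open("000000_seg.png"))   # example {frame_id}_seg.png
skin_mask = (seg == SKIN)        # skin pixels not covered by anything else
glasses_mask = (seg == GLASSES)  # these pixels hold only the GLASSES label,
                                 # so the skin beneath them cannot be recovered
```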

Wrap3D manually or with Python scripts?

Hi, you mentioned that you register your head models with R3DS. I want to know whether this process is automated or manual. Specifically, how do you choose the corresponding keypoints, which takes quite a lot of time? After wrapping, do you modify the wrapped results further to improve them?

Contrib: 3D Face Models - Just make your own

I think the research team and Microsoft have worked really hard on this research project. Sharing the dataset in its format and scale is incredibly generous and a good-faith contribution to the ML research community.

So hats off to the team, this is a really exciting paradigm shift not just for facial detection models!

Anyway, for everyone who keeps asking for them to share their 3D assets - they literally tell you everything you need to do it yourself in their short summary video!

I won't write a tutorial but here's a very rough gist of a workflow to create your own 3D face models:

  • Read the research abstract and watch the announcement video again. Pay close attention to the brief run through of their 3D model process.
  • Create a simple abstract brief of the type of dataset you want to end up with using your own 3D face models
  • Work backwards using your understanding of how the research team described their 3D workflow
  • From there you can create a rough plan of everything you'll need to collect and learn to replicate their methods.
  • If you put in some time to figure out the above, everything below will fill in the blanks and hopefully get you excited to learn, experiment and relish taking the challenge head on.
  • Download Blender (It's free and there's plenty of quality tutorials on youtube)
  • Download a pre-made 3D generic head model on Sketchfab (Also free)
  • You can also find 3d models for facial apparel and hair on Sketchfab
  • Get your head around the basics of modelling and rigging (see: YouTube) so that you can iterate head-model variants and programmatically apply dynamic poses and facial expressions
  • Use a face tracking app on your phone and collect facial expression and head movement data
  • Bind your tracking data to your rigged head models iteratively with some clever blender scripting OR
  • You can also use Unity or Unreal Engine to programmatically apply your tracking data. This will help with render performance and dynamic iteration of face variants with your 3D facial feature and accessory assets.
    This will cost you in render quality but you've got thousands of face models to render and you'll see from the supplied dataset that your renders aren't expected to be 100% photo realistic.
  • I would recommend directing your effort into optimising the anatomical accuracy of your face models.
  • Once you bake the tracking into your model variations, you can continue in your game engine context or jump back to blender to add your textures, environment, lighting and animate the camera rotation
  • The environment and lighting are a piece of cake - just grab some HDRIs from Polyhaven and apply them to your face-model variation scene, making sure they emit light; subtle generic scene lighting will help too
  • Polyhaven has you covered for your face textures too - be sure to play around with the material settings to take advantage of your lighting, enable shadows and maybe some subsurface scattering for the skin
  • Finally, create your camera rotation animation, and hopefully by this stage you'll have created a little script to iterate through your face models and render your dataset programmatically (you can write some simple Python code in Blender to get that working; see the sketch after this list)
  • Oh don't forget to use the Cycles renderer - that's what these guys used
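
Here is a very rough sketch of that Blender batch-render script (the folder paths are hypothetical and this is not the authors' pipeline; run it with `blender --background --python render_faces.py`):

```python
import glob
import os

import bpy  # Blender's Python API; only available when run inside Blender

SCENES_DIR = "/path/to/face_variants"   # hypothetical folder of .blend face scenes
OUTPUT_DIR = "/path/to/renders"

for blend_path in sorted(glob.glob(os.path.join(SCENES_DIR, "*.blend"))):
    bpy.ops.wm.open_mainfile(filepath=blend_path)         # load one face variant
    scene = bpy.context.scene
    scene.render.engine = "CYCLES"                         # the renderer mentioned above
    scene.render.resolution_x = scene.render.resolution_y = 512
    name = os.path.splitext(os.path.basename(blend_path))[0]
    scene.render.filepath = os.path.join(OUTPUT_DIR, name + ".png")
    bpy.ops.render.render(write_still=True)                # render and save the still
```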

There you go, be free and don't forget to share your 3D assets 😂
Btw, those renders are going to take weeks to months to finish, unless you max out your credit card on render farms...

Happy to help anyone brave enough to give this a go!

Edit: extra ideas

Face annotation conversion

Greetings!
Thanks for sharing such inspirational results!

I'm setting up a set of training experiments with the dataset. I found that the landmark annotation format does not map directly to 300W, WFLW, etc.:

  1. Different keypoint positions.
  2. Invisible landmarks are drawn as direct projections, whereas the other annotation schemes move landmarks to the edge of the visible part of the face.

So do you have a plan to a) provide a converter to the mentioned benchmarks or b) share 3D landmark annotations?
I would be very grateful.

Sample from your dataset: (image attached)

Example from the WFLW dataset: (image attached)
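
Until an official converter exists, a minimal sketch of what a conversion could look like (the "one x y pair per line" landmark format and the index map below are assumptions/placeholders, not a real correspondence):

```python
import numpy as np

def load_landmarks(path):
    """Load per-frame 2D landmarks; assumes one "x y" pair per line (70 lines)."""
    return np.loadtxt(path, dtype=np.float32)       # shape (70, 2)

# Hypothetical index map: for each target-scheme point (e.g. a 68-point 300W-style
# layout), the index of the closest FaceSynthetics landmark. It has to be built by
# hand once; np.arange(68) is only a placeholder, not a real mapping.
TARGET_FROM_SOURCE = np.arange(68)

def convert(src_pts, index_map=TARGET_FROM_SOURCE):
    # Note: reordering/selecting points cannot reproduce the "slide occluded
    # landmarks to the visible contour" convention of 300W/WFLW, which needs
    # 3D information or a learned adjustment.
    return src_pts[index_map]
```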

Full annotations

Hello! Thanks for the amazing work!
Do you plan to release all types of annotations? I mean depth, normals, UVs, and all head points.
That would be cool!

Could you release the corresponding synthetic 3D models?

Hi, Thanks for your great work!

I work on face synthesis and noticed this great dataset.

Could you release the corresponding synthetic 3D model for each synthetic image?

It could enable more research topics on the dataset. Thanks!

dense landmarks

Hello! Thanks for the amazing work!
Do you plan to release depth annotations for the landmarks?
That would be cool!

train val test split

Hi guys, thank you for the great work.
It seems that all the data is in one big directory. Is there a standard way to split the data into train/val/test?
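
In the absence of an official split, a deterministic assignment by frame id keeps train/val/test stable across machines. A rough sketch (the 6-character frame-id slice is an assumption about the file naming):

```python
import glob
import hashlib
import os

def split_of(frame_id, val_frac=0.05, test_frac=0.05):
    """Deterministic train/val/test assignment derived from the frame id alone."""
    bucket = int(hashlib.md5(frame_id.encode()).hexdigest(), 16) % 100
    if bucket < test_frac * 100:
        return "test"
    if bucket < (test_frac + val_frac) * 100:
        return "val"
    return "train"

# Assumes zero-padded numeric frame ids of length 6; adjust to the actual naming.
frames = sorted({os.path.basename(p)[:6] for p in glob.glob("dataset/*_seg.png")})
splits = {f: split_of(f) for f in frames}
```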

Request for split compression

Dear all,

I've tried to download the full dataset of 100,000 images (32 GB) several times via the Google Chrome browser and wget, but I cannot download it because of network errors.

I think the compressed dataset is too large to download in one piece.

Thus, I request that you upload the dataset using split compression.
That would make it possible to retry the download even if part of it fails.
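
In the meantime, resuming the download over HTTP Range requests may help (a sketch, assuming the server honours the Range header; otherwise it simply restarts):

```python
import os
import requests

def resumable_download(url, dest, chunk=1 << 20):
    """Resume an interrupted download via HTTP Range requests."""
    pos = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={pos}-"} if pos else {}
    with requests.get(url, headers=headers, stream=True, timeout=60) as r:
        r.raise_for_status()
        mode = "ab" if pos and r.status_code == 206 else "wb"  # 206 = partial content served
        with open(dest, mode) as f:
            for block in r.iter_content(chunk_size=chunk):
                f.write(block)
```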

Best regards,
Vujadeyoon

Trained networks

Thanks for this great work! Do you also plan to release the nn models you trained for your publication?

Segmentation ground truth missing a label for face masks (facewear)?

I hope I haven't got this wrong, but it seems that the ground truth is missing a label for "facewear": 18.

In a heatmap image I generated, facewear is labeled 0, the same as the background. (image attached)

So, is this intended, or is there some other reason for it?
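
For reference, a quick way to check whether the facewear id ever appears in the released segmentation maps (a sketch; the id 18 is the one quoted above and the directory layout is an assumption):

```python
import glob
from collections import Counter

import numpy as np
from PIL import Image

FACEWEAR = 18  # the id quoted above

counts = Counter()
for path in glob.glob("dataset/*_seg.png"):
    # Count, per label id, how many frames contain at least one pixel with that id.
    counts.update(np.unique(np.array(Image.open(path))).tolist())

print("frames containing label", FACEWEAR, ":", counts[FACEWEAR])
```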
(I have read the paper; it says: "we dress our faces in headwear (36 items), facewear (7 items) and eyewear (11 items) including helmets, head scarves, face masks, and eyeglasses".)

3D landmarks from a front view

Hi, Thanks for releasing the dataset.

It seems that releasing 3D landmarks for the entire dataset is not part of the plan for the near future. But I wonder if it is possible to share a reference set of the 70 3D landmarks from the front view, so the community can do pose estimation using methods such as Perspective-n-Point.
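
For illustration, this is roughly how such a shared reference set could be used with OpenCV's Perspective-n-Point solver (a sketch; the reference 3D landmarks are exactly what is being requested here, and the pinhole intrinsics are a rough assumption for 512x512 renders):

```python
import cv2
import numpy as np

def head_pose_from_landmarks(ref_3d, img_pts, image_size=512):
    """Estimate head pose from 2D landmarks via Perspective-n-Point.

    ref_3d:  (70, 3) reference 3D landmarks in a canonical head frame
             (the data this issue asks the authors to share).
    img_pts: (70, 2) released 2D landmarks for one frame.
    """
    f = float(image_size)                        # crude focal-length guess
    c = image_size / 2.0
    K = np.array([[f, 0, c],
                  [0, f, c],
                  [0, 0, 1]], dtype=np.float64)  # assumed pinhole intrinsics
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(ref_3d, dtype=np.float64),
        np.asarray(img_pts, dtype=np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)                   # rotation matrix; pitch/yaw/roll follow from R
    return R, tvec
```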

Synthetic 3D models?

For each synthetic image, is there a corresponding synthetic 3D model of the face/head?

Mirroring FaceSynthetics on Hugging Face

Hello! Thank you for creating and publishing this dataset 😄 Would you be interested in uploading it to the Hugging Face dataset hub? Hosting is free, and I know our users would find this dataset extremely valuable. Beyond helping with discoverability, datasets on Hugging Face can be used with the datasets library (https://github.com/huggingface/datasets), which enables things like streaming and also provides a ton of efficient data-manipulation tools.

We have guides on how to upload datasets if this is something you're interested in, but I'm also happy to help out with this myself!
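
For example, once mirrored, streaming a few samples without downloading the full 32 GB would look roughly like this (the repository id below is hypothetical until the dataset is actually uploaded):

```python
from datasets import load_dataset

# Hypothetical repository id; it only exists once the dataset is mirrored.
ds = load_dataset("microsoft/FaceSynthetics", split="train", streaming=True)

for example in ds.take(4):   # stream a handful of samples lazily
    print(example.keys())
```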

cc: @osanseviero

Gaze data release inquiry

Hello,

Thanks for the great work! I just wanted to inquire about the eye-tracking data mentioned in Section 4.5 of the paper. Would you be sharing the segmentation labels, images, landmarks, and gaze angles at all? I think it'd be of huge benefit to the gaze tracking community.
