microsoft / facesynthetics
License: Other
Hi!
I'm trying to download the dataset using the separate download links.
However, I get a 'connection reset by peer' error for some of the links.
Would you additionally provide OneDrive sharing links for the dataset?
The current hair segmentation is useless.
Look at this sample:
https://user-images.githubusercontent.com/8076202/137632951-f00bfee7-69df-4c55-9484-1d5c32be821c.mp4
It looks like you removed hair pixels with < 0.5 opacity.
We need the hair mask as a floating-point value, so the developer can decide whether to include all hair or to clip at a certain opacity.
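If the hair matte were shipped as a floating-point opacity map, downstream code could pick its own cutoff. A minimal sketch of that idea (the function name and the [0, 1] alpha convention are my assumptions, not part of the release):

```python
from typing import Optional

import numpy as np

def clip_hair_mask(alpha: np.ndarray, threshold: Optional[float] = 0.5) -> np.ndarray:
    """Reduce a float hair-opacity map (values in [0, 1]) to a usable mask.

    threshold=None keeps the full floating-point alpha unchanged;
    otherwise pixels at or above the threshold become 1.0, the rest 0.0.
    """
    if threshold is None:
        return alpha.astype(np.float32)
    return (alpha >= threshold).astype(np.float32)

alpha = np.array([[0.1, 0.4], [0.6, 0.9]], dtype=np.float32)
soft = clip_hair_mask(alpha, threshold=None)  # keeps all hair, weighted by opacity
hard = clip_hair_mask(alpha, threshold=0.5)   # reproduces a hard 0.5 cut
```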
Thank you for this dataset. Very cool project.
Did you capture render passes, or are the project files available for re-rendering?
I'd love to be able to experiment with normals, curvature, depth, etc.
Is it so difficult to provide 3D landmarks?
Without 3D we cannot estimate pitch/yaw/roll correctly.
First of all, thank you for this great work. I was wondering if the dataset could be expanded to provide images rendered from multiple camera directions, something similar to CO3D from Facebook. This would enable many additional research possibilities.
Can you please render at 1024 or 2048 resolution?
512 is too small for modern tasks.
Hi!
First, I would also like to express my admiration for this work.
I would also like to add my request for releasing 6D pose information. I think it would be interesting to train 6D pose estimators on your dataset, since it has perfect ground-truth data, in contrast to training on 300W-LP.
Thank you
Michael
Please provide the dense landmarks mentioned in the paper:
679 points rather than just 70.
The face in each image has an expression. Can we get the values for the blendshapes used to create these expressions for each image?
Hello,
Thanks for your great work
I am currently studying this dataset.
I want to know how you get the 256 x 256 bounding box.
Is there a specific method or an additional detection network for it?
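The repository does not document how the 256 x 256 framing was produced, so the following is only a plausible reconstruction: take the 2D landmark bounding box, pad it, square it, and resize the crop. The padding factor and function name are my guesses:

```python
import numpy as np

def square_crop_box(landmarks: np.ndarray, pad: float = 0.3):
    """Hypothetical sketch: square crop box around 2D landmarks of shape (N, 2).

    Returns (x0, y0, side); resizing that square to 256 x 256 would give
    a framing similar to the released images.
    """
    lo = landmarks.min(axis=0)
    hi = landmarks.max(axis=0)
    center = (lo + hi) / 2.0
    side = float((hi - lo).max()) * (1.0 + pad)
    return float(center[0] - side / 2.0), float(center[1] - side / 2.0), side

pts = np.array([[10.0, 10.0], [30.0, 50.0]])
print(square_crop_box(pts))  # (-6.0, 4.0, 52.0)
```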
Hello, it would be very beneficial for my use case to have image/mask pairs for the whole upper body. Is that possible?
Thanks.
Hi! Besides the 70 sparse landmarks, is it possible to get the ~700 dense landmarks?
If you could release the alpha masks shown in the paper, they would be really useful for training background matting models (for example PaddlePaddleSeg PP-Matting).
The segmentation images only contain a binary background class, so applying them as masks can produce a halo effect.
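The halo comes from semi-transparent hair pixels that already contain blended-in background color; a binary mask keeps them at full weight. A tiny single-pixel illustration (all numbers invented):

```python
# Straight-alpha compositing for one semi-transparent hair pixel.
fg, bg, alpha = 0.2, 1.0, 0.4             # dark hair over a white background
rendered = alpha * fg + (1 - alpha) * bg  # value stored in the image: 0.68

# A binary mask keeps the rendered value as-is, so the extracted "hair"
# pixel is far brighter than the true hair color -> visible halo.
binary_extracted = rendered               # 0.68, vs. the true fg of 0.2

# With a float matte you can un-blend the background (where alpha > 0):
recovered_fg = (rendered - (1 - alpha) * bg) / alpha  # back to ~0.2
```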
Hello guys, thank you for the great work.
I've tried numerous times to download the full dataset (32 GB), using different internet connections (an AWS machine as well), but each time the download eventually failed.
Any suggestions? Maybe there is a mirror?
Hi, you mentioned that you register your head models with R3DS. I want to know whether this process is automated or manual. Specifically, how do you choose the corresponding keypoints, which takes quite a lot of time? After wrapping, do you modify the wrapped results further to improve them?
I think the research team and Microsoft have worked really hard on this research project. Sharing the dataset at this format and scale is incredibly generous and a good-faith contribution to the ML research community.
So hats off to the team, this is a really exciting paradigm shift not just for facial detection models!
Anyway, for everyone who keeps asking for them to share their 3D assets - they literally tell you everything you need to do it yourself in their short summary video!
I won't write a tutorial, but here's a very rough gist of a workflow to create your own 3D face models: Sketchfab, then Unity or Unreal Engine to programmatically apply your tracking data. This will help with render performance and dynamic iteration of face variants with your 3D facial feature and accessory assets. There you go, be free, and don't forget to share your 3D assets 😂
Btw, those renders are going to take weeks to months to finish; either that, or you'll max out your credit card on render farms...
Happy to help anyone brave enough to give this a go!
Edit: extra ideas
Thanks for this great work. I wonder whether the process of attaching hair and eyebrows to the head is manual or automated? If it is manual, how long does it take?
Your responses are appreciated.
Regards
Can we get the face bounding box from the segmentation?
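Until the authors answer, one workable approximation is to take the tight box of all non-background pixels in the segmentation map (treating label 0 as background, which is an assumption about the label scheme):

```python
import numpy as np

def bbox_from_segmentation(seg: np.ndarray, background: int = 0):
    """Return (x0, y0, x1, y1) of non-background pixels, or None if empty."""
    ys, xs = np.nonzero(seg != background)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

seg = np.zeros((8, 8), dtype=np.uint8)
seg[2:5, 3:7] = 1                   # a 3x4 blob standing in for "skin"
print(bbox_from_segmentation(seg))  # (3, 2, 6, 4)
```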
Greetings!
Thanks for sharing such inspiring results!
I'm setting up a set of training experiments with the dataset. I found that the landmark annotation format does not map directly to 300W, WFLW, etc.
So do you have a plan to a) provide a converter to the mentioned benchmarks, or b) share 3D landmark annotations?
I would be very grateful.
Hello! Thanks for the amazing work!
Do you plan to release all the other types of annotations? I mean depth, normals, UVs, and all points for the head.
That would be cool!
Hi, thanks for your great work!
I work on face synthesis and noticed this great dataset.
Could you release the corresponding synthetic 3D model for each synthetic image?
It could open up more research topics on the dataset. Thanks!
Hello! Thanks for the amazing work!
Do you plan to release depth annotations for the landmarks?
That would be cool!
Hi guys, thank you for the great work.
It seems that all the data is in one big directory. Is there a standard way to split the data into train / val / test?
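There is no official split that I know of; a common workaround is a deterministic split derived from the file name, so every machine ends up with the same partition. A sketch (the fractions are arbitrary):

```python
import hashlib
from pathlib import Path

def split_for(path: str, val_frac: float = 0.05, test_frac: float = 0.05) -> str:
    """Deterministic train/val/test assignment from the file name alone.

    Hashing the stem makes the split stable across machines and runs,
    unlike random shuffling with an unshared seed.
    """
    stem = Path(path).stem
    h = int(hashlib.md5(stem.encode()).hexdigest(), 16) % 100
    if h < val_frac * 100:
        return "val"
    if h < (val_frac + test_frac) * 100:
        return "test"
    return "train"

print(split_for("000123.png"))
```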
Hi, I want to know how you do data augmentation. And do you train the network from scratch?
Dear all,
I've tried to download the full dataset of 100,000 images (32 GB) several times via the Google Chrome browser and wget, but I cannot download it because of network errors.
I think the compressed dataset is too large to download in one piece.
Thus, I request that you upload the dataset as a split (multi-part) archive.
That would make retries practical, since only the failed part would need to be fetched again.
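Until a split archive exists, the only option is to restart the whole download when it breaks (another issue notes the server does not support resume). A small retry wrapper around whatever fetch function you use (all names here are mine, not from the repo):

```python
import time

def download_with_retries(fetch, attempts: int = 5, backoff: float = 2.0):
    """Call `fetch` (e.g. a closure around urllib.request.urlretrieve) until
    it succeeds. Each retry restarts from scratch; sleeps backoff**i seconds
    between attempts to be gentle on the server."""
    last = None
    for i in range(attempts):
        try:
            return fetch()
        except OSError as exc:
            last = exc
            if i < attempts - 1:
                time.sleep(backoff ** i)
    raise last
```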
Best regards,
Vujadeyoon
Thanks for this great work! Do you also plan to release the NN models you trained for your publication?
I hope I haven't got this wrong, but it seems the ground truth is missing a label for "facewear": 18.
In the heatmap image I generated, facewear is labeled 0, the same as the background.
Is this intended, or is there some other reason for it?
(I have read the paper; it says "we dress our faces in headwear (36 items), facewear (7 items) and eyewear (11 items) including helmets, head scarves, face masks, and eyeglasses".)
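For anyone wanting to verify this on their own copy, the class IDs actually present in a segmentation map can be listed directly (the ID 18 for facewear is taken from the issue above; the toy array below is invented):

```python
import numpy as np

def present_labels(seg: np.ndarray) -> set:
    """Return the set of class IDs that actually occur in a segmentation map."""
    return set(int(v) for v in np.unique(seg))

# Toy map standing in for a real segmentation image, which would be
# loaded with PIL or OpenCV.
seg = np.array([[0, 1, 1], [0, 14, 0]], dtype=np.uint8)
print(present_labels(seg))        # {0, 1, 14}
print(18 in present_labels(seg))  # False
```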
Hi, Thanks for releasing the dataset.
It seems releasing 3D landmarks for the entire dataset is not part of the plan for the near future. But I wonder if it would be possible to share a single reference set of the 70 3D landmarks from the front view, so the community can do pose estimation using methods such as Perspective-n-Point.
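As a sketch of what such a reference set would enable: with hypothetical 3D landmark arrays (nothing below comes from the actual release), even a simple orthographic rotation fit via the Kabsch algorithm recovers head orientation; a full perspective solution would instead use PnP (e.g. OpenCV's solvePnP):

```python
import numpy as np

def kabsch_rotation(ref: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Best-fit rotation R with obs_i ~= R @ ref_i, for (N, 3) point sets."""
    a = ref - ref.mean(axis=0)
    b = obs - obs.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def euler_zyx(r: np.ndarray):
    """Pitch/yaw/roll for R = Rz(roll) @ Ry(yaw) @ Rx(pitch).

    One common convention; axis naming depends on the camera frame.
    """
    pitch = np.arctan2(r[2, 1], r[2, 2])
    yaw = -np.arcsin(np.clip(r[2, 0], -1.0, 1.0))
    roll = np.arctan2(r[1, 0], r[0, 0])
    return pitch, yaw, roll

# Hypothetical frontal reference landmarks and a head turned 0.3 rad in yaw.
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
r_true = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
obs = ref @ r_true.T
print(euler_zyx(kabsch_rotation(ref, obs)))  # ~ (0, 0.3, 0)
```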
Will you release the 703-landmark annotations used in <3D Face Reconstruction with Dense Landmarks>?
Thank you
The download interrupts at ~13 GB.
Also, the server does not support resuming.
For each synthetic image, is there a corresponding synthetic 3D model of the face/head?
There are important files that Microsoft projects should all have that are not present in this repository. A pull request has been opened to add the missing file(s). When the PR is merged, this issue will be closed automatically.
Microsoft teams can learn more about this effort and share feedback within the open source guidance available internally.
Will you publish the code for synthesizing the head model so that I can customize it?
Hello! Thank you for creating and publishing this dataset 😄 Would you be interested in uploading it to the Hugging Face dataset hub? Hosting is free, and I know our users would find this dataset extremely valuable. Beyond helping with discoverability, datasets on Hugging Face can be used with the datasets library (https://github.com/huggingface/datasets), which enables things like streaming and also provides a ton of efficient data-manipulation tools.
We have guides on how to upload datasets if this is something you're interested in, but I'm also happy to help out with this myself!
cc: @osanseviero
Hello,
Thanks for the great work! I just wanted to inquire about the eye-tracking data mentioned in section 4.5 of the paper. Would you be sharing the segmentation labels, images, landmarks, and gaze angles at all? I think it would be of huge benefit to the gaze-tracking community.
Perfect project!
I want to know whether the hair library consists of 3D models?
Looking forward to your reply.
https://github.com/deepinsight/insightface/tree/master/alignment/synthetics
Just for reference.