Comments (23)
Hi, thanks for your interest in our work.
You can either estimate a DensePose semantic map sequence from the target video using detectron2, or render the DensePose semantic map from parametric models such as SMPL and SMPL-X. We are still working on the second pipeline and will update once it's ready.
Because the detectron2 DensePose estimator contains a detection head, the head or legs may be cropped. My suggestion is to center-crop the video and then resize it to 512×512; 25 fps is recommended.
Hope this helps.
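A minimal sketch of that preprocessing, assuming frames arrive as numpy arrays (H × W × 3). The nearest-neighbour resize below is only a stand-in; in practice `cv2.resize(square, (512, 512))` or ffmpeg would be used, and the 25 fps target is set when re-encoding the video:

```python
import numpy as np

def center_crop(frame: np.ndarray) -> np.ndarray:
    """Crop the largest centered square from an H x W x C frame."""
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    return frame[y0:y0 + s, x0:x0 + s]

def resize_nearest(square: np.ndarray, size: int = 512) -> np.ndarray:
    """Toy nearest-neighbour resize; cv2.resize is the usual choice."""
    idx = np.arange(size) * square.shape[0] // size
    return square[idx][:, idx]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy 720p frame
out = resize_nearest(center_crop(frame), 512)
print(out.shape)  # (512, 512, 3)
```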
from magic-animate.
The Japanese engineer peisuke created a Google Colab notebook to generate a DensePose video.
https://colab.research.google.com/drive/1KjPpZun9EtlEMcFDEo93kFPqbL4ZmEOq?usp=sharing
The result is here.
https://x.com/peisuke/status/1732066240741671090?s=46&t=aBgVHjAMy0TFw0zYAE90WQ
Hi, great work for the paper.
I am trying to generate DensePose maps with detectron2 as suggested, and I noticed that the colors I get do not match those of the sample inputs in this repo.
[comparison images: what I get vs. what I would like to get]
Am I missing something, like a color-scheme option for detectron2? I guess feeding my image to the ControlNet will not produce optimal results, as the domain shift is quite significant.
EDIT: passing cmap=cv2.COLORMAP_VIRIDIS to DensePoseResultsFineSegmentationVisualizer's initializer solves this.
Thank you for the introduction. I have uploaded the Colab code here.
https://github.com/peisuke/MagicAnimateHandson
I don't know if anyone is interested in this, but I modified the original DensePose code to make it compilable and provided the compiled models here. You only need torch, torchvision, and opencv to run the compiled model.
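Assuming the compiled models are TorchScript modules (an inference on my part, since that is the usual way to drop the detectron2 dependency at inference time), loading one would follow the generic `torch.jit` pattern. The round trip below uses a dummy module, and the file name is purely illustrative:

```python
import torch

class Dummy(torch.nn.Module):
    """Stand-in for the exported DensePose model."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

# Save and reload a scripted module; loading needs only torch, no detectron2.
scripted = torch.jit.script(Dummy())
scripted.save("model_scripted.pt")  # hypothetical file name
loaded = torch.jit.load("model_scripted.pt")
out = loaded(torch.ones(2))
print(out)  # tensor([2., 2.])
```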
I managed to generate it like this.
Do the colors need to match?
I generated one for everyone, if you want to try :)
police.fast.mp4
You can extract a motion path for free here: pose.rip
@dajes thank you for your nice work! :D
Is the model strictly limited to 512×512, or can it process, say, 768×768 for both the image and the DensePose map?
We tried inference at higher resolutions, but the ability to preserve the reference image decreased slightly. You may still try it; the results should be reasonable.
The major problem is generating the DensePose video; it is really hard.
Could you help me? I have been struggling for about 6 hours. Here is my thread:
facebookresearch/detectron2#5170
@FurkanGozukara I think your image can be improved by setting alpha=1.0 in the visualizer (it looks transparent, and the violet background seems to leak through the pose).
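A toy numpy illustration of why an alpha below 1.0 leaks the background into the rendered pose; the colors are made up, and the formula is standard alpha compositing rather than detectron2's exact code:

```python
import numpy as np

background = np.full((4, 4, 3), [148, 0, 211], dtype=np.float32)  # violet backdrop
pose_color = np.full((4, 4, 3), [0, 128, 0], dtype=np.float32)    # a body-part color

def blend(alpha: float) -> np.ndarray:
    """Standard alpha compositing of the pose layer over the background."""
    return alpha * pose_color + (1.0 - alpha) * background

leaky = blend(0.7)    # background tint leaks through the pose
opaque = blend(1.0)   # pure pose color, as recommended
print(opaque[0, 0])   # pure pose color: [0., 128., 0.]
```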
Damn, I spent a huge amount of time on this :D
I am making a local installer and video generator right now.
Where do I edit that? In the pose_maker.py file?
Finally released the full scripts, including the DensePose maker: #44
Hello, I want to know whether this is an IUV map or an I map.
Hello, I would like to ask whether this image is saved directly, or whether the pkl file is first saved using the dump method before plotting. Thanks!
@BJQ123456 I am using the DensePose show command to render these images, not dump.
Hi! Is there any follow-up on rendering the DensePose semantic map from SMPL-X?
I wrote a script and auto-installer for this: https://www.patreon.com/posts/94098751