I saw the text prompt "Tony Stark wearing blue suit walking forwards"
With this prompt, the result of Text2Mesh should look like this,
but I got this instead, using the default settings of the ninja demo in Text2Mesh.
Can you tell me the experimental settings you used when comparing Text2Mesh and CLIP-Actor with this text prompt?
I tried the demo "Freddie Mercury dancing" on a Tesla P40 (24 GB) GPU
and got a Freddie that looks like this,
but in the paper it looks like this.
Can you tell me how to reproduce the example results from the paper?
Thanks for sharing the project and your help.
Hey there! Congrats on being accepted to ECCV 2022, this is an amazing project! Would there be any interest in publishing a demo on Hugging Face Spaces? I'm sure our users and the broader ML community would love to play around with this. The demo for ICON was really well received, and I imagine setting this up on Hugging Face would be pretty similar. I'm more than happy to guide you through it, and if you're interested I can also put in a request internally to assign a GPU to your demo.
Let me know if this is something that sounds interesting to you!