Comments (13)
I think it is because the units of interval are milliseconds [mSec].
from motionclip.
Thanks.
I have a question about the rendering duration. By default, the fps is 20, and in the paper model's parameters 'num_frames' is 60, so all rendered samples last 3 seconds. What should I do if I want to render a 6-second sample?
I once tried changing 'num_frames' from 60 to 120. Although the total duration increased to 6 seconds, it only added 3 seconds to the rendering: the first three seconds are the same as before, and the last three seconds are stationary.
from motionclip.
This model was trained on motions with a fixed length of 60 frames. Please try retraining it for 120 frames.
from motionclip.
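The arithmetic behind this exchange can be sketched in a few lines. The values below (fps=20, num_frames=60) are taken from the thread itself, not verified against the repo's code:

```python
# Duration of a rendered sample is just num_frames / fps.
fps = 20          # default frames per second (per the thread)
num_frames = 60   # paper model's training length (per the thread)

duration_sec = num_frames / fps
print(duration_sec)  # 3.0

# Doubling num_frames to 120 makes the nominal duration 6 s, but a model
# trained only on fixed 60-frame clips still generates 60 meaningful
# frames, so the second half of the clip appears stationary.
longer = 120 / fps
print(longer)  # 6.0
```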
Thanks.
I have another question about the rendering. Do all motions rendered with the paper model correspond to the rendered frames and text descriptions used during training, as in Figure 4 of the paper?
from motionclip.
Are you asking about the appearance? If so, they all have the same appearance as in Fig. 4:
https://drive.google.com/file/d/1F8VLY4AC2XPaV3DqKZefQJNWn4KY2z_c/view?usp=sharing
from motionclip.
No.
My question is that Fig. 4 shows training-phase frames. Does the inference phase also have such frames?
from motionclip.
At inference, you do text-to-motion - i.e., encode the text and decode the motion - so the rendered frames are unnecessary.
from motionclip.
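The inference pipeline described above (text in, motion out, no frames involved) can be sketched as below. All function names and tensor shapes here are hypothetical stand-ins, not the repo's actual API; the 25x6 per-frame representation is only an assumption borrowed from the related "Why input size is 25 x 6?" issue:

```python
import numpy as np

def encode_text(text: str, dim: int = 512) -> np.ndarray:
    """Stand-in for a CLIP-style text encoder: text -> latent vector."""
    rng = np.random.default_rng(len(text))  # deterministic toy embedding
    z = rng.standard_normal(dim)
    return z / np.linalg.norm(z)  # CLIP embeddings are L2-normalized

def decode_motion(z: np.ndarray, num_frames: int = 60,
                  njoints: int = 25, nfeats: int = 6) -> np.ndarray:
    """Stand-in for the motion decoder: latent -> (frames, joints, feats)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((num_frames, njoints, nfeats))

# Text-to-motion: no rendered frames are needed at any point.
z = encode_text("360 degree left jump")
motion = decode_motion(z)
print(motion.shape)  # (60, 25, 6)
```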
What can I do if I want to see more details about the inference frames?
from motionclip.
Do you mean that you want to render the results with a more elaborate body model such as SMPL, instead of the stick figures?
from motionclip.
No, what I mean is that I want to know how frames are allocated to each action.
For example, if the input text is '360 degree left jump and standing and turning back', there are three actions: jump, stand, and turn back. How are frames allocated among jump, stand, and turn back?
from motionclip.
Got you. So they are not explicitly allocated by the user, but by the model. If you want to interpret the model's decisions, you can try adapting transformer interpretability papers to the motion domain. Anyway, that isn't a trivial task.
from motionclip.
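One way such an interpretability analysis might look is sketched below with synthetic data: given a (frames x text-tokens) attention map, assign each output frame to the action token it attends to most. The attention matrix here is random; a real analysis would capture the decoder's actual attention maps (e.g. via PyTorch forward hooks), which this toy does not do:

```python
import numpy as np

num_frames, num_tokens = 60, 3  # tokens: "jump", "stand", "turn back"

# Synthetic cross-attention map; each row normalized like a softmax output.
rng = np.random.default_rng(1)
attn = rng.random((num_frames, num_tokens))
attn /= attn.sum(axis=1, keepdims=True)

# Assign each frame to its dominant action token and count frames per action.
dominant = attn.argmax(axis=1)
counts = np.bincount(dominant, minlength=num_tokens)
print(counts.sum())  # 60: every frame is assigned to exactly one action
```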
Thanks.
And which papers do you mean by transformer interpretability papers?
from motionclip.
Sorry, I'm not familiar enough with this field.
from motionclip.
Related Issues (20)
- Why input size is 25 x 6? HOT 6
- Visualize failed HOT 2
- Text-to-Motion issue HOT 3
- AMASS Dataset issue
- The generated action is reversed HOT 3
- use smplx add-on HOT 3
- continuous issue about rendering the sample HOT 3
- Similarities computed using motion and text embeddings are incorrect HOT 2
- two extra loss terms: mmd and hessian_penalty HOT 3
- results on HumanML3D dataset HOT 1
- Training for action recognition
- train in num_frames == -2
- Reproducing paper results HOT 1
- question about amass_parser.py HOT 1
- 'amass_30fps_legacy_db.pt' HOT 2
- consistency of the motion encoder and the motion decoder HOT 1
- AttributeError: 'AMASS' object has no attribute 'nfeats' HOT 2
- ffmpeg version HOT 4
- What device did you use to train the model and how long did it cost?