multimodal-future-prediction's People

Contributors: os1a

multimodal-future-prediction's Issues

Question related to make_sampling_loss function

Hi,

It may sound like a stupid question, but I really need help implementing the sampling training process. Currently I can train the sampling network with the default loss settings (mode="epe", top_n=1) successfully. However, according to the paper and the code comments around the make_sampling_loss function, I need to change mode and top_n during training. My question is: how do I update the mode and top_n parameters of the loss function? Should I use placeholders for them and pass the actual values via feed_dict during the epoch loop? I tried this but got errors saying no gradients can be computed for the loss, which I think is caused by the if/else branches inside the loss function. In that case, do I need to split make_sampling_loss into separate functions, one per mode? I would appreciate any guidance!
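For what it's worth, the gradient error is expected: the Python if/else in make_sampling_loss is resolved when the graph is built, so a placeholder cannot select a branch at run time. A common workaround is to build one loss/train op per (mode, top_n) configuration and pick the right op in Python for each training phase. As a minimal NumPy sketch (hypothetical shapes and names) of what the top-n EPE selection computes:

```python
import numpy as np

def ewta_epe_loss(hyps, gt, top_n):
    """Winner-takes-all style loss: average the EPE (L2 distance) of only
    the top_n hypotheses closest to the ground truth."""
    dists = np.linalg.norm(hyps - gt, axis=1)  # EPE per hypothesis
    winners = np.sort(dists)[:top_n]           # keep the top_n best
    return winners.mean()

# Hypothetical 3 hypotheses and a 2-D ground-truth point:
hyps = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
gt = np.array([0.0, 0.0])
loss_top1 = ewta_epe_loss(hyps, gt, 1)  # only the best hypothesis counts
loss_top3 = ewta_epe_loss(hyps, gt, 3)  # all hypotheses count
```

Because top_n only changes which precomputed distances are averaged, one graph op per top_n value (selected per phase in the Python loop) avoids any placeholder inside a Python conditional.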

Question about training the fitting stage

Hi,

I want to ask whether I can use the make_graph function in net.py to build the optimizers for the sampling and fitting training stages. I got cyclic-graph errors when I did this: I first saved the trained sampling model, then loaded the weights and tried to continue training the fitting network. Here is my pipeline:
image
image

The errors happen when execution reaches _, loss_value = session.run([fit_train_op, fit_loss_op]); they are:
image
...
image

If I ignore these errors the loop continues, but loss_value stays almost constant. I think the way I define the graph and optimizer is incorrect. Could you give me some guidance on this kind of problem? Thanks in advance!
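A common cause of a near-constant loss in this kind of two-stage setup is that the fitting optimizer either updates the frozen sampling weights or none of the fitting weights at all. In TF1 graph mode an optimizer can be restricted with minimize(loss, var_list=...). A framework-agnostic sketch of the variable selection (all variable names here are hypothetical):

```python
def select_trainable(all_vars, prefix):
    """Keep only variables whose name starts with `prefix`, so an optimizer
    built over them leaves the other sub-network untouched.
    (TF1 equivalent: optimizer.minimize(fit_loss, var_list=fit_vars).)"""
    return {name: v for name, v in all_vars.items() if name.startswith(prefix)}

# Hypothetical variable names for the two sub-networks:
params = {"sampling/conv1/w": 0.1, "sampling/fc/w": 0.2, "fitting/fc1/w": 0.3}
fit_vars = select_trainable(params, "fitting/")
```

Building both train ops once, up front, against clearly scoped variable sets also avoids re-entering make_graph after restoring weights, which is a typical source of cyclic-graph errors.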

Figure3 on paper

Hello, I have questions about the inputs and labels in Figure 3 of the paper during training.

Can you please elaborate on the training process for EWTA, including the inputs and the ground-truth labels?

Figure 3 mentions 3 ground truths, but what are they?
From my experience training an MDN, my training process was as follows:

  • label: one ground-truth path (X, Y)
  • input: object-detection images (rasterized, shape (16, 25, 300, 300), i.e. (batch, C, H, W))
  • model(input) outputs 3 modes of hypotheses (pi, mean, sigma per mode)
  • MDN loss function

I assume you started with 8 modes in your paper (the 8 black dots), each containing pi and mean (plus sigma for EWTAD).
Am I on the right track?
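That matches my reading too: as I understand the evolving-WTA idea, training starts by penalizing all K hypotheses and progressively halves the number of winners down to 1. A hedged sketch of such a schedule (the exact phase boundaries in the paper may differ):

```python
import numpy as np

def ewta_top_n_schedule(step, total_steps, k=8):
    """Halve the number of penalized hypotheses ('winners') as training
    progresses: k -> k/2 -> ... -> 1. Assumed schedule, not necessarily
    the authors' exact one."""
    n_phases = int(np.log2(k)) + 1            # e.g. k=8 -> phases 8,4,2,1
    phase = min(step * n_phases // total_steps, n_phases - 1)
    return max(k >> phase, 1)

start_n = ewta_top_n_schedule(0, 100)   # early training: all 8 winners
end_n = ewta_top_n_schedule(99, 100)    # late training: single winner
```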

Feasibility for the training script

Hi,

Thanks for sharing your excellent work! I plan to reproduce it, and I wonder whether it would be possible to share your training script. I see there is a test script in the repository, but no training files are included. I would appreciate it very much!

EMD on CPI dataset

Hi!

Could you specify how (or, even better, share the code) you evaluated the EMD metric on the CPI dataset? The only EMD-related code in the repository seems to be the SEMD metric computed for the SDD results.

Thanks in advance! :)
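Not one of the authors, but while waiting for an answer: for 1-D marginals, the EMD between two equal-size sample sets reduces to sorted pairwise distances (scipy.stats.wasserstein_distance covers the general 1-D case). A minimal sketch, which may not match the paper's exact CPI evaluation:

```python
import numpy as np

def emd_1d(a, b):
    """1-D earth mover's distance between two equal-size sample sets:
    sort both and average the pairwise absolute differences. A simplified
    proxy; the paper's CPI evaluation may differ (e.g. a 2-D EMD)."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    return np.abs(a - b).mean()

d = emd_1d([0.0, 2.0], [1.0, 3.0])  # each sample moves a distance of 1
```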

Questions about dataset creation

Hey, thanks for sharing your code! Is the code specifically for generating the SDD dataset public? I'm interested in playing around with different target times. Thanks!

Questions about the optimizers used for training the sampling and fitting neural networks

Hi,

Thanks for your wonderful work! I am trying to replicate the training on the SDD dataset. Could I ask:

  1. Which optimizer did you use for the sampling and fitting networks, and what learning rates did you set?
  2. Is it correct to use hyps and hyps_sigmas from the given test.py to compute the loss for the sampling network? These two tensors are outputs of the make_graph function:
    means, sigmas, mixture_weights, bounded_log_sigmas, hyps, hyps_sigmas, input_blob, output_blob, tmp = session.run(output, feed_dict={x_objects: objects, x_imgs: imgs})

Thank you in advance for your help!

Questions about sampling network training process

Are these lines fc7 and fc8 in Table 8 of the paper?
https://github.com/lmb-freiburg/Multimodal-Future-Prediction/blob/master/encoder.py#L92-L97

My second question: in Steps 2 and 3 below, should I use the NLL loss during training?
Step 1: train the sampling network (fitting network frozen)
Step 2: train the fitting network (sampling network frozen)
Step 3: train both

My third question: how many iterations do you recommend for Steps 2 and 3 if I trained Step 1 for 30,000×5 iterations?

My fourth question: when I trained the fitting network, the loss sometimes became negative. Was it the same for you? Do you recommend the tanh activation function (which appears in the code but not in the paper)?
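Regarding the negative loss: for a continuous distribution, the NLL is -log p(x) of a density, and densities can exceed 1, so a negative fitting loss is not by itself a bug. A sharply fitted Gaussian (small sigma) illustrates this in 1-D:

```python
import numpy as np

def gaussian_nll(x, mean, sigma):
    """Negative log-likelihood of a 1-D Gaussian. Because p(x) is a
    density (not a probability), it can exceed 1, making the NLL negative."""
    return 0.5 * np.log(2 * np.pi * sigma ** 2) + (x - mean) ** 2 / (2 * sigma ** 2)

sharp = gaussian_nll(0.0, 0.0, 0.1)   # sharp fit: density > 1, so NLL < 0
broad = gaussian_nll(0.0, 0.0, 10.0)  # broad fit: NLL > 0
```

So a steadily decreasing (even negative) NLL can simply mean the predicted mixture is getting sharper around the ground truth.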
