
Comments (5)

hongzimao avatar hongzimao commented on July 27, 2024

I think there's randomness in TensorFlow's action sampling too. As a result, each round of training will produce a different action trajectory, and the model will go down a different path. Try fixing a random seed for that as well and see if the results are repeatable.
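
A minimal sketch of the point about seeded sampling, using NumPy as a stand-in for TensorFlow's sampling op (the function and values here are illustrative, not decima-sim's actual code): with a fixed seed the sampled action trajectory is identical across runs; without one, each run diverges.

```python
import numpy as np

def sample_trajectory(logits, seed=None, steps=10):
    # Sample a short trajectory of action indices from a softmax policy.
    # A fixed seed makes the trajectory repeatable across runs.
    rng = np.random.RandomState(seed)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return [int(rng.choice(len(probs), p=probs)) for _ in range(steps)]

logits = np.array([1.0, 2.0, 0.5, 3.0])
run_a = sample_trajectory(logits, seed=42)
run_b = sample_trajectory(logits, seed=42)
assert run_a == run_b  # same seed -> same action trajectory
```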

One other potential problem I remember is some numerical instability in TensorFlow. The training uses multiple agents collecting experience in different processes. Mathematically, the order in which the experiences are assembled to compute the gradient shouldn't matter, but empirically TensorFlow seems to produce slightly different gradients when the experiences are assembled in different orders. You might also want to keep this in mind if you want a repeatable outcome on every run. Hope these help!
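
The order-dependence comes down to floating-point addition not being associative. A tiny, self-contained illustration (toy values, not actual gradients): summing the same three contributions in two different orders gives two different float32 results.

```python
import numpy as np

# Three "gradient contributions" whose sum depends on accumulation order.
g = [np.float32(1e8), np.float32(1.0), np.float32(-1e8)]

in_order = (g[0] + g[1]) + g[2]   # 1e8 + 1 rounds back to 1e8 in float32 -> 0.0
reordered = (g[0] + g[2]) + g[1]  # (1e8 - 1e8) + 1                       -> 1.0

assert in_order != reordered  # float addition is not associative
```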

from decima-sim.

hzx-ctrl avatar hzx-ctrl commented on July 27, 2024

Thanks for your reply!
And since the algorithm picks a different DAG each episode, how can we tell whether Decima has converged?


hongzimao avatar hongzimao commented on July 27, 2024

Look at the reward and entropy signals. You can set a criterion (e.g., the signal flattens out, or stays within x standard deviations computed from the past n data points) for training convergence. This part is similar to standard RL training.
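
One way to sketch that criterion (the function name, window size, and tolerance are assumptions, not part of decima-sim): declare convergence when the standard deviation of the last n reward or entropy values is small relative to their mean, i.e., the curve has flattened out.

```python
import statistics

def has_converged(signal, window=50, rel_tol=0.01):
    # Converged when the last `window` points vary by less than
    # rel_tol of the signal's mean magnitude (a flattened-out curve).
    if len(signal) < window:
        return False
    recent = signal[-window:]
    mean = statistics.fmean(recent)
    return statistics.pstdev(recent) <= rel_tol * max(abs(mean), 1e-8)

flat = [10.0 + 0.001 * (i % 2) for i in range(100)]   # plateaued reward
rising = [0.1 * i for i in range(100)]                # still improving
assert has_converged(flat)
assert not has_converged(rising)
```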


hzx-ctrl avatar hzx-ctrl commented on July 27, 2024

Thank you very much, and sorry to bother you again. I trained with --num_init_dags 5 --num_stream_dags 10, and after several thousand episodes I found the output of the policy network is so large that valid_mask doesn't work at all, which leads to taking illegal actions. Could you please tell me whether this is normal, and what might cause it? Thanks!


hongzimao avatar hongzimao commented on July 27, 2024

hmmm I don't recall valid_mask failing. If the policy network can output something, valid_mask has the same shape. I don't quite get what you meant by "policy network is so large" — are the numeric values too large? That might lead to NaN when a very large number (effectively treated as Inf) multiplies 0 at valid_mask. In another context I have seen behavior like this; it's usually because the agent selected an invalid action in the previous step. Because that action was masked with 0, the gradient descent produces Inf for some parameters, and then things blow up. But I don't recall seeing this in this training code.
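
The Inf × 0 → NaN failure mode described above can be reproduced in a few lines with NumPy (a stand-in for the TensorFlow graph; the values are illustrative): once a logit overflows float32 to Inf, multiplying it by a 0 mask entry yields NaN instead of suppressing the invalid action.

```python
import numpy as np

# 1e39 exceeds the float32 range (~3.4e38), so it becomes inf on conversion.
logits = np.array([2.0, 1e39, -3.0], dtype=np.float32)
valid_mask = np.array([1.0, 0.0, 1.0], dtype=np.float32)  # action 1 invalid

masked = logits * valid_mask  # inf * 0 -> nan, so the mask no longer works

assert np.isinf(logits[1])
assert np.isnan(masked[1])  # NaN poisons any downstream softmax
```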

Here's a pre-trained model: #12. You might want to train with the same parameters and compare against that model.

