
Comments (3)

awjuliani commented on August 18, 2024

Hi Marsan,

I agree that this isn't necessarily the best way to handle it. What it is doing is allowing us to collect the gradients outside of the graph over time, and then apply them to the correct variables. Because the exact gradient values have already been computed, and there are only two trainable variables, using zip(batchGrad, tvars) lets TensorFlow know which variables should receive which gradients. Actually applying them is just a matter of multiplying by the learning rate and adding them to the weights directly, so no extra computation needs to be done using the graph logic. I hope that makes sense.
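The accumulate-then-apply pattern described above can be sketched in plain NumPy (names such as `batch_grad` and `tvars` mirror the `batchGrad`/`tvars` from the discussion but are illustrative stand-ins, not the repo's actual code; the per-episode gradients here are fake constants just to show the mechanics):

```python
import numpy as np

learning_rate = 0.1

# Two "trainable variables", as in the two-variable network discussed.
tvars = [np.ones((3, 2)), np.zeros(2)]

# Accumulate gradients over several episodes outside the graph.
batch_grad = [np.zeros_like(v) for v in tvars]
for episode in range(4):
    # Stand-in per-episode gradients (0.5 everywhere, purely illustrative).
    episode_grads = [np.full_like(v, 0.5) for v in tvars]
    for acc, g in zip(batch_grad, episode_grads):
        acc += g

# Applying them is just "multiply by the learning rate and add to the
# weights"; zip pairs each accumulated gradient with its variable.
for grad, var in zip(batch_grad, tvars):
    var -= learning_rate * grad  # gradient-descent step
```

In the TF1 version this pairing is what `optimizer.apply_gradients(zip(batchGrad, tvars))` does: the zip tells the optimizer which gradient belongs to which variable.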

One could alternatively collect the experience traces over multiple episodes outside the graph, and do the gradient computation all at once within the graph. I may end up rewriting this algorithm, as it was one of the earliest ones, and aspects of it such as this are less than ideal.

from deeprl-agents.

Marsan-Ma-zz commented on August 18, 2024

Hi awjuliani,

Thanks for your explanation!
I understand you want to collect the gradients and apply them later. I want to do the same, but I can't figure out how to pass TensorFlow's check that apply_gradients must be connected to some trainable tf.Variable.

Here I want to do the same as you, but I need a workaround like the following to make TensorFlow "think" that apply_gradients is connected to some tf.Variable in the graph.

# control_flow_ops.cond is the internal name; tf.cond is the public API.
self.losses = control_flow_ops.cond(
    self.en_ext_losses,
    lambda: self.ext_losses,
    lambda: self.losses,
)

The original self.losses is evaluated from the graph, so TensorFlow passes the check. self.ext_losses is the out-of-graph loss, which is injected later, once the reinforcement-learning episodes finish.

That's an ugly workaround, and that's why I am trying to figure out why yours works.


awjuliani commented on August 18, 2024

I am actually unsure how to compute a loss from outside the graph in your case. My suggestion would be to find a way to tie it into the graph, if possible. The problem is that TensorFlow has no way of dealing with such a loss: it cannot assign credit for it to any variables in the graph, because the loss didn't come from the graph itself or pass through a differentiable function in the graph.
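One standard way to "tie it into the graph" is to feed the externally computed quantity in as data (a placeholder in TF1 terms) and let it weight a differentiable in-graph term, rather than injecting a finished loss value. A minimal NumPy sketch of this REINFORCE-style pattern, with a hypothetical tiny linear policy and a made-up return value (none of this is the repo's code):

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax.
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

weights = np.zeros((4, 3))          # tiny linear policy: 4 features -> 3 actions
state = np.array([1.0, 0.0, 0.5, 0.0])
action = 2
external_return = 1.7               # computed outside the graph (e.g. by the env)

logits = state @ weights
logp = log_softmax(logits)

# loss = -return * log pi(action | state): the return enters only as a
# constant factor, so gradients flow through the differentiable log-prob.
probs = np.exp(logp)
dlogits = probs.copy()
dlogits[action] -= 1.0              # d(-logp[action]) / d logits
grad_w = external_return * np.outer(state, dlogits)
```

Because the external number multiplies an in-graph differentiable term, credit assignment to `weights` is well defined, which is exactly what a raw injected loss value cannot provide.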

I hope you can figure out a solution, and wish you luck.

