
Comments (9)

IpadLi commented on May 21, 2024

Hi,

It seems that the encoder in the actor is never updated, either by a loss or by a soft update (EMA), except at initialisation.

# tie encoders between actor and critic, and CURL and critic
self.actor.encoder.copy_conv_weights_from(self.critic.encoder)

Only the encoder in the critic/critic_target is updated, by the critic_loss and the CPC loss.

Is there any insight into why the encoder in the actor is not updated?

from curl.

LostXine commented on May 21, 2024

Hi @wassname
Thank you for pointing it out, and I believe you are correct. Based on my very limited observations, optimizing the encoder only once per call didn't significantly affect performance on cheetah-run. But it would be very helpful if someone could test it in more environments.
Thanks.


tejassp2002 commented on May 21, 2024

Hi @IpadLi, I wondered about this a while back and emailed @MishaLaskin about it.
This is the question I asked:

Why are you not updating the shared encoder with the actor loss? Is there any specific reason for this?

@MishaLaskin 's reply:

I found that doing this resulted in more stable learning from pixels but it is also an empirical design choice and can be changed


IpadLi commented on May 21, 2024

Hi @tejassp2002, thanks a lot.


Sobbbbbber commented on May 21, 2024

Hi, can we merge the update_critic and update_cpc functions by adding critic_loss and cpc_loss together?
That way, we would only need two optimizers.
Is it feasible?

self.cpc_optimizer = torch.optim.Adam([self.CURL.W], lr=encoder_lr)
self.critic_optimizer = torch.optim.Adam(self.critic.parameters(), lr=critic_lr, betas=(critic_beta, 0.999))

# Clear stale gradients, backpropagate the combined loss once,
# then step each optimizer on its own parameter group.
self.critic_optimizer.zero_grad()
self.cpc_optimizer.zero_grad()
loss = critic_loss + cpc_loss
loss.backward()
self.critic_optimizer.step()
self.cpc_optimizer.step()
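For what it's worth, a combined update like this does run: a single backward pass populates gradients for both parameter groups, and each optimizer then steps its own parameters. Here is a minimal self-contained sketch with toy stand-ins for the critic, the CURL bilinear matrix W, and both losses (none of these modules are the repo's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the critic network and the CURL bilinear matrix W.
critic = nn.Linear(8, 1)
W = nn.Parameter(torch.randn(8, 8))

critic_optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3, betas=(0.9, 0.999))
cpc_optimizer = torch.optim.Adam([W], lr=1e-3)

obs = torch.randn(4, 8)
target_q = torch.randn(4, 1)
anchor, positive = torch.randn(4, 8), torch.randn(4, 8)

# Toy losses: an MSE critic loss and a CURL-style bilinear contrastive loss.
critic_loss = ((critic(obs) - target_q) ** 2).mean()
logits = anchor @ W @ positive.t()  # (4, 4) similarity matrix
cpc_loss = nn.functional.cross_entropy(logits, torch.arange(4))

# One backward pass fills gradients for both parameter groups;
# each optimizer then steps only its own parameters.
critic_optimizer.zero_grad()
cpc_optimizer.zero_grad()
(critic_loss + cpc_loss).backward()
critic_optimizer.step()
cpc_optimizer.step()
```

Whether the combined update matches the original two-phase schedule numerically is a separate question, since the original code updates the encoder twice per call.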


KarlXing commented on May 21, 2024

Hi,

It seems that the encoder in the actor is never updated, either by a loss or by a soft update (EMA), except at initialisation.

# tie encoders between actor and critic, and CURL and critic
self.actor.encoder.copy_conv_weights_from(self.critic.encoder)

Only the encoder in the critic/critic_target is updated, by the critic_loss and the CPC loss.

Is there any insight into why the encoder in the actor is not updated?

The work on SAC+AE (https://arxiv.org/pdf/1910.01741.pdf) suggests using the gradient from the critic only (not the actor) to update the encoder. Since this repo is based on the SAC+AE implementation (as stated in the README), I think CURL just follows it.
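The SAC+AE trick can be sketched as follows: during the actor update, the encoder's output is detached, so no actor gradient reaches the convolutional encoder and only the critic (and CPC) losses train it. This is a hypothetical minimal example, not the repo's exact code:

```python
import torch
import torch.nn as nn

encoder = nn.Linear(16, 8)    # stand-in for the conv encoder
actor_head = nn.Linear(8, 2)  # stand-in for the actor's MLP head

obs = torch.randn(4, 16)

# Actor update: detach the embedding so the actor loss
# cannot backpropagate into the encoder.
emb = encoder(obs).detach()
actor_loss = actor_head(emb).pow(2).mean()  # toy actor loss
actor_loss.backward()

# encoder.weight.grad stays None: only the MLP head received gradients.
```

In the critic update, the embedding would be used without `.detach()`, so the critic loss does flow into the encoder.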


yufeiwang63 commented on May 21, 2024

Hi @IpadLi, I wondered about this a while back and emailed @MishaLaskin about it. This is the question I asked:

Why are you not updating the shared encoder with the actor loss? Is there any specific reason for this?

@MishaLaskin 's reply:

I found that doing this resulted in more stable learning from pixels but it is also an empirical design choice and can be changed

Hi, thanks for posting the reply from the author!
Yet I don't think the reply answers the question -- even if we don't update the encoder with the actor loss, why shouldn't the actor encoder weights be copied from the critic encoder weights after each update to the critic encoder with the critic loss and the CPC loss?
It is a bit strange to me that two different encoders are used for the actor and the critic, whereas the paper seems to indicate there is only one shared encoder. Moreover, the weights of the actor encoder are never updated after initialization, so essentially only the MLP part of the actor is being trained/updated.

Update -- Sorry, the tie_weight function actually makes the actor encoder and the critic encoder share the same weights.
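For illustration, this kind of tying can be done by rebinding the actor encoder's conv parameters to the critic encoder's Parameter objects, so any update to one is immediately visible in the other. A hypothetical sketch of what a copy_conv_weights_from-style function may do (not the repo's exact code):

```python
import torch
import torch.nn as nn

critic_enc = nn.Conv2d(3, 8, kernel_size=3)
actor_enc = nn.Conv2d(3, 8, kernel_size=3)

# Tie: rebind the actor's weight/bias to the critic's Parameter objects.
# After this, both modules hold the *same* tensors, not copies.
actor_enc.weight = critic_enc.weight
actor_enc.bias = critic_enc.bias

# Any in-place update to the critic encoder (e.g. an optimizer step)
# is therefore immediately reflected in the actor encoder.
with torch.no_grad():
    critic_enc.weight.add_(1.0)
```

Because the tensors are shared by identity, no explicit re-copying after each critic update is needed.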


RayYoh commented on May 21, 2024

Hi @IpadLi, I wondered about this a while back and emailed @MishaLaskin about it. This is the question I asked:

Why are you not updating the shared encoder with the actor loss? Is there any specific reason for this?

@MishaLaskin 's reply:

I found that doing this resulted in more stable learning from pixels but it is also an empirical design choice and can be changed

Hi, thanks for posting the reply from the author! Yet I don't think the reply answers the question -- even if we don't update the encoder with the actor loss, why shouldn't the actor encoder weights be copied from the critic encoder weights after each update to the critic encoder with the critic loss and the CPC loss? It is a bit strange to me that two different encoders are used for the actor and the critic, whereas the paper seems to indicate there is only one shared encoder. Moreover, the weights of the actor encoder are never updated after initialization, so essentially only the MLP part of the actor is being trained/updated.

Update -- Sorry, the tie_weight function actually makes the actor encoder and the critic encoder share the same weights.

Hello! Does it mean the weights of the actor encoder are still the same as the critic encoder's after the critic encoder is updated?

