Comments (4)

guozhiyao commented on May 30, 2024

I was training a face recognition model with SAM (the backbone is a ResNet and the loss is ArcFace). First, the backbone loads pretrained weights; then I train the classifier while freezing the backbone; finally, I train the whole model with SAM (see the sketch after the list below). But something weird happens:

  1. When I freeze the BN running statistics as you recommend, which is right, the first loss is larger than the second loss, and the features produced by the backbone blow up (to around 10^9) in the second iteration. The model cannot converge.
  2. When I update the BN running statistics, which is wrong, the first loss is smaller than the second loss and the backbone features stay in a normal range, but the model still cannot converge.

Hope to get your reply, thanks.
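
For concreteness, here is a minimal sketch of the staged setup described above. The FaceModel layers, attribute names, checkpoint path, and hyperparameters are hypothetical stand-ins; only the SAM wrapper usage follows this repository's README:

import torch
from torch import nn
from sam import SAM  # the SAM wrapper from this repository

class FaceModel(nn.Module):
    # hypothetical stand-in for the real ResNet + ArcFace architecture
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, 1000)

    def forward(self, x):
        return self.head(self.backbone(x))

model = FaceModel()

# stage 1: load pretrained backbone weights (path is hypothetical)
model.backbone.load_state_dict(torch.load("backbone_pretrained.pth"))

# stage 2: train only the classifier head with the backbone frozen
for p in model.backbone.parameters():
    p.requires_grad = False
head_optimizer = torch.optim.SGD(model.head.parameters(), lr=0.1, momentum=0.9)

# stage 3: unfreeze everything and fine-tune the whole model with SAM
for p in model.parameters():
    p.requires_grad = True
optimizer = SAM(model.parameters(), torch.optim.SGD, rho=0.05, lr=0.01, momentum=0.9)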

davda54 commented on May 30, 2024

Most likely, the BN freezing won't make a significant difference, so I would advise you not to focus on it until you fix the convergence issue. I would expect the two losses to be of similar magnitude, but I don't see a problem if one is slightly larger than the other.

Does your model converge with a standard optimizer? Have you tried different hyperparameters?

guozhiyao commented on May 30, 2024

My model converges with a standard optimizer, but not with SAM.
I tested a few times, and the loss became a little more normal. Here is my opinion:
When we set model.eval(), BN uses running_mean and running_var to normalize the input instead of the statistics of the current batch, which makes the output of the second forward pass differ from the first one. So I changed the process as follows:

# first forward-backward pass (BN running statistics are updated as usual)
loss_fn(model(input), label).backward()
optimizer.first_step(zero_grad=True)

# second forward-backward pass
bn_bak = save_bn_running(model)  # snapshot running_mean and running_var of every BN layer
model.train()  # keep BN on current-batch statistics, consistent with the first pass
loss_fn(model(input), label).backward()
optimizer.second_step(zero_grad=True)
reset_bn_running(model, bn_bak)  # undo the extra running-stats update from the second pass

Before the second forward pass, I save the running_mean and running_var of the BN layers and set the model to train mode, so BN uses the current batch's statistics, consistent with the first forward pass. After the second step, I restore the saved running_mean and running_var, so the BN statistics are not modified twice by the same batch.
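
The save/restore helpers are not shown in the snippet above; here is a minimal sketch of what they might look like in PyTorch (my assumption, not code from this thread):

import torch
from torch.nn.modules.batchnorm import _BatchNorm

def save_bn_running(model):
    # snapshot running_mean, running_var and num_batches_tracked of every BN layer
    backup = {}
    for name, module in model.named_modules():
        if isinstance(module, _BatchNorm) and module.running_mean is not None:
            backup[name] = (module.running_mean.clone(),
                            module.running_var.clone(),
                            module.num_batches_tracked.clone())
    return backup

def reset_bn_running(model, backup):
    # restore the snapshot taken before the second forward pass
    for name, module in model.named_modules():
        if isinstance(module, _BatchNorm) and name in backup:
            mean, var, tracked = backup[name]
            module.running_mean.copy_(mean)
            module.running_var.copy_(var)
            module.num_batches_tracked.copy_(tracked)

An alternative with less bookkeeping is to set each BN layer's momentum to 0 before the second forward pass and restore it afterwards: with momentum 0, PyTorch leaves the running statistics unchanged, which, as far as I can tell, is also how the disable_running_stats/enable_running_stats helpers in this repository's example code handle it.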

stale commented on May 30, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
