
Comments (5)

rtqichen commented on July 2, 2024

This is a problem of the ODE becoming stiff, essentially acting too erratic in a region and the step size becomes so close to zero that no progress can be made in the solver. We were able to avoid this with regularization such as weight decay and using "nice" activation functions, but YMMV. Another option is to use a fixed step size solver to get an approximate solution.
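To see concretely why stiffness drives the step size toward zero, here is a minimal pure-Python illustration (not torchdiffeq code; `euler_adaptive`, `mild`, and `stiff` are hypothetical names, and it uses explicit Euler with step doubling rather than dopri5's embedded pair, though the control principle is the same):

```python
import math

def euler_adaptive(f, y0, t0, t1, tol=1e-3, h0=0.1, hmin=1e-10):
    """Explicit Euler with step-doubling error control.

    Returns (y_final, accepted_steps). The step size h is shrunk whenever
    the estimated local error exceeds `tol`, which is the same control
    principle an adaptive solver applies internally.
    """
    t, y, h, steps = t0, y0, h0, 0
    while t < t1:
        h = min(h, t1 - t)
        y_big = y + h * f(t, y)                        # one full step
        y_mid = y + (h / 2) * f(t, y)                  # two half steps
        y_small = y_mid + (h / 2) * f(t + h / 2, y_mid)
        err = abs(y_small - y_big)                     # local error estimate
        if err <= tol or h <= hmin:                    # accept the step
            t += h
            y = y_small
            steps += 1
        if err > 0:                                    # standard step-size update
            h = max(hmin, min(2 * h, 0.9 * h * (tol / err) ** 0.5))
    return y, steps

# Same slowly varying solution, wildly different stiffness:
mild = lambda t, y: 1.0 * (math.cos(t) - y)
stiff = lambda t, y: 1e4 * (math.cos(t) - y)

y_mild, n_mild = euler_adaptive(mild, 1.0, 0.0, 1.0)
y_stiff, n_stiff = euler_adaptive(stiff, 1.0, 0.0, 1.0)
print(n_mild, n_stiff)  # the stiff problem needs far more, far smaller steps
```

On the stiff problem, the stability limit of the explicit method forces the controller to keep h tiny even though the solution itself varies slowly; push the stiffness further and the accepted h can shrink until the solver makes no progress.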

from torchdiffeq.

HelenZhuE commented on July 2, 2024

Unfortunately I got the same error. How can I use a fixed-step solver to get an approximate solution? Could you provide me some example code?


simitii commented on July 2, 2024

@HelenZhuE
just pass the method as a parameter:
odeint(func, y0, t, method="fixed_adams")

Adaptive-step:

  • dopri5: Runge-Kutta 4(5) of Dormand-Prince [default].
  • adams: adaptive-order implicit Adams.

Fixed-step:

  • euler: Euler method.
  • midpoint: midpoint method.
  • rk4: fourth-order Runge-Kutta with the 3/8 rule.
  • explicit_adams: explicit Adams.
  • fixed_adams: implicit Adams.
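For intuition, here is a minimal pure-Python sketch of what a fixed-step solver does, using the 3/8 Runge-Kutta rule mentioned above for rk4 (the function names `rk4_38_step` and `odeint_fixed` are hypothetical, not torchdiffeq APIs):

```python
import math

def rk4_38_step(f, t, y, h):
    """One step of the fourth-order Runge-Kutta 3/8 rule."""
    k1 = f(t, y)
    k2 = f(t + h / 3, y + h * k1 / 3)
    k3 = f(t + 2 * h / 3, y + h * (k2 - k1 / 3))
    k4 = f(t + h, y + h * (k1 - k2 + k3))
    return y + h * (k1 + 3 * k2 + 3 * k3 + k4) / 8

def odeint_fixed(f, y0, t0, t1, n_steps):
    """Fixed-step integration: n_steps equal steps and no error control,
    so it never aborts with an 'underflow in dt' the way an adaptive
    solver can on a stiff problem (the trade-off is unmonitored error)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = rk4_38_step(f, t, y, h)
        t += h
    return y

# Example: dy/dt = -2y with y(0) = 1, so y(1) should be exp(-2)
y1 = odeint_fixed(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0, 100)
```

With torchdiffeq itself the equivalent is simply odeint(func, y0, t, method="rk4"); the sketch just shows why the fixed-step route trades the adaptive solver's error guarantee for guaranteed progress.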


jandono commented on July 2, 2024

@rtqichen I am experiencing the same issue. Can you be more precise about which activation functions are considered nice, and why? I have tried ReLU and Softplus and I get the error with both.

Swapping the solver to 'rk4' solved the issue for me; however, I am not sure how much performance I might lose because of this.

Additionally, it might be worth noting that even with the NaNs, the model was still working and learning. I only noticed the problem and the NaNs once I ran within the torch.autograd.detect_anomaly() scope. Can anyone explain this?


densechen commented on July 2, 2024

@simitii @jandono I got the same error; however, I found that the NaN occurs during the forward computation, which then causes this error. I have tried many things, including different nonlinearities and different solvers, but all failed.
Can you give me some advice on how to debug this?
Best,
Chen
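One debugging approach (a sketch, not a torchdiffeq feature): wrap your dynamics function so that the first NaN in its output raises immediately, recording the time at which it appeared. `nan_probe` is a hypothetical helper, shown here on scalar dynamics; for tensors you would use a NaN check such as torch.isnan instead of math.isnan:

```python
import math

def nan_probe(f):
    """Wrap an ODE right-hand side so the first NaN in its output raises
    at once, with the time t at which it appeared."""
    def wrapped(t, y):
        out = f(t, y)
        values = out if isinstance(out, (list, tuple)) else [out]
        if any(math.isnan(v) for v in values):
            raise FloatingPointError(f"NaN in dynamics output at t={t}")
        return out
    return wrapped

# Toy dynamics that go bad past t = 0.5 (a stand-in for a learned func):
def dynamics(t, y):
    return float("nan") if t > 0.5 else -y

probe = nan_probe(dynamics)
probe(0.1, 1.0)        # fine, returns -1.0
try:
    probe(0.6, 1.0)    # raises FloatingPointError
except FloatingPointError as e:
    print(e)
```

Passing the wrapped function to the solver pinpoints where in (t, y) the forward pass first produces a NaN, which narrows down whether the model blows up or the solver steps into a bad region.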

