Comments (4)
This is my proposed fix for the initial step-size selection. Could you look at it and check whether it makes sense?
    import torch
    # _is_iterable and _norm are the helpers defined in torchdiffeq's misc.py.

    def _select_initial_step(fun, t0, y0, order, rtol, atol, f0=None):
        # Empirical initial-step heuristic in the style of Hairer, Norsett,
        # and Wanner, "Solving Ordinary Differential Equations I", Sec. II.4.
        t0 = t0.to(y0[0])
        if f0 is None:
            f0 = fun(t0, y0)

        # Broadcast scalar tolerances across the tuple of state components.
        rtol = rtol if _is_iterable(rtol) else [rtol] * len(y0)
        atol = atol if _is_iterable(atol) else [atol] * len(y0)

        scale = tuple(atol_ + torch.abs(y0_) * rtol_
                      for y0_, atol_, rtol_ in zip(y0, atol, rtol))

        # Treat the whole tuple as one flat vector when taking norms.
        d0 = _norm(tuple(y0_ / scale_ for y0_, scale_ in zip(y0, scale)))
        d1 = _norm(tuple(f0_ / scale_ for f0_, scale_ in zip(f0, scale)))

        if d0 < 1e-5 or d1 < 1e-5:
            h0 = torch.tensor(1e-6).to(t0)
        else:
            h0 = 0.01 * (d0 / d1)

        # Take one explicit Euler step to estimate the second derivative.
        y1 = tuple(y0_ + h0 * f0_ for y0_, f0_ in zip(y0, f0))
        f1 = fun(t0 + h0, y1)
        d2 = _norm(tuple((f1_ - f0_) / scale_
                         for f1_, f0_, scale_ in zip(f1, f0, scale))) / h0

        if d1 <= 1e-15 and d2 <= 1e-15:
            h1 = torch.max(torch.tensor(1e-6).to(h0), h0 * 1e-3)
        else:
            h1 = (0.01 / max(d1, d2)) ** (1. / float(order + 1))

        return torch.min(100 * h0, h1)
from torchdiffeq.
Is the only change the line "h0 = 0.01 * (d0/d1)"?
d0 and d1 are tuples, so I don't think this would work. You could try changing the max to a min, which actually makes more sense, since we would want to be pessimistic.
No, the _norm function (if you look at its definition in misc.py) also works on tuples. Instead of computing four initial step sizes, I am treating the four-element tuple as one single vector, following the reference you gave. It works for my case (at least in the sense that I don't see an error message), but I'd appreciate your expertise in double-checking it. If you can look at it (perhaps diff it against the original version), that would be great! Thanks!
Ah, I missed the _norm(tuple()) modification.
It might be that one element of the tuple is sometimes stiffer than another and requires more careful consideration. This is why the error estimates and step-size control are done per element of the tuple, as different elements may have different interpretations. If you're fine with treating the tuple as if it were one giant concatenated tensor, then this works fine.
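To illustrate the trade-off (with made-up numbers, not taken from the library): per-element step selection with a pessimistic min reacts to the stiffest component, while the concatenated-vector view averages over all components:

```python
import math

def rms(v):
    # RMS norm of a flat list of floats.
    return math.sqrt(sum(x * x for x in v) / len(v))

# Two state components with very different dynamics (hypothetical numbers):
y0 = ([1.0, 1.0], [1.0, 1.0])
f0 = ([0.1, 0.1], [100.0, 100.0])   # second component is much "faster"

# Per-element candidate step sizes h0 = 0.01 * d0 / d1, then be pessimistic:
per_elem = [0.01 * rms(y) / rms(f) for y, f in zip(y0, f0)]
h_pessimistic = min(per_elem)        # driven entirely by the stiff component

# Treating the tuple as one concatenated vector:
flat_y = [x for part in y0 for x in part]
flat_f = [x for part in f0 for x in part]
h_concat = 0.01 * rms(flat_y) / rms(flat_f)
```

Here the concatenated estimate lands near the pessimistic one because the fast component dominates the RMS, but in general the two can differ, which is the maintainer's point about per-element control.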
As for slight modifications to this function, I wouldn't worry too much. The main purpose of the initial step-size selection is to find a large step size that still passes the error control. So the worst case is that it returns a step size that is too large, and the algorithm needs to spend extra time redoing steps with smaller step sizes.
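A sketch of why an over-optimistic initial step is recoverable: a standard accept/reject controller (the name and constants here are illustrative, not torchdiffeq's internals) simply shrinks the step whenever the scaled error estimate exceeds 1, so a too-large h0 costs only a few rejected trial steps:

```python
def controller_step(err_norm, h, order, safety=0.9):
    # Classic step-size controller: factor = safety * err_norm**(-1/(order+1)),
    # clipped to avoid extreme jumps.  err_norm > 1 means the step is
    # rejected and retried with the smaller h.
    err_norm = max(err_norm, 1e-10)          # guard against err_norm == 0
    factor = safety * err_norm ** (-1.0 / (order + 1))
    if err_norm > 1.0:
        return False, h * max(0.1, factor)   # reject: shrink and retry
    return True, h * min(10.0, factor)       # accept: possibly grow
```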
Thanks for your report and suggestion. I'll keep the other issue open but close this one.