Comments (6)
Yes, this is on a TODO for now. https://github.com/rtqichen/torchdiffeq/blob/master/torchdiffeq/_impl/adjoint.py#L31
I'd need to get a bit finicky with PyTorch's `Function`.
For now, I think the non-adjoint version should support higher-order autodiff.
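To illustrate the point about the non-adjoint path: backpropagating with `create_graph=True` builds a graph through the solver's internal operations, so a second `grad` call works. A minimal sketch, using a hand-rolled fixed-step Euler solver as a stand-in for `odeint` (the ODE y' = y² has the closed-form solution y(t) = y0 / (1 - y0·t), so both derivatives w.r.t. y0 can be checked analytically):

```python
import torch

def euler_odeint(func, y0, t0, t1, steps=1000):
    # Fixed-step Euler integration; a stand-in for torchdiffeq.odeint.
    h = (t1 - t0) / steps
    y = y0
    for _ in range(steps):
        y = y + h * func(y)
    return y

y0 = torch.tensor(0.5, requires_grad=True)
y1 = euler_odeint(lambda y: y * y, y0, 0.0, 1.0)

# First-order gradient; create_graph=True retains the graph for double backward.
(dy,) = torch.autograd.grad(y1, y0, create_graph=True)
# Second-order gradient of the solution w.r.t. the initial condition.
(d2y,) = torch.autograd.grad(dy, y0)

# Analytically, at t=1 with y0=0.5: dy/dy0 = 1/(1-0.5)^2 = 4 and
# d^2y/dy0^2 = 2/(1-0.5)^3 = 16, up to Euler discretization error.
print(float(dy), float(d2y))
```

The same pattern should work through the plain `odeint` (the adjoint version is the one that lacks this support, per the comment above).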
from torchdiffeq.
Hi @rtqichen, thank you for your work! How can we currently take higher-order gradients of the integration outputs with respect to the inputs (e.g., the initial conditions)?
from torchdiffeq.
Hello @rtqichen.
Has there been any progress on the higher-order autodiff feature for the adjoint method?
from torchdiffeq.
No sorry, zero progress has been made since 2019. If anyone wants to submit a PR for this, I can approve it.
from torchdiffeq.
Thanks for your comment, Ricky.
Are there any action items that should be taken?
Eyal
from torchdiffeq.
@rtqichen Thanks for your work, but GPU memory leaks when I use
import torch.autograd as ag.
My code is as follows:
for itr in range(1, 5):
    print('iteration:', itr)
    print("start_memory_allocated(MB) {}".format(torch.cuda.memory_allocated() / 1048576))
    optimizer.zero_grad()
    batch_y0, batch_t, batch_y = get_batch()
    batch_y0.requires_grad = True
    print('batch_y0.shape:', batch_y0.shape)
    pred_y = odeint(func, batch_y0, batch_t).to(device)
    print("end_memory_allocated(MB) {}".format(torch.cuda.memory_allocated() / 1048576))
My results are as follows:
iteration: 1
start_memory_allocated(MB) 0.0146484375
batch_y0.shape: torch.Size([20, 1, 2])
end_memory_allocated(MB) 0.0244140625
iteration: 2
start_memory_allocated(MB) 0.0244140625
batch_y0.shape: torch.Size([20, 1, 2])
end_memory_allocated(MB) 0.025390625
iteration: 3
start_memory_allocated(MB) 0.025390625
batch_y0.shape: torch.Size([20, 1, 2])
end_memory_allocated(MB) 0.0263671875
iteration: 4
start_memory_allocated(MB) 0.0263671875
batch_y0.shape: torch.Size([20, 1, 2])
end_memory_allocated(MB) 0.02734375
The allocated memory grows every iteration and eventually causes an out-of-memory error! Hope to get your reply as soon as possible.
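For comparison, here is a minimal self-contained loop in which each iteration calls `backward()` and keeps no references to previous iterations' graphs, so allocated memory stays flat after warm-up. Everything here is a hypothetical stand-in: `func` is a toy linear dynamics module, `get_batch` fabricates data, and `euler_odeint` replaces `torchdiffeq.odeint`:

```python
import torch

def euler_odeint(func, y0, t):
    # Toy fixed-step integrator standing in for torchdiffeq.odeint.
    ys = [y0]
    for t0, t1 in zip(t[:-1], t[1:]):
        ys.append(ys[-1] + (t1 - t0) * func(ys[-1]))
    return torch.stack(ys)

func = torch.nn.Linear(2, 2)  # stand-in dynamics f(y)
optimizer = torch.optim.Adam(func.parameters(), lr=1e-2)
t = torch.linspace(0.0, 1.0, 10)

def get_batch():
    # Hypothetical batch: initial states and a target trajectory.
    y0 = torch.randn(20, 2)
    target = torch.zeros(len(t), 20, 2)
    return y0, target

for itr in range(1, 5):
    optimizer.zero_grad()
    batch_y0, batch_y = get_batch()
    pred_y = euler_odeint(func, batch_y0, t)
    loss = torch.mean((pred_y - batch_y) ** 2)
    loss.backward()  # frees this iteration's graph
    optimizer.step()
```

If `pred_y` (or a loss built from it) is kept alive across iterations without calling `backward()` or `.detach()`, autograd graphs accumulate and allocated memory grows each iteration.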
from torchdiffeq.
Related Issues (20)
- learn_physics.py is very bad.
- code for reproducing Fig. 8
- Making forward function different than backward function
- reuse solver object
- Integrate Forced Andronov-Hopf Bifurcation HOT 3
- Export to ONNX?
- Is func variable in odeint(func, y0, t) the derivative part of the ode?
- Initial Condition changes when calling the odeint_adjoint
- How to use the summary function for model description?
- How to work with control namely PID controller
- Why odeint sometimes provides a wrong solution? HOT 5
- Non uniform time step in example/ode_demo.py
- runtime of ode_demo.py using adjoint vs. not using it HOT 1
- underflow in dt nan HOT 4
- Typo in paper (?) HOT 1
- How to pass extra parameters of func to odeint? HOT 2
- Bug: Memory Leaky with from torchdiffeq import odeint HOT 1
- Perform one integration step HOT 2
- Scipy LSODA for stiff ODE
- Question about the gradient of `odeint`