henklei / pymor
This project forked from pymor/pymor
pyMOR - Model Order Reduction with Python
Home Page: https://pymor.org/
License: Other
The reductor should be able to reconstruct a high-dimensional state from reduced coordinates.
See also #3.
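Reconstruction from reduced coordinates amounts to a linear combination of the basis vectors. A minimal NumPy sketch (not the actual pyMOR API; all names here are illustrative), assuming an orthonormal POD basis stored column-wise:

```python
import numpy as np

# Hypothetical sketch: lift reduced coordinates back to the full state.
rng = np.random.default_rng(0)
# Orthonormal basis with shape (n, r), e.g. obtained from a POD.
basis = np.linalg.qr(rng.standard_normal((100, 5)))[0]

def reconstruct(reduced_coords, basis):
    """Linear combination of basis vectors: full state of length n."""
    return basis @ reduced_coords

u_reduced = rng.standard_normal(5)
u_full = reconstruct(u_reduced, basis)
```

Since the basis is orthonormal, projecting the reconstruction back recovers the reduced coordinates exactly.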
By treating time as an ordinary parameter, the approach can also handle instationary problems.
A nonlinear diffusion problem using FEniCS/Dolfin could serve as an additional example for neural networks in model order reduction. The method could be particularly helpful when no affine decomposition of the operator and/or right-hand side exists; perhaps this can be shown in an appropriate example.
When training takes longer, it would be pleasant to be able to interrupt the training procedure and continue it later without losing the already computed results.
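One common way to support this in PyTorch is checkpointing: save the model and optimizer state to disk and restore it later. A hedged sketch (the model, file name, and helper names are illustrative, not part of pyMOR):

```python
import torch

# Illustrative checkpointing sketch for interruptible training.
model = torch.nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def save_checkpoint(path, model, optimizer, epoch):
    # Persist everything needed to resume: weights, optimizer state, epoch.
    torch.save({'epoch': epoch,
                'model_state': model.state_dict(),
                'optimizer_state': optimizer.state_dict()}, path)

def load_checkpoint(path, model, optimizer):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt['model_state'])
    optimizer.load_state_dict(ckpt['optimizer_state'])
    return ckpt['epoch']

save_checkpoint('ckpt.pt', model, optimizer, epoch=10)
resumed_epoch = load_checkpoint('ckpt.pt', model, optimizer)
```

The training loop would then start from `resumed_epoch + 1` instead of 0 when a checkpoint exists.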
Multiple training runs should be performed, and the one with the best result should be chosen as the final neural network. The maximal number of training runs to perform is determined by a restarts parameter. Further, the user should either decide on a basis size or on absolute/relative tolerances. In the latter case, the respective tolerance should be distributed evenly between the reduced basis error and the neural network error. Hence, the final neural network should produce an error on the validation set that lies below a prescribed threshold; otherwise, no appropriate neural network was found and some kind of error should be raised.
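The proposed restart logic can be sketched as follows (all names hypothetical; `train_once` stands in for one full training run returning a network and its validation loss):

```python
import random

def train_with_restarts(train_once, restarts, tol):
    """Train up to `restarts` networks, keep the best, fail above `tol`."""
    best_net, best_loss = None, float('inf')
    for _ in range(restarts):
        net, val_loss = train_once()
        if val_loss < best_loss:
            best_net, best_loss = net, val_loss
    if best_loss > tol:
        raise RuntimeError(f'No neural network with validation loss below '
                           f'{tol} found after {restarts} restarts.')
    return best_net, best_loss

# Toy stand-in for an actual training run:
random.seed(0)
net, loss = train_with_restarts(lambda: (object(), random.random()),
                                restarts=5, tol=0.5)
```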
In the NeuralNetworkReductor, the training data is split into a training and a validation set if no validation_set is provided. Before this split, the training data should be shuffled randomly to obtain better training results, especially since the training set is often chosen uniformly on a grid, which means that without random shuffling the training set does not contain parameters from all over the domain.
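The shuffle-then-split behaviour can be sketched like this (a hypothetical helper, not the pyMOR implementation; a fixed seed keeps the split reproducible):

```python
import random

def shuffle_and_split(training_data, validation_ratio=0.1, seed=0):
    """Shuffle the data, then split off a validation set."""
    data = list(training_data)
    random.Random(seed).shuffle(data)  # shuffle before splitting
    n_val = max(1, int(len(data) * validation_ratio))
    return data[n_val:], data[:n_val]  # (training set, validation set)

train_set, val_set = shuffle_and_split(range(100), validation_ratio=0.2)
```

Because the data is shuffled first, the validation set samples the whole parameter domain rather than one corner of the grid.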
I think it would be nice to have a small progress bar to see the current state of the training. Especially if training takes longer, some information on the progress might be helpful.
The results obtained by the neural networks should be deterministic, i.e. when running the demo twice, the result should be the same. Maybe it is possible to set a seed in PyTorch to fix the initial weights?
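Setting a seed in PyTorch is indeed possible via torch.manual_seed, which fixes the initial weights. A small sketch demonstrating that two networks created under the same seed start identically:

```python
import torch

def set_seed(seed=0):
    # Fix PyTorch's RNG so weight initialization is reproducible.
    torch.manual_seed(seed)

set_seed(42)
w1 = torch.nn.Linear(4, 2).weight.detach().clone()
set_seed(42)
w2 = torch.nn.Linear(4, 2).weight.detach().clone()
```

For fully deterministic runs, data shuffling and any other randomness (e.g. Python's and NumPy's RNGs) would need seeding as well.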
In particular, the part where the batch size is changed for the LBFGS optimizer should be commented.
Further, the new methods and classes should receive some documentation.
See also #7.
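For context on why LBFGS is treated differently: torch.optim.LBFGS re-evaluates the loss through a closure and is typically used full-batch rather than with mini-batches, which is presumably why the batch size is changed for it. A hedged illustration on toy data:

```python
import torch

torch.manual_seed(0)
x = torch.randn(32, 3)
y = torch.randn(32, 1)
model = torch.nn.Linear(3, 1)
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=20)

def closure():
    # LBFGS may call this several times per step; use the full batch.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

loss_before = closure().item()
optimizer.step(closure)
loss_after = torch.nn.functional.mse_loss(model(x), y).item()
```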
The model should return the coordinates in the reduced basis and not their reconstruction in the high-dimensional state space.
Maybe we can still print the current epoch and losses to get a better feeling for what is going on during training. However, we somehow have to make sure that this works in all ordinary types of terminal.
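A minimal sketch of such progress reporting that relies only on carriage returns, which ordinary terminals support (helper name and format are illustrative; libraries like tqdm would be an alternative, at the cost of a dependency):

```python
import io
import sys

def report_progress(epoch, n_epochs, loss, stream=sys.stdout):
    # Overwrite the current line instead of printing one line per epoch.
    stream.write(f'\repoch {epoch}/{n_epochs}  loss {loss:.4e}')
    stream.flush()
    if epoch == n_epochs:
        stream.write('\n')

# Demonstrate on a buffer instead of a real terminal:
buf = io.StringIO()
for epoch, loss in enumerate([0.5, 0.25, 0.125], start=1):
    report_progress(epoch, 3, loss, stream=buf)
```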
Should parameters like optimizer, epochs, batch_size, learning_rate and restarts be fixed when creating a NeuralNetworkReductor, or should they be added to the signature of the reduce method? Maybe one wants to call reduce multiple times with different training parameters?
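The second option could look roughly like this (a purely hypothetical sketch, not the actual NeuralNetworkReductor interface), which would allow repeated reduce calls with different settings:

```python
class SketchReductor:
    """Hypothetical reductor with training parameters on `reduce`."""

    def __init__(self, training_data):
        self.training_data = training_data

    def reduce(self, epochs=1000, batch_size=20, learning_rate=1e-3,
               restarts=10):
        # ... training would happen here; return a stand-in "ROM" ...
        return {'epochs': epochs, 'batch_size': batch_size,
                'learning_rate': learning_rate, 'restarts': restarts}

reductor = SketchReductor(training_data=[])
rom_a = reductor.reduce(epochs=500)
rom_b = reductor.reduce(epochs=2000, restarts=5)
```

Fixing the parameters in the constructor instead would make the reductor immutable but force a new reductor per parameter choice.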
Solving the full problem for the same training snapshots should not happen multiple times. It is sufficient to create the training snapshots, construct the reduced basis via POD, compute and store the coefficients of the projection of the training snapshots onto the reduced basis, and forget about the high-dimensional training snapshots.
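This workflow can be sketched with NumPy (illustrative only; pyMOR's own POD and VectorArray machinery would be used in practice), assuming an orthonormal POD basis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns are the (expensive) high-dimensional training snapshots.
snapshots = rng.standard_normal((1000, 50))

# POD basis: leading left singular vectors of the snapshot matrix.
basis = np.linalg.svd(snapshots, full_matrices=False)[0][:, :10]

# Projection coefficients onto the reduced basis; only these need storing.
coefficients = basis.T @ snapshots  # shape (10, 50)
# `snapshots` can now be discarded; the network trains on `coefficients`.
```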