
pymor's People

Contributors

alexandre-pasco, andreasbuhr, artpelling, bergdola, congpy, dependabot[bot], dorotheahinsen, ftschindler, gdmcbain, github-actions[bot], henklei, jonas-nicodemus, josefinez, juliabru, lbalicki, magnusostertag, mdessole, mechiluca, michaellaier, michaelschaefer, mohamedadelnaguib, peoe, pmli, pre-commit-ci[bot], probot-auto-merge[bot], pymor-bot, sdrave, steff-mueller, uekerman, ullmannsven

pymor's Issues

[neural-networks] support instationary problems

By treating time as an ordinary parameter, the approach can also handle instationary problems.

  • Common base class for all neural network reductors (same training procedure but on different data)
  • Update the reductor's documentation for instationary problems.
  • Add paper reference, see #7.

[pymordemos] add nonlinear example

A nonlinear diffusion problem implemented with FEniCS/Dolfin could serve as an additional example for neural networks in model order reduction. The method could be particularly helpful when no affine decomposition of the operator and/or right-hand side exists; an appropriate example could demonstrate this.
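To illustrate what "no affine decomposition" means here, the following is a minimal 1D finite-difference stand-in (not FEniCS, and not pyMOR's API; all names are hypothetical): the diffusion coefficient exp(mu * u) depends non-affinely on the parameter mu and on the solution itself, so it cannot be written as a sum of parameter-independent operators with scalar coefficient functions.

```python
import numpy as np

def solve_nonlinear_diffusion(mu, n=50, iters=30):
    """Solve -(exp(mu*u) u')' = 1 on (0, 1) with u(0) = u(1) = 0 by a
    fixed-point iteration on the frozen coefficient. The coefficient
    exp(mu*u) has no affine decomposition in the parameter mu."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    f = np.ones(n)
    for _ in range(iters):
        # evaluate the coefficient at the grid nodes (boundary values included)
        sigma = np.exp(mu * np.concatenate(([0.0], u, [0.0])))
        # average to the n + 1 cell interfaces
        s = 0.5 * (sigma[:-1] + sigma[1:])
        # assemble the tridiagonal stiffness matrix for the frozen coefficient
        A = (np.diag(s[:-1] + s[1:])
             - np.diag(s[1:-1], 1)
             - np.diag(s[1:-1], -1)) / h**2
        u = np.linalg.solve(A, f)
    return u
```

For mu = 0 the problem reduces to the linear Poisson equation -u'' = 1, whose finite-difference solution matches x(1 - x)/2 at the nodes.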

[NeuralNetworkReductor] training procedure

Multiple training runs should be performed, and the one with the best result should be chosen as the final neural network. The maximal number of training runs to perform is determined by a restarts parameter. Further, the user should either prescribe a basis size or absolute/relative tolerances. In the latter case, the respective tolerance should be distributed evenly between the reduced basis error and the neural network error. Hence, the final neural network should produce an error on the validation set that lies below the prescribed threshold; otherwise, no appropriate neural network was found and an error should be raised.

  • Multiple training runs; select the one with smallest validation error
  • Prescribe size of reduced basis
  • Prescribe error tolerances
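The procedure above could be sketched as follows. This is an assumption about the intended control flow, not pyMOR's actual interface: `train_once` and `validate` are hypothetical callbacks standing in for one full training run and the validation-error computation.

```python
def train_with_restarts(train_once, validate, restarts=10, tol=None):
    """Run several training attempts and keep the network with the smallest
    validation error. If a tolerance is prescribed and no run reaches it,
    raise an error instead of returning an unusable network."""
    best_net, best_err = None, float("inf")
    for _ in range(restarts):
        net = train_once()       # one full training run with a fresh random init
        err = validate(net)      # error of this network on the validation set
        if err < best_err:
            best_net, best_err = net, err
        if tol is not None and best_err <= tol:
            break                # prescribed tolerance reached, stop early
    if tol is not None and best_err > tol:
        raise RuntimeError("no neural network reached the prescribed tolerance")
    return best_net, best_err
```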

[NeuralNetworkReductor] randomly shuffle training snapshots before splitting into training and validation set

In the NeuralNetworkReductor, the training data is split into a training and a validation set if no validation_set is provided. Before this is done, the training data should be shuffled randomly to obtain better training results: the training parameters are often chosen uniformly on a grid, so without random shuffling a contiguous split would not draw parameters from all over the domain.
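A minimal sketch of such a shuffle-and-split step, assuming the data is a list of parameter/snapshot pairs (the function name and signature are illustrative, not pyMOR's API):

```python
import random

def split_training_validation(data, validation_ratio=0.1, seed=None):
    """Randomly shuffle parameter/snapshot pairs before splitting, so that
    grid-ordered training data still yields a validation set that covers
    the whole parameter domain. Returns (training set, validation set)."""
    rng = random.Random(seed)        # seedable for reproducible splits
    shuffled = list(data)            # copy; do not mutate the caller's list
    rng.shuffle(shuffled)
    n_val = max(1, round(validation_ratio * len(shuffled)))
    return shuffled[n_val:], shuffled[:n_val]
```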

[neural-networks] progress bar for training

I think it would be nice to have a small progress bar to see the current state of the training. Especially when training takes long, some information on the progress might be helpful.

[neural-networks] reproducibility of tests

The results obtained by the neural networks should be deterministic, i.e., running the demo twice should yield the same result. It might be possible to set a seed in PyTorch to fix the initial weights.

[docs] add documentation of the new code

Especially the part where the batch size is changed for the LBFGS optimizer should be commented.

Further, the new methods and classes should receive some documentation.

See also #7.

[NeuralNetworkReductor] solve for training snapshots just once

The full-order problem should not be solved multiple times for the same training parameters. It is sufficient to compute the training snapshots once, construct the reduced basis via POD, compute and store the coefficients of the projection of the training snapshots onto the reduced basis, and then discard the high-dimensional snapshots.
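The steps above can be sketched with plain NumPy, computing the POD basis via a thin SVD of the snapshot matrix (function and argument names are illustrative; pyMOR's own POD and VectorArray machinery would be used in practice):

```python
import numpy as np

def precompute_training_data(snapshots, basis_size):
    """Compute a POD basis from an (n_dofs, n_snapshots) snapshot matrix and
    return only the basis and the reduced coefficients, so the
    high-dimensional snapshots can be discarded afterwards."""
    # POD modes are the left singular vectors of the snapshot matrix
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :basis_size]            # orthonormal reduced basis
    coefficients = basis.T @ snapshots   # (basis_size, n_snapshots) projections
    return basis, coefficients
```

The training of the neural network then only needs the (parameter, coefficient) pairs, never the full snapshots again.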
