
phycrnet's People

Contributors

paulpuren, raocp


phycrnet's Issues

Problem generating data

Hi,
I've been trying to test your code, but while running Burgers_2d_solver_[HighOrder].py there occurs an issue which leads to the values for U and V becoming entirely "nan" after the first iteration and thus all the plots after the first one are blank and the generated file to be used in PhyCRNet_burgers.py does not work correctly.

The warnings which I assume relate to the cause of this problem, when running Burgers_2d_solver_[HighOrder].py, are as follows:
RuntimeWarning: overflow encountered in multiply
u_t = (1.0/R) * laplace_u - U * u_x - V * u_y
RuntimeWarning: invalid value encountered in subtract
u_t = (1.0/R) * laplace_u - U * u_x - V * u_y
RuntimeWarning: overflow encountered in multiply
v_t = (1.0/R) * laplace_v - U * v_x - V * v_y

...and so on, covering what looks like all the operations.

If you have any idea how to solve this issue, I'd really appreciate it.
And thank you for making this available for everyone to study.
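Overflow followed by NaN in an explicit finite-difference solver usually means the fixed time step violates the scheme's stability limits. As a rough sketch (variable names and the safety factor are illustrative, not from the original script), the largest stable explicit-Euler step for this 2D Burgers system is bounded by both an advective (CFL) and a diffusive constraint:

```python
import numpy as np

def stable_dt(dx, dy, u_max, v_max, R, safety=0.5):
    """Largest explicit-Euler time step satisfying both the advective (CFL)
    and diffusive stability limits for the 2D Burgers system.
    All names here are illustrative, not taken from the original script."""
    nu = 1.0 / R                                        # viscosity in u_t = (1/R)*laplace(u) - ...
    dt_adv = 1.0 / (u_max / dx + v_max / dy + 1e-12)    # advective limit
    dt_dif = 0.5 / (nu * (1.0 / dx**2 + 1.0 / dy**2))   # diffusive limit
    return safety * min(dt_adv, dt_dif)

# If the script's fixed dt exceeds this bound, U and V blow up to NaN
# exactly as in the warnings above.
dt_max = stable_dt(dx=1/128, dy=1/128, u_max=1.0, v_max=1.0, R=200.0)
```

If the solver's dt is larger than this kind of bound, shrinking dt (or coarsening the grid) typically removes the overflow warnings.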

Model input issues

Thanks for sharing the code.
Your current model input is the initial condition plus the hidden states of the ConvLSTM.
From a generalization perspective, have you considered feeding a batch of initial conditions, instead of a single one, to train the model? Would that improve performance?
Also, is a single initial condition in PhyCRNet equivalent to a single collocation point in a PINN? I am not sure how to justify that.

Thanks again for your work.

A few questions

Hello,

Thanks a lot for this great work that you published transparently.

I have a few questions

Firstly, I noticed there are two time-related parameters:

  • time_batch_size
  • time_steps

But I am not sure how to change them when finetuning. Say, I have a model trained until 100 steps, and I want to finetune to 200 steps (and ultimately until 1000 timesteps). In this case, what would the values of time_steps and time_batch_size be?

Similarly, what would the values be when training an initialized model to 100 timesteps?

Also, for the number of Adam iterations, is it 10000 for each fine-tuning stage, or 10000 across the entire sequence of fine-tunings?

Finally, I noticed that in your paper you initialize h and c to zero for the LSTM, but in the code they are initialized randomly. Is one initialization preferred over the other?
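For concreteness, the two initialization choices being compared look roughly like this (shapes and the random scale are hypothetical, not taken from the released code):

```python
import torch

# Hypothetical hidden-state shape: batch=1, 128 channels, 32x32 feature map.
shape = (1, 128, 32, 32)

# Zero initialization, as described in the paper:
h0 = torch.zeros(shape)
c0 = torch.zeros(shape)

# Small random initialization, as in the released code (scale is illustrative):
h0_rand = 0.01 * torch.randn(shape)
c0_rand = 0.01 * torch.randn(shape)
```

In practice the hidden states are overwritten after the first few recurrent steps, so both choices often converge similarly, but zero initialization is the more reproducible default.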

Best,
Rami

Boundary Condition

Hi,

Thanks for sharing your work and code. In your paper, I found that for the Dirichlet BCs you used periodic padding. To be honest, I don't understand why this operator can enforce Dirichlet BCs. Besides, if the problem has Neumann BCs, how can we deal with them?

Looking forward to your reply! Thanks!
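For reference, periodic padding itself is just a wrap-around copy of the opposite boundary onto the stencil halo; this sketch uses numpy's generic `wrap` mode rather than the repo's exact padding code:

```python
import numpy as np

# A 1D field u on a periodic domain; 'wrap' padding copies values from the
# opposite boundary so that finite-difference stencils see periodic BCs.
u = np.array([0.0, 1.0, 2.0, 3.0])
u_padded = np.pad(u, pad_width=1, mode="wrap")
# u_padded == [3., 0., 1., 2., 3., 0.]
```

Dirichlet values, by contrast, are typically hard-enforced by overwriting the boundary entries with the known values after each step, which is a separate operation from the padding shown here.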

There are a couple of questions about accuracy

Hello author, I have read your paper carefully and reproduced the code, following the steps you described in the other issue.

First, I set time_batch_size=100, time_steps=101, and n_iters_adam=5000. Training this network gives checkpoint100.pt, and testing at step 100 yields errors of u=0.5% and v=0.8%. But when I continue to checkpoint200.pt, the errors grow to u=7% and v=3.5%. Am I doing something wrong that leads to this low accuracy? I suspect that with time_batch_size=1000 the accuracy would be even worse and could not reach the numbers in the paper.

I think there may be a problem with my training steps. At time_batch_size=100 I deleted the line

model, optimizer, scheduler = load_checkpoint(model, optimizer, scheduler, pre_model_save_path)

and at time_batch_size=200 I added it back, so that checkpoint100.pt is loaded before training checkpoint200.pt.

Could you help me check whether these running steps are correct? I also set the learning rate to 1e-3; does the learning rate affect the result as well? Thank you very much for helping me solve these problems!
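The staged fine-tuning workflow described here boils down to a checkpoint round-trip: save model and optimizer state after one horizon, restore both before training the next. A minimal sketch (the file name and layer are illustrative, not the repo's actual model):

```python
import torch
import torch.nn as nn

# Stand-in model and optimizer for the round-trip demonstration.
model = nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# After finishing the 100-step stage, persist both states:
torch.save({"model_state_dict": model.state_dict(),
            "optimizer_state_dict": optimizer.state_dict()}, "checkpoint100.pt")

# Before the 200-step stage, restore them (this is what load_checkpoint does):
ckpt = torch.load("checkpoint100.pt")
model.load_state_dict(ckpt["model_state_dict"])
optimizer.load_state_dict(ckpt["optimizer_state_dict"])
```

Skipping the load for the very first stage (when no checkpoint exists yet) and enabling it for every later stage matches the steps described in the question.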

Jobs fails when loading previous model

Hi Paul,

I hope you are doing well.

I have a question when trying to run the Python script. It requires loading a previously trained model, './model/checkpoint500.pt'. Could you please tell me how to obtain this model, or how to define the weights/biases for initializing the network?

Many thanks in advance.

The shape of elements

Hi,

I was trying to run the model, but it failed with: 'c' argument has 16641 elements, which is inconsistent with 'x' and 'y' with size 16384 (on line 506). I found that the shape of x_star is [128, 128] while the shape of u_pred is [129, 129]. Could you please check the code?
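Note that 129² = 16641 and 128² = 16384, so the error is exactly this one-extra-row/column mismatch. If the prediction stores both periodic endpoints, one workaround (an assumption about the cause, not a confirmed fix from the authors) is to drop the duplicated last row and column before plotting:

```python
import numpy as np

# Illustrative prediction with both periodic endpoints stored (129 x 129).
u_pred = np.random.rand(129, 129)

# Drop the duplicated last row/column so the scatter's 'c' argument
# matches the 128 x 128 grid of x_star.
u_plot = u_pred[:-1, :-1]
```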

Best wishes

Couldn't find the training data source

When I was training the file PhyCRNet.py, I could find the file burgers_1501x2x128x128.mat in the directory. So I am not able to determine the dimension of uv.

Many thanks!
Johnny

The role of the residual connection

Hi Paul,

I have a question regarding the role of the residual connection. In PhyCRNet the temporal derivative is already incorporated in the loss term, so why do you still need the residual connection?

I guess each encoder-ConvLSTM-decoder could be considered a U-Net; in that sense, does the residual connection function as a skip connection? In my case, the variable u has two components, say u1 and u2, and the PDE has the form \partial u1 / \partial t = \alpha \nabla \cdot (\nabla u2) + ...

If I do u = ut + dt*u, it does not seem to work.
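For comparison, the residual connection is usually read as a forward-Euler update: the network output is interpreted as the time derivative, and the previous state is carried through unchanged (so the increment, not the state, is multiplied by dt). A minimal sketch with illustrative names:

```python
import numpy as np

def euler_step(u, u_t, dt):
    """Residual connection as a forward-Euler update: the network predicts
    the rate of change u_t, and the previous state u passes through.
    Names are illustrative, not from the PhyCRNet source."""
    return u + dt * u_t

u = np.ones((4, 4))             # previous state
u_t = 0.5 * np.ones((4, 4))     # network output interpreted as du/dt
u_next = euler_step(u, u_t, dt=0.1)
# u_next is uniformly 1.05
```

Note the expression in the question, u = ut + dt*u, scales the state rather than the derivative by dt, which would not behave like a time-stepping residual connection.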

Many Thanks.

What is the Domain of the Data

Hi,

I have noticed from random_fields.py that the data is generated by sampling from a Gaussian distribution and then applying an inverse Fourier transform, so I assumed the values of u and v in the generated data (including the ground truth computed with the Runge-Kutta method) would be in the frequency domain. However, when comparing the network output against the ground truth, it seems you compute the error using the Frobenius norm without converting the values back.

May I ask what domain the generated data is in, and why it never passes through a second Fourier transform?
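A short sketch of the sampling pattern may help resolve this: the Gaussian coefficients are drawn in Fourier space, but the inverse FFT already returns a spatial field, so the generated data lives in physical space and no second transform is needed (grid size and seed below are illustrative):

```python
import numpy as np

# Gaussian random field via spectral sampling: coefficients drawn in
# Fourier space, inverse FFT returns a physical-space field.
rng = np.random.default_rng(0)
n = 64
coeffs = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
field = np.fft.ifft2(coeffs).real   # spatial-domain sample of the random field
```

(The actual random_fields.py also applies a power-spectrum weighting to the coefficients before the inverse transform; that detail is omitted here.)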

Thank you in advance!

Initialization conditions of pre-trained neural networks?

Hello,

Thanks a lot for this great work that you published transparently.

The network is pretrained on a small number of time steps (e.g., 100) and then gradually trained on a longer dynamical evolution. Taking the 2D Burgers' equation as an example, the model is pretrained to 100 steps, then 200 and 500, and finally 1000.

I have a question about the initial condition.

For pretrained networks with different time steps (e.g., 100, 200, 500, 1000), is the initial condition always

uv0 = uv[0:1, ...]
input = torch.tensor(uv0, dtype=torch.float32).cuda()

If I keep training from t=0 for different numbers of time steps, the loss becomes extremely large at 500 timesteps:
[19998/20000 99%] loss: 0.1154880226
[19999/20000 99%] loss: 0.1154880226
[20000/20000 100%] loss: 0.1154880226
The predicted error is: 0.12718813

PDE

Hi,

Very interesting work! I have one small question: taking the 2D Burgers' equation as an example, if the viscosity value is changed, do we need to retrain the neural network?

Best wishes

Some questions about the post-processing

Hello author,
I read several of your responses in the other issues and then ran the code.
First, I deleted the load_checkpoint call in the train function, then set time_batch_size=100, time_steps=101, and n_iters_adam=500 and trained the network. The loss was huge, and the post_process function raised the error: 'c' argument has 16641 elements, which is inconsistent with 'x' and 'y' with size 16384.
Can you help me check whether there is a problem with my running steps? Thank you very much!

Computational costs of numerical solver vs. surrogate

Hi,

one common argument for using surrogate models over numerical solvers is that the latter are computationally more costly.

Did you compare the time to rollout the network against the time to solve e.g. the Burgers equation numerically once the network is fully trained? Unfortunately, I could not find such a comparison in the paper.
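Such a comparison would amount to timing a full network rollout against a full numerical integration. A minimal wall-clock timing pattern (the "solver" below is a toy 1D diffusion stand-in, not the repo's Burgers solver):

```python
import time
import numpy as np

def timed(fn, *args):
    """Return a function's result together with its wall-clock runtime."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

# Illustrative stand-in: a 1000-step explicit diffusion solve, to be compared
# against the time for a single forward rollout of the trained network.
def toy_solver(u):
    for _ in range(1000):
        u = u + 1e-4 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

u_final, t_solver = timed(toy_solver, np.random.rand(128))
```

For a fair comparison both timings should exclude training and data loading, and GPU runs should be synchronized before reading the clock.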

Thanks a lot!

A question about the code

Hello! Thanks for sharing the code. I have a small question about the code.

In line 431, you write for time_batch_id in range(num_time_batch):, but num_time_batch is 1, as computed by num_time_batch = int(time_steps / time_batch_size_load) in line 610. This means the loop runs only once. Is there something wrong with the code, or with my understanding?
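The arithmetic in question can be checked directly (variable names follow the question; the values are the defaults implied there):

```python
# Segmentation of the training rollout into time batches.
time_steps = 1000
time_batch_size_load = 1000
num_time_batch = int(time_steps / time_batch_size_load)   # == 1

# With time_batch_size_load equal to time_steps, the whole trajectory is a
# single batch, so range(num_time_batch) intentionally runs one iteration.
# A smaller batch size would split the rollout into several segments,
# e.g. time_batch_size_load = 200 would give num_time_batch == 5.
```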

I am looking forward to your early reply.

initialization

Hi
Thanks for sharing the code. What's the rationale for using this uniform range? What do the numbers 3, 3, 320 represent?

module.weight.data.uniform_(-c * np.sqrt(1 / (3 * 3 * 320)),
                             c * np.sqrt(1 / (3 * 3 * 320)))
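The bound has the shape of a fan-in based initialization (in the style of LeCun/He schemes): a 3x3 convolution kernel with 320 input channels has fan_in = 3 * 3 * 320, and weights are drawn uniformly within ±c/sqrt(fan_in), where c is a gain constant. A sketch of that reading (the weight shape below is illustrative):

```python
import numpy as np

# Fan-in of a 3x3 conv kernel with 320 input channels.
fan_in = 3 * 3 * 320
bound = np.sqrt(1.0 / fan_in)

# Uniform initialization within [-bound, bound), here with gain c = 1.
w = np.random.uniform(-bound, bound, size=(320, 320, 3, 3))
```

Keeping the variance proportional to 1/fan_in keeps activation magnitudes roughly constant across layers at the start of training, which is the usual rationale for this kind of range.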

Can this method be applied to equations containing elements with a mixture of time and space derivatives?

Hello,

Thanks a lot for this great work that you published transparently.

I have a question: can this method be applied to equations containing mixed time and space derivatives, such as u_t + u_tt + u_xxt + u_yyt = 0?

And in the code, output = torch.cat((output[:, :, :, -1:], output, output[:, :, :, 0:2]), dim=3) — why are the left and right paddings not symmetric (one element from the end, two from the beginning)? Why not -2: and 0:2?
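The quoted padding can be reproduced on the last axis with numpy to see exactly what it does: one wrapped element on the left and two on the right. One plausible reading (an assumption about the motive, not a statement from the authors) is that the extra right-hand element matches a one-sided offset in the finite-difference derivative filter:

```python
import numpy as np

# Stand-in for `output` with last-axis values 0..4.
x = np.arange(5.0).reshape(1, 1, 1, 5)

# Same asymmetric periodic padding as the quoted torch.cat line:
padded = np.concatenate((x[..., -1:], x, x[..., 0:2]), axis=3)
# padded[0, 0, 0] == [4., 0., 1., 2., 3., 4., 0., 1.]
```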

Thank you very much for your answer, looking forward to your reply!

Extremely big loss?

I'm running the coupled Burgers' experiment, following the instructions in README.md to first set the argument "steps" to a relatively small number. However, the loss turns out to be extremely large:
[1/20000 0%] loss: 18560561113464832.0000000000
[2/20000 0%] loss: 18553907806470144.0000000000
[3/20000 0%] loss: 18551367970848768.0000000000
[4/20000 0%] loss: 18550044584050688.0000000000
[5/20000 0%] loss: 18548749886291968.0000000000
[6/20000 0%] loss: 18547451732426752.0000000000
Is this normal?
