
learning-scientific_machine_learning_residual_based_attention_pinns_deeponets's Introduction

Hi, thanks for stopping by! I am a Ph.D. student in the Applied Mathematics department at Brown University. I earned my Bachelor's in Mechanical Engineering from the Universidad de las Fuerzas Armadas ESPE in Ecuador. My research journey began as an undergraduate when I collaborated remotely with the University at Buffalo on several projects, exploring machine learning modeling of physical systems. Since 2022, I have been working under the mentorship of Prof. George Karniadakis, delving into the realm of scientific machine learning (SciML).

My research focuses on developing reliable and stable machine learning methods to study and understand complex physical systems that cannot be analyzed with traditional methods (e.g., cerebrospinal fluid flow). To this end, I am currently developing optimization methods for physics-informed neural networks and artificial intelligence velocimetry (AIV).

learning-scientific_machine_learning_residual_based_attention_pinns_deeponets's People

Contributors

jdtoscano94


learning-scientific_machine_learning_residual_based_attention_pinns_deeponets's Issues

Wrong derivative on DeepONet example

In the DeepONet anti-derivative notebook, the last example shows how to approximate the anti-derivative of $u(x)=\cos(2\pi x),\ \forall x\in[0,1]$. However, the `u_fn` lambda is defined as $u(x)=-\cos(2\pi x),\ \forall x\in[0,1]$.

From what I can tell, $s(x)$ would then equal $-\frac{1}{2\pi}\sin(2\pi x)$ in this case.

Actual text:


Let's obtain the anti-derivative of a trigonometric function. Remember, however, that this neural operator works for $x\in[0,1]$ when the anti-derivative's initial value is $s(0)=0$. To fulfill those conditions, we will use $u(x)=\cos(2\pi x),\ \forall x\in[0,1]$.

#u_fn = lambda x, t: np.interp(t, X.flatten(), gp_sample)
u_fn = lambda t, x: -np.cos(2*np.pi*x)  # Bug: should be np.cos(2*np.pi*x)
# Input sensor locations and measurements
x = np.linspace(0, 1, m)
u = u_fn(None, x)
# Output sensor locations and measurements
y = random.uniform(key_train, (m,)).sort()
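As a quick numerical sanity check of the sign issue (a sketch using NumPy/SciPy, independent of the notebook): with $u(x)=\cos(2\pi x)$ and $s(0)=0$, the anti-derivative is $s(x)=\frac{1}{2\pi}\sin(2\pi x)$, so the sign-flipped `u_fn` indeed produces the negated target.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Sensor locations on [0, 1]
x = np.linspace(0, 1, 1001)

# Intended input function and its anti-derivative with s(0) = 0
u = np.cos(2 * np.pi * x)
s_exact = np.sin(2 * np.pi * x) / (2 * np.pi)

# Numerical anti-derivative via the trapezoid rule (initial=0 enforces s(0)=0)
s_num = cumulative_trapezoid(u, x, initial=0)

print(np.max(np.abs(s_num - s_exact)))  # small discretization error
```

With the negated `u_fn` from the notebook, the same check recovers $-\frac{1}{2\pi}\sin(2\pi x)$, matching the observation above.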

May I ask how to represent a system of differential equations?

I want to ask whether you think a PINN can solve this problem.

We will solve a simple ODE system:

$$ {\frac{dV}{dt}}=10- {G_{Na}m^3h(V-50)} - {G_{K}n^4(V+77)} - {G_{L}(V+54.387)}$$

$${\frac{dm}{dt}}=\left(\frac{0.1{(V+40)}}{1-e^\frac{-V-40}{10}}\right)(1-m) - \left(4e^{\frac{-V-65}{18}}\right)m $$

$$\frac{dh}{dt}= {\left(0.07e^{\frac{-V-65}{20}}\right)(1-h)} - \left(\frac{1}{1+e^\frac{-V-35}{10}}\right)h$$

$$\frac{dn}{dt}= {\left(\frac{0.01(V+55)}{1-e^\frac{-V-55}{10}}\right)}(1-n) - \left(0.125e^{\frac{-V-65}{80}}\right)n$$

$$\qquad \text{where} \quad t \in [0,7],$$


with the initial conditions
$$V(0) = -65, \quad m(0) = 0.05 , \quad h(0) = 0.6 , \quad n(0) = 0.32 $$

The reference solution is here, where the parameters $G_{Na}, G_{K}, G_{L}$ are the maximal conductances, whose true values are 120, 36, and 0.3, respectively.

The code below loads the reference data:

import numpy as np

data = np.load('g_Na=120_g_K=36_g_L=0.3.npz')
t = data['t']  # time points
v = data['v']  # membrane potential V(t)
m = data['m']  # gating variables m, h, n
h = data['h']
n = data['n']
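For reference, the right-hand side of the system above can be coded directly; a PINN would then penalize the residual between the autograd time-derivatives of the network outputs $(V, m, h, n)$ and these expressions at collocation points in $[0, 7]$. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def hh_rhs(V, m, h, n, g_Na=120.0, g_K=36.0, g_L=0.3):
    """Right-hand side of the four-equation system above.

    Returns (dV/dt, dm/dt, dh/dt, dn/dt). A PINN loss would penalize
    the mismatch between the network's time-derivatives and these
    expressions at collocation points.
    """
    dV = 10.0 - g_Na * m**3 * h * (V - 50.0) \
              - g_K * n**4 * (V + 77.0) \
              - g_L * (V + 54.387)
    alpha_m = 0.1 * (V + 40.0) / (1.0 - np.exp((-V - 40.0) / 10.0))
    beta_m = 4.0 * np.exp((-V - 65.0) / 18.0)
    dm = alpha_m * (1.0 - m) - beta_m * m
    alpha_h = 0.07 * np.exp((-V - 65.0) / 20.0)
    beta_h = 1.0 / (1.0 + np.exp((-V - 35.0) / 10.0))
    dh = alpha_h * (1.0 - h) - beta_h * h
    alpha_n = 0.01 * (V + 55.0) / (1.0 - np.exp((-V - 55.0) / 10.0))
    beta_n = 0.125 * np.exp((-V - 65.0) / 80.0)
    dn = alpha_n * (1.0 - n) - beta_n * n
    return dV, dm, dh, dn

# Evaluate at the stated initial conditions
print(hh_rhs(-65.0, 0.05, 0.6, 0.32))
```

To infer $G_{Na}, G_{K}, G_{L}$ from the data, the same residuals can be used with the conductances as trainable parameters alongside the network weights.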

Issue with using your code in Colab because of JAX!

Hi Juan,
Thanks for your helpful video. I subscribed!
However, I have a problem: `from jax.experimental import optimizers` does not work for me; it says "cannot import name 'optimizers' from 'jax.experimental'".
I had to switch to CPU and install `!pip install jax[cpu]==0.2.27` to make it work.
This was confusing for me and, judging by what I found online, for other people too.
Could you please let me know how you use the GPU in your video? Training on my CPU takes forever!

Thank you
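For what it's worth, the `optimizers` module was moved out of `jax.experimental` in newer JAX releases: it now lives in `jax.example_libraries`, which is why the original import fails on current Colab runtimes. A small compatibility shim, assuming one of the two paths exists in the installed JAX:

```python
import importlib

def load_optimizers():
    """Return JAX's optimizers module, trying the new path first.

    jax.experimental.optimizers was renamed to
    jax.example_libraries.optimizers in newer JAX releases.
    """
    for path in ("jax.example_libraries.optimizers",
                 "jax.experimental.optimizers"):
        try:
            return importlib.import_module(path)
        except ImportError:
            continue
    raise ImportError("No optimizers module found; try `pip install jax`.")
```

With the import path updated, pinning to an old CPU-only wheel should not be necessary; recent Colab GPU runtimes ship a CUDA-enabled JAX by default.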

PINN model doesn't perform well outside the training points

I ran your simple ODE solver using PINN. The model performs well on the training interval,

[figure: prediction on the training interval]

but fails to predict outside the training interval:

[figure: prediction outside the training interval]

I was surprised, as the paper is considered so good (from my theoretical perspective). Shouldn't it learn the periodicity of the solution? Have you tested it and gotten better performance outside the training interval? Let me know your thoughts on improving it.

Thanks for your consideration.

How to deal with an integral operator with DeepONet?

A great repository! Thank you very much for making it open source. I also really like your YouTube videos on this topic. They are awesome!

I have a question about DeepONet.

$$ \frac{d y}{d x}+y(x)=\int_0^x e^{t-x} y(t) d t $$

I want to use DeepONet to solve this integro-differential equation, but I have no idea how to implement it. The main problem is that I don't want to approximate the integral operator numerically; I just hope to do it with DeepONet.

Is it possible to solve it using DeepONet only?

Best Regards!
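One way to avoid approximating the integral numerically is to differentiate it away. Writing $z(x)=\int_0^x e^{t-x} y(t)\,dt$, Leibniz's rule gives $z' = y - z$ with $z(0)=0$, so the integro-differential equation becomes the local system $y' = -y + z$, $z' = y - z$, whose residuals a DeepONet/PINN can enforce pointwise. A quick sanity check of this reformulation (a sketch, with an assumed initial value $y(0)=1$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, state):
    # Local system equivalent to dy/dx + y = ∫₀ˣ e^{t−x} y(t) dt,
    # with z(x) standing in for the integral term.
    y, z = state
    return [-y + z, y - z]

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
x = np.linspace(0.0, 2.0, 5)
y_num = sol.sol(x)[0]

# Closed form implied by the system: y + z is conserved (= 1 here),
# so y' = 1 - 2y and y(x) = (1 + e^{-2x}) / 2.
y_exact = 0.5 * (1.0 + np.exp(-2.0 * x))
print(np.max(np.abs(y_num - y_exact)))
```

The same trick (introducing an auxiliary state for the integral) is how such equations are usually made amenable to residual-based training without a quadrature rule.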

Question regarding the generalization of PINN

Hi, thank you so much for your work! I am wondering about the generalization ability of such a PINN. I tried it with a simple sine function: when training and testing within the range $[0, 2\pi]$, both the training loss and validation loss are good. However, when I feed the network a new set of $x$ ranging over $[2\pi, 4\pi]$, the prediction looks bad. Is it because the network never sees such numbers? It feels like it is memorizing the distribution it was trained on rather than generalizing to unseen inputs.
[figure: prediction on $[2\pi, 4\pi]$]
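On the extrapolation point: a plain MLP has no reason to be periodic outside its training range. One common remedy, when the solution is known to be periodic with frequency $\omega$, is to feed the network periodic features $(\sin\omega x, \cos\omega x)$ instead of raw $x$, which makes any output periodic by construction. A minimal sketch with a linear least-squares fit standing in for the network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: y = sin(x) sampled only on [0, 2π]
x_train = rng.uniform(0.0, 2.0 * np.pi, 200)
y_train = np.sin(x_train)

# Periodic embedding: any function of these features has period 2π
def features(x):
    return np.stack([np.sin(x), np.cos(x)], axis=-1)

# Least-squares fit on the embedded inputs (stand-in for a trained network)
w, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

# Prediction on [2π, 4π], entirely outside the training range
x_test = np.linspace(2.0 * np.pi, 4.0 * np.pi, 100)
y_pred = features(x_test) @ w
print(np.max(np.abs(y_pred - np.sin(x_test))))
```

With raw $x$ as input the same fit would extrapolate linearly and fail, which matches the behavior described in the question.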
