
DiffEqOperators.jl


This Package is in the Process of being Deprecated

Alternatives: for automatic PDE discretization, see MethodOfLines.jl (described below).

DiffEqOperators.jl is a package for finite difference discretization of partial differential equations. It allows building lazy operators for high-order, non-uniform finite differences in an arbitrary number of dimensions, including vector calculus operators.

For automatic Method of Lines discretization of PDEs, better suited to nonlinear systems of equations and more complex boundary conditions, please see MethodOfLines.jl.

For the operators, both centered and upwind variants are provided, for domains of any dimension, arbitrarily spaced grids, and any order of accuracy. The cases of 1, 2, and 3 dimensions with an evenly spaced grid are optimized with a convolution routine from NNlib.jl. Care is taken to achieve efficiency by avoiding unnecessary allocations, using purpose-built stencil compilers, supporting GPUs and parallelism, etc. Any operator can be concretized as an Array, a BandedMatrix, or a sparse matrix.

Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the version which contains unreleased features.

Example 1: Finite Difference Operator Solution for the Heat Equation

using DiffEqOperators, OrdinaryDiffEq

# # Heat Equation
# This example demonstrates how to combine `OrdinaryDiffEq` with `DiffEqOperators` to solve a time-dependent PDE.
# We consider the heat equation on the unit interval, with Dirichlet boundary conditions:
# ∂ₜu = Δu
# u(x=0,t)  = a
# u(x=1,t)  = b
# u(x, t=0) = u₀(x)
#
# For `a = b = 0` and `u₀(x) = sin(2πx)` a solution is given by:
u_analytic(x, t) = sin(2*π*x) * exp(-t*(2*π)^2)

nknots = 100
h = 1.0/(nknots+1)
knots = range(h, step=h, length=nknots)
ord_deriv = 2
ord_approx = 2

const Δ = CenteredDifference(ord_deriv, ord_approx, h, nknots)
const bc = Dirichlet0BC(Float64)

t0 = 0.0
t1 = 0.03
u0 = u_analytic.(knots, t0)

step(u,p,t) = Δ*bc*u
prob = ODEProblem(step, u0, (t0, t1))
alg = KenCarp4()
sol = solve(prob, alg)


Issues

Taking operators on dimensional models seriously

It's nice to have the user's system be a matrix so "x,y,z" makes sense as u[x,y,z]. It is genuinely convenient: models written in this form are easy to build and plot. But sadly, not all tools are designed in this manner. The main tools I am speaking of are numerical linear algebra packages like IterativeSolvers.jl or PETSc. These are designed with a linear map on vector spaces in mind.

Let me describe some of the current workflows and it will highlight the current questions and difficulties. The standard way of doing this in other languages is what I'll call "the kron method". Open up a book or tutorial and it's probably what they do, but @haampie shows it very clearly in this Gist:

https://gist.github.com/haampie/a5edeb5df916725687b5cfd7c3b523c7

Essentially what he does is use second_order_central_diff with a bunch of kron commands to build an operator A which works with vec(u). Then A*u does the differentiation u_xx + u_yy. To translate it back to the matrix, you can think of it essentially as building the A for reshape(A*vec(u), N, M). Here, A is NM x NM, a big sparse operator, so that algorithms designed for matrix-vector multiplication "just work". This method using kron extends to nD using similar tricks.
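
For concreteness, here is a minimal sketch of the kron method (the stencil D2 and size N are illustrative, not taken from the Gist):

using LinearAlgebra, SparseArrays

N = 4                                                               # interior grid points per dimension
D2 = spdiagm(-1 => ones(N-1), 0 => fill(-2.0, N), 1 => ones(N-1))   # 1D second-difference stencil
Id = sparse(I, N, N)
A = kron(Id, D2) + kron(D2, Id)   # N^2 x N^2: A*vec(u) == vec(D2*u + u*D2), i.e. u_xx + u_yy up to 1/h^2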

Now let's take a look at the other method. In the linear operator form, you can define the linear map:

L(u) = A*u + u*A

This works directly on the matrix u. What is the size(L)? Technically this is equivalent to above, so it's actually NM x NM. So to make this work in a linear solver, we actually need another step:

function L(u_vec)
  u = reshape(u_vec,N,M)
  vec(A*u + u*A)
end

The action of this linear operator on a vector of length NM is equivalent to the one built by @haampie, but does not explicitly build the operator. Using #20, it can extend easily to nD. Also, I should note that reshape and vec are just views on the same piece of memory (ordered the same way), so those operations are free. In the end, this is essentially a lazy matrix-free way of writing the operator that the kron method builds.

There are advantages and disadvantages of both routes. The first one is familiar to users. Also, if you want to interface with non-Julia tools like PETSc, you need to build the full operator and send the vec(u). Therefore we should definitely have a way to easily build this operator. Luckily, what the kron method needs is exactly the 1D stencil, so building that full sparse A from PDEOperators.jl is the right thing to do.

However, the linear map is extremely useful in some cases. For example, take the upstream operator. We haven't built it yet, but it's the one-sided (directional) derivative with a caveat: it always has to point "upstream". For A*a(t)*u, the one-sided derivative follows the direction of a(t)*u, so if the sign of that flips, the stencil should flip direction. Using the kron method, you'd have to recompute the full sparse operator each timestep to keep the direction right (or lose stability). But in the lazy operator form, we can build a "lazy upstream" operator B(t) = A*a(t), and do:

function L(u_vec, t)
  u = reshape(u_vec,N,M)
  vec(A*u + u*A + B(t)*u)
end

so this is strictly more powerful when it can be used. Luckily IterativeSolvers.jl seems to allow AbstractLinearMaps (at least in what we've tried), so we can do this.

But notice what this is missing: we can't ask "what is the matrix A such that A*vec(u) equals L(u)?" We don't have the tools to output that. full(L) technically does this, but in most cases the dense matrix won't fit into memory. But LinearMaps.jl could definitely add a sparse(L) that builds the sparse matrix for the linear map.

So in total, after going through the utility and use cases of these tools, we are missing two essential tools:

  1. sparse(L) in LinearMaps.jl. It can probably be implemented similarly to full(L)? I have no idea. But this would nicely transition the dimensional model to the sparse matrix linear operator.
  2. Algebraic support for the kron method. Currently you can write sparse(A) for A = LinearOperator .... However, because of directionality, you cannot do something like sparse(Axx + Ayy), i.e. directly build the matrix for the 2D Laplacian. It would be nice to have an algebra that supports this somehow. One way is to build the L as I described and call sparse(L), but is there something more direct?

DiffEqOperator Interface Compatibility

It'll be important to match the interface for it all to work in the stiff solvers "automatically". Here's a few points:

The problem is with update_coefficients!(A,t,u) which is different here.

https://github.com/JuliaDiffEq/DiffEqBase.jl/blob/master/src/diffeq_operator.jl#L6

The ODE solver needs to be able to call that and have the operator update its coefficients. But also

https://github.com/JuliaDiffEq/PDEOperators.jl/blob/8ddcb5368824919ed46608f4ad2117b268e78b25/src/linear_operator.jl#L399

the call doesn't update the coefficients. I think there needs to be a system where, if the BC field is <: Number, the BC is treated as constant; otherwise, assume the user gives a function a = bc(t), where a is the boundary value at time t.
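
A hypothetical sketch of that dispatch (update_bc is an illustrative name, not an actual API):

update_bc(bc::Number, t) = bc        # a numeric BC is constant in time
update_bc(bc::Function, t) = bc(t)   # otherwise the user supplies a = bc(t)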

Simple example of discretizing a univariate diffusion with reflecting boundaries

@FernandoKuwer As a first test of discretizing an operator to solve a HJBE:

You can discuss the plan with Chris about the steps to accomplish this (as upwinding is incomplete in the library), but one idea is to start with the no-drift example (which doesn't require upwinding) and then start adding tests. Eventually we will want to support absorbing barriers rather than only reflecting barriers, but this is a good place to start.

Handling BCs

From the Gitter discussion:

  • 1) Periodic. You just wrap around the vector when the coefficients flow over. This is not a banded matrix, so BandedMatrix(L) should error there.
  • 2) Dirichlet. You replace the overflow with a constant. Just assume it's a single constant for all points beyond the boundary.
  • 3) No Flux. Constants which "flow over the edge" are reflected over the boundary.
  • 4) Neumann. Where constants would flow over, use coefficients from a one-sided difference. At the very last point, use a one-sided first-derivative approximation of the correct order. This is still banded, but the bandwidth is increased.
  • 5) Leaky. Can do the same as Neumann, but with a different input at the end.

One type, with fields that can be set to nothing (so make it parametric so they can be Void). Two type parameters: one for the right BC, the other for the left BC. Might want accompanying helper functions get_left_bc(L) and get_right_bc(L); see the sketch below.
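
A minimal sketch of that parametric type under the proposal above (names are illustrative):

struct BoundaryConditions{L,R}
    left::L     # e.g. a value or symbol describing the left BC, or nothing if unset
    right::R
end

get_left_bc(bc::BoundaryConditions)  = bc.left
get_right_bc(bc::BoundaryConditions) = bc.right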

norm(matrix) -> opnorm(matrix) in Julia 0.7

It looks like this package may be using norm(matrix). In Julia 0.7, this will compute the Frobenius norm (vecnorm in Julia 0.6), due to JuliaLang/julia#27401. If you want the induced/operator norm as in Julia 0.6, use opnorm(matrix) instead, or Compat.opnorm(matrix) to work in both 0.6 and 0.7 (JuliaLang/Compat.jl#577).

Note that, for testing purposes, rather than @test norm(A - B) ≤ tol, it is usually preferred to do @test A ≈ B or @test A ≈ B rtol=... (which uses isapprox).

Support partially overlapping operator implementations

Consider for instance the case du(t) = A(t)u + B(t), where A(t) = A1(t) + A2(t) and B(t) = A1(t) + B1(t). The current implementation would require doing the calculation for A1(t) twice, sacrificing some performance. While hackish workarounds are easily possible for simple cases, ideally a more flexible general solution should be found, giving users the expressive power to specify how they prefer to structure the computation.

Operators defined on parts of the domain

Right now, my understanding of the domains and boundary values is that if the operators coincide on the domain, we can just add as many extra ghost nodes outside of the domain as needed to determine boundary values. I have a good sense of how that would work.

But just to make sure we are thinking through the boundary values and domains generally:
There are a lot of problems in economics (including one of the ones I need to solve) where different regions of the state space are governed by different stochastic processes. When this is a choice and/or a free-boundary, the problem is written as a variational equality - which works perfectly fine since the underlying functions are still defined on the entire space.

But what if the differential operators are defined on fixed but connected regions of the domain, with boundary values determining the interface? For example, $L_1 = \mu_1(x) \frac{d}{dx} + \frac{\sigma^2}{2} \frac{d^2}{dx^2}$ defined on $x \in [0, 0.5]$ and $L_2 = \mu_2(x) \frac{d}{dx} + \frac{\sigma^2}{2} \frac{d^2}{dx^2}$ on $x \in [0.5, 1]$.

Normally, you might do this with two functions and boundary values, for a function v_1(x) defined on [0,0.5] and v_2(x) on [0.5,1]:

  • Reflecting barrier at x=0, i.e. v_1'(0) = 0
  • Reflecting barrier at x=1, i.e. v_2'(1) = 0
  • Matching: v_1(0.5) = v_2(0.5)
  • Smooth pasting: v_1'(0.5) = v_2'(0.5)

How would we handle this sort of problem in the general setup? Do we separate the two functions, or is this a single function with "interior" boundary values?

Package DiffEqOperators does not have LinearAlgebra in its dependencies

I am trying to run the quick start notebook for https://github.com/JuliaRobotics/RigidBodySim.jl and I am running into an issue when I try to run using RigidBodySim.

This error prints out repeatedly (indefinitely as far as I can tell):

┌ Warning: Package DiffEqOperators does not have LinearAlgebra in its dependencies:
│ - If you have DiffEqOperators checked out for development and have
│   added LinearAlgebra as a dependency but haven't updated your primary
│   environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with DiffEqOperators
└ Loading LinearAlgebra into DiffEqOperators from project dependency, future warnings for DiffEqOperators are suppressed.
┌ Warning: The call to compilecache failed to create a usable precompiled cache file for OrdinaryDiffEq [1dea7af3-3e70-54e6-95c3-0bf5283fa5ed]
│   exception = Required dependency DiffEqOperators [9fdde737-9c7f-55bf-ade8-46b3f136cc48] failed to load from a cache file.
└ @ Base loading.jl:963

I am opening an issue here as it seems like that's what Julia is telling me to do. Sorry if this isn't the right place, I am very new to Julia.

Test trivial HJBE for the simple driftless examples

After #46, we should go back through those examples and solve simple HJBE equations for the different cases of boundaries and variances. In general, pick a payoff function $u(x)$ and a discount rate $\rho > 0$; then, for the operator $\mathcal{A}$, the ODE to solve is
$$
\rho v(x) = u(x) + \mathcal{A} v(x)
$$
Or, to set up the differential equation in a simpler form,
$$
(\rho I - \mathcal{A})v(x) = u(x)
$$
So we can solve the ODEs by lazily getting the $B \equiv (\rho I - \mathcal{A})$ operator and then solving $B v(x) = u(x)$. Discuss some examples of how to solve this with Chris.
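
As a concrete sketch (the A below is a toy second-difference matrix standing in for the discretized $\mathcal{A}$, with boundary handling omitted; ρ and the payoff are illustrative):

using LinearAlgebra, SparseArrays

n = 100
h = 1/(n+1)
x = range(h, step=h, length=n)
A = spdiagm(-1 => fill(0.5, n-1), 0 => fill(-1.0, n), 1 => fill(0.5, n-1)) / h^2  # (σ²/2)Δ with σ = 1
ρ = 0.05
u = x.^2                 # example payoff u(x)
v = (ρ*I - A) \ u        # solve (ρI - 𝒜)v = u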

Naming

There are a few naming things I think we should clear up.

PDEOperators.jl is a little off. There are many non-PDE differential equations for which you would want to use these; the classic example is a 2nd-order BVP where the Laplacian shows up. I think we should move the whole AbstractDiffEqOperator stuff here (which is actually like 40 lines...) and just name it DiffEqOperators.jl, since that covers the field better and matches the naming scheme anyway (everything has DiffEq in it somewhere).

The LinearOperator type is a little vague. It doesn't capture all or most linear operators, so it should probably be DerivativeOperator since all of these operators are discretizations of derivatives.

Lazy ndgrid and meshgrid

ndgrid and meshgrid are common MATLAB tools for building the X and Y space matrices. For things like u_xx + u_yy = f(x,y), this is really nice because f.(X,Y) then computes the operation at each point in space, giving the result matrix (or a vector if you vec it). However, ndgrid and meshgrid are dense, so this is actually quite wasteful in memory.

Thankfully, Julia gives us really nice lazy tools 👍 . It would be nice to make lazy ndgrid and meshgrid equivalents to match. There can be two versions. The first assumes equal spacing. In this setup,

X,Y,Z = meshgrid(dx,dy,dz)

gives back the appropriate 3D arrays which show what each value in X, Y, and Z is. In reality, this is really just X[i,j,k] == i*dx (of course with start and end values, so it might need to be meshgrid(xstart:dx:xend,ystart:dy:yend,zstart:dz:zend), an interface on ranges). ndgrid is similar.

But then the next thing is the non-uniform grid versions. Essentially, for vectors x, y, z:

X,Y,Z = meshgrid(x,y,z)

does the vector form:

X[i,j,k] == x[i]

This means you only have to keep N+M+P memory around for "what space is" instead of 3NMP, yet these operators still "just work" with f.(X,Y,Z) and anywhere else users want to know, say, the y coordinate at Y[i,j,k].
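
A minimal sketch of the non-uniform lazy version as a custom AbstractArray (all names illustrative):

struct LazyGridAxis{T,N} <: AbstractArray{T,N}
    coords::Vector{T}     # 1D coordinates for this axis
    axis::Int             # which dimension the coordinate varies along
    dims::NTuple{N,Int}   # full grid size
end

Base.size(g::LazyGridAxis) = g.dims
Base.getindex(g::LazyGridAxis{T,N}, I::Vararg{Int,N}) where {T,N} = g.coords[I[g.axis]]

# X[i,j,k] == x[i], Y[i,j,k] == y[j], Z[i,j,k] == z[k], storing only N+M+P numbers
function meshgrid(x::AbstractVector, y::AbstractVector, z::AbstractVector)
    dims = (length(x), length(y), length(z))
    LazyGridAxis(collect(x), 1, dims), LazyGridAxis(collect(y), 2, dims), LazyGridAxis(collect(z), 3, dims)
end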

Integrating StaticArrays(Documentation)

We decided to implement stencil_coefs of the PDEOperator as a StaticVector, since StaticVectors are stored on the stack rather than the heap and are hence fast.

But for optimum performance they need to be initialized with both type and length, like SVector{Length, Type}, which made the constructor for LinearOperator look ugly: A = LinearOperator{Float64, SVector{1000, Float64}} for an operator working on 1000-dimensional vectors.

So we fixed it by creating an internal constructor

    (::Type{LinearOperator{T}}){T<:Real}(dorder::Int, aorder::Int, dim::Int) =
    LinearOperator{T, SVector{dorder+aorder-1,T}}(dorder, aorder, dim)

which computes the length of the SVector from the given input parameters.

BC when stencil reaches out of grid

Probably this has already been discussed somewhere!? When the stencil length is larger than 3, the stencil coefficients at the boundaries, except for the None boundary condition, are not calculated correctly. I understand the way BCs are imposed is being overhauled, but this seems decoupled from that issue: when constructing the linear operator, in the new or the old way, the boundary coefficients need to be calculated in the same manner. Please tell me if I'm missing something. I'd be happy to help implement the boundary conditions correctly, i.e. taking the stencil length into account and calculating left- and right-sided coefficients for the "out of boundary" points.

Appropriate scaling after taking derivatives

After applying an operator to an array, the values are off by a constant factor that depends on the grid step size, e.g.

julia> x = collect(0 : 1/99 : π);
julia> y = sin.(x);
julia> A = LinearOperator{Float64}(2,2,312,:Neumann,:Neumann;bndry_fn=(1,-1));
julia> res = A*y
312-element Array{Float64,1}:
 -2.9697    
 -1.03058e-6
 -2.06106e-6
 -3.09133e-6
 -4.12128e-6
 -5.15082e-6
 -6.17982e-6
 -7.2082e-6 
 -8.23584e-6
 -9.26264e-6
           
 -9.28078e-6
 -8.25399e-6
 -7.22637e-6
 -6.198e-6  
 -5.16901e-6
 -4.13948e-6
 -3.10954e-6
 -0.0302921 
 -1.0488e-6 
 -3.0       

This is clearly incorrect, as the second derivative of sin(x) is -sin(x).

However if we do

julia> res[2:end-1] .*= 100000;
julia> res
312-element Array{Float64,1}:
    -2.9697  
    -0.103058
    -0.206106
    -0.309133
    -0.412128
    -0.515082
    -0.617982
    -0.72082 
    -0.823584
    -0.926264
            
    -0.928078
    -0.825399
    -0.722637
    -0.6198  
    -0.516901
    -0.413948
    -0.310954
 -3029.21    
    -0.10488 
    -3.0     

we get the correct result (neglecting the spurious -3029.21 and scaling the boundaries separately)
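
For reference, the current CenteredDifference API takes the grid spacing h and folds the 1/h² factor into the stencil, so no manual rescaling is needed. A quick check, using the API from the README example above:

using DiffEqOperators

n = 100
h = π/(n+1)
x = range(h, step=h, length=n)
Δ = CenteredDifference(2, 2, h, n)
bc = Dirichlet0BC(Float64)               # sin vanishes at 0 and π
maximum(abs.(Δ*bc*sin.(x) .+ sin.(x)))   # small, O(h²): d²/dx² sin(x) = -sin(x)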

Error due to Floating Point inaccuracies (Documentation)

The sum of the calculated stencil coefficients doesn't come out to be exactly 0, with the error increasing as we increase the order of approximation.
For a stencil for 4th order derivative and 10th order approximation, we have

julia> sum(A.stencil_coefs)
-1.4052778038453617e-14

This is not specific to Float64, as it shows up with BigFloat as well.
So the solution is to adjust the middle coefficient of the stencil to set the overall sum to 0.
Reference:
SO Post: https://scicomp.stackexchange.com/questions/11249/numerical-derivative-and-finite-difference-coefficients-any-update-of-the-fornb
Improved Fornberg paper: http://epubs.siam.org/doi/pdf/10.1137/S0036144596322507

julia> A.stencil_coefs[7] -= sum(A.stencil_coefs)
1.371740740740740209188154585806529439651058055460453033447265625000000000000000e+01

julia> sum(A.stencil_coefs)
0.000000000000000000000000000000000000000000000000000000000000000000000000000000

julia> res[500]
2.399999481622449663473228831378492031944915652275085449218750000000000000000000e+01

which is what we want.

More efficient `expm`s

Base.expm(L::DiffEqArrayOperator) = expm(L.α.coeff*L.A)

We don't want to overwrite the matrix because then calculations further down the line will be incorrect, but we can make this more efficient via something like:

scale!(L.A, L.α.coeff)        # temporarily scale A in place
out = expm(L.A)
scale!(L.A, inv(L.α.coeff))   # undo the scaling so L.A is unchanged
out

That would get rid of a matrix allocation. We might need to pull this off in other places as well.

Non-uniform grid spacing?

Is there a plan for implementing support for non-uniform grid spacing? I see the finite difference scheme used here is named after Fornberg. If I remember correctly, the Fornberg paper supports this?

Next steps for the documentation and tests for simple cases

This issue summarizes the main takeaways from the meeting that Steven and I had last Thursday:
i) We agreed that the first thing is to check whether the ghost nodes approach suggested in the Chris/Jesse thread (SciML/DifferentialEquations.jl#260) indeed maps onto the problem we are trying to solve.
ii) In order to do that, we think we should test this approach on some simple problem, ideally one with an analytical solution.

Based on that, we defined that our planned next steps are:

  • Finish writing linear_operators_overview.tex
  • Look for simple processes with known solutions for future tests of the new approach (especially a jump-diffusion)
  • Test the approach on a simple diffusion process (one-dimensional, no drift)

Add tests for central differences for a 2nd order operator

I think we should probably add a new test in https://github.com/JuliaDiffEq/DiffEqOperators.jl/tree/master/test called univariate_diffusion.jl or something like that. The steps will be to create simple tests (https://github.com/JuliaDiffEq/DiffEqOperators.jl/blob/master/test/derivative_operators_interface.jl already has much of the starting code).

For the notation below, I would test the generation with the following:

  • Simple $d X_t = d W_t$ discretization with central differences and reflecting boundaries at $\underline{x}$ and $\bar{x}$
  • Simple $d X_t = d W_t$ discretization with central differences and with an absorbing boundary at $\underline{x}$ and a reflecting boundary at $\bar{x}$. For the absorbing barrier, test $\underline{v}= 0$
  • Simple $d X_t = d W_t$ discretization with central differences and with a reflecting boundary at $\underline{x}$ and an absorbing boundary at $\bar{x}$. For the absorbing barrier, test $\bar{v} = 0$
  • $d X_t = \sigma d W_t$ discretization with central differences and reflecting boundaries at $\underline{x}$ and $\bar{x}$. Let $\sigma > 0$
  • $d X_t = \sigma(X_t) d W_t$ discretization with central differences and reflecting boundaries at $\underline{x}$ and $\bar{x}$. Ensure $\sigma(\cdot) > 0$ but make sure you test variation in the $\sigma(\cdot)$ function.
  • Simple $d X_t = \sigma(X_t) d W_t$ discretization with central differences and with an absorbing boundary at $\underline{x}$ and a reflecting boundary at $\bar{x}$. For the absorbing barrier, make sure to try $\underline{v} = 0$ and make sure there is variation in the $\sigma(\cdot)$ function.

Composing and denoting operators with non-derivative terms?

A reminder to install: https://chrome.google.com/webstore/detail/github-with-mathjax/ioemnmodlmafdkllaclgeombjnmnbima

My canonical example is the stationary problem with $d x_t = \mu dt + \sigma d W_t$, with a reflecting barrier at $x = 0$ and $x=1$. This underlying process pays dividends $b(x)$ while operative, and it is discounted at rate $r$.

The Bellman equation for the value of this asset is $u(x)$ where,
$$
r u(x) = b(x) + \mu u'(x) + \sigma^2/2 u''(x)
$$
Subject to $u'(0) = 0$ and $u'(1) = 0$ boundary values. Let the infinitesimal generator of the process be $\mathcal{A} \equiv \mu \partial_x + \frac{\sigma^2}{2} \partial_{xx}$; then the ODE could be written,
$$
r u(x) = b(x) + \mathcal{A} u(x)
$$
Rearranging,
$$
(r - \mathcal{A})u(x) = b(x)
$$
Let $\mathcal{B} \equiv r - \mathcal{A}$. I just want to verify that this is the right calculus, i.e. $\mathcal{B} u(x) = r u(x) - \mathcal{A} u(x)$.

So it seems there are two ways the ODE can be solved: discretizing the $\mathcal{A}$ operator to $A$ and solving $u = (r I - A)^{-1} b$, or alternatively discretizing the composition in $\mathcal{B}$ to $B$ and then solving $u = B^{-1} b$.

problems adding package

I am keen to play with this package and tried adding it using the standard approach but I get an error.

Can someone point out what might have gone wrong?


julia> Pkg.add("DiffEqOperators")
ERROR: unknown package DiffEqOperators
macro expansion at ./pkg/entry.jl:53 [inlined]
(::Base.Pkg.Entry.##1#3{String,Base.Pkg.Types.VersionSet})() at ./task.jl:335
Stacktrace:
 [1] sync_end() at ./task.jl:287
 [2] macro expansion at ./task.jl:303 [inlined]
 [3] add(::String, ::Base.Pkg.Types.VersionSet) at ./pkg/entry.jl:51
 [4] (::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#add,Tuple{String}})() at ./pkg/dir.jl:36
 [5] cd(::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#add,Tuple{String}}, ::String) at ./file.jl:70
 [6] #cd#1(::Array{Any,1}, ::Function, ::Function, ::String, ::Vararg{String,N} where N) at ./pkg/dir.jl:36
 [7] add(::String) at ./pkg/pkg.jl:117

New DiffEqOperator Interface for Array and Composite Operators

The current interface of AbstractDiffEqOperator is a bit messy and could use some rework. Some of the major problems are:

  • The definitions are not centralized, with some of the type hierarchy defined in DiffEqBase instead of DiffEqOperators.

  • AbstractDiffEqOperator is defined as a subtype of AbstractLinearMap from LinearMaps.jl. This is problematic because the combination and composition of operators will be of type LinearCombination and CompositeMap, which go beyond the AbstractDiffEqOperator type hierarchy. Also, the implementations in LinearMaps.jl for operator combinations and compositions are not optimal by our standards (e.g. multiplication requires allocating additional memory).

  • Time-dependent coefficients. Currently they are implemented as DiffEqScalar and included within array operators. Presumably we need to do the same for all operators, which can be cumbersome.

Below is my draft for a new DiffEqOperator interface that gets rid of the LinearMaps dependency and builds up the type hierarchy from scratch. The overall structure is similar to LinearMaps while at the same time tailored to the needs of JuliaDiffEq. Feedback is welcome :P

New DiffEqArrayOperator Interface Draft

Taking 2D operators seriously

First, I apologize for the presumptuous title; I couldn't think of a better one since the question is a bit broad. Please feel free to edit it to something more appropriate.

For prototyping algorithms for PDE-constrained optimization, I frequently need a simple discretization of two-dimensional elliptic partial differential operators on a uniform structured grid, for which FEniCS is overkill. (To demonstrate how simple: in MATLAB I often just use A = gallery('poisson',N)...)

I am aware that I can apply such an operator via the tensor trick Au = D1*u + u*D1 and use IterativeSolvers to solve the corresponding PDEs, but for various reasons (avoiding unnecessary inexactness in linear solves, precomputing factorizations for performance, including in block operators for one-shot or Newton-type methods for solving optimality systems), I need an actual matrix representation for A (ideally, in either a sparse or banded form).

Is there any way to obtain such a matrix from DiffEqOperators? Are there any (plans for) convenience constructors?
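
Since this issue was opened, concretization has been added: per the README above, any operator can be concretized as an Array, a BandedMatrix, or a sparse matrix. A sketch of building the 2D Poisson matrix that way, assuming sparse(Δ) returns the N×(N+2) interior stencil including ghost columns (check the docs for the exact convention):

using DiffEqOperators, LinearAlgebra, SparseArrays

n = 64
h = 1/(n+1)
Δ = CenteredDifference(2, 2, h, n)
D = sparse(Δ)[:, 2:end-1]                                  # interior block: homogeneous Dirichlet
A = kron(sparse(I, n, n), D) + kron(D, sparse(I, n, n))    # sparse 2D Laplacian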

Macros for "directional derivatives"

In 2D, doing directions is easy: A*u vs u*A. In 3D it's harder. What you instead need to do is, for example, slice u along z and apply A in a loop (if u is a 3D tensor). This technique handles all directions and scales to nD. It would be nice to make it easier on the user.

I proposed something like *(A,u,1), meaning "differentiate along axis 1 (x)", etc., which would do the following:

for i in 1:size(u,3)
  out[:,:,i] = A*view(u,:,:,i)   # store the derivative of each z-slice
end

and of course generalizing to nD (the library EllipsisNotation.jl could be helpful here). Taking it to the next level, maybe we can have a macro:

@dimensional Ax*u + Ay*u + Az*u + f(x,y,z)

which interprets the multiplications as being along axis 1,2,3. That's of course far down the line.
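
Before any macro, *(A,u,dim) can be prototyped with mapslices (dirmul is an illustrative name):

# Apply a 1D operator A along dimension `dim` of an nD array u.
dirmul(A, u, dim) = mapslices(v -> A*v, u, dims=dim)

# e.g. for a 3D tensor: ux = dirmul(A, u, 1); uy = dirmul(A, u, 2); uz = dirmul(A, u, 3)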

Add tests for central differences for a 2nd order operator with non-zero Dirichlet boundaries

This expands on #46 to add in the cases where things are non-zero. Not worth doing until a redesign of the interface for boundary values occurs:

  • Simple $d X_t = d W_t$ discretization with central differences and with an absorbing boundary at $\underline{x}$ and a reflecting boundary at $\bar{x}$. For the absorbing barrier, make sure to try $\underline{v}\neq 0$
  • Simple $d X_t = d W_t$ discretization with central differences and with a reflecting boundary at $\underline{x}$ and an absorbing boundary at $\bar{x}$. For the absorbing barrier, make sure to try $\bar{v}\neq 0$
  • Simple $d X_t = \sigma(X_t) d W_t$ discretization with central differences and with an absorbing boundary at $\underline{x}$ and a reflecting boundary at $\bar{x}$. For the absorbing barrier, make sure to try $\underline{v}\neq 0$ and make sure there is variation in the $\sigma(\cdot)$ function.

Partial derivatives

What is the easiest way to do partial derivatives with these operators?
Say I have an array F = [f(x,y) for x in xarr, y in yarr], and would like to approximate $\partial f/\partial x$ with some linear operator, i.e. F_x = Dx*F.

Should I map or broadcast along rows/columns (depending on $\partial_x$ or $\partial_y$)?
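
For a matrix F, neither map nor broadcast is needed; ordinary matrix multiplication applies the stencil along each direction. A minimal sketch, where the central-difference matrices below are illustrative (and their boundary rows are not proper one-sided stencils):

using SparseArrays

nx, ny = 50, 40
hx, hy = 1/nx, 1/ny
xarr = range(0, 1, length=nx); yarr = range(0, 1, length=ny)
F = [sin(2π*x)*cos(2π*y) for x in xarr, y in yarr]
d1(n, h) = spdiagm(-1 => fill(-0.5/h, n-1), 1 => fill(0.5/h, n-1))  # 1D central first difference
Dx, Dy = d1(nx, hx), d1(ny, hy)
Fx = Dx * F     # ∂f/∂x: stencil applied down each column (first index varies in x)
Fy = F * Dy'    # ∂f/∂y: stencil applied across each row (second index varies in y)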

Document the new discretization and boundary values

@stevenzhangdx @FernandoKuwer
I added in a placeholder .tex file in /docs/linear_operators_overview.tex which you can compile. The idea is that you can write up all of the algebra required to understand the discretization of the operators and boundary values. Remember that the main description is the SciML/DifferentialEquations.jl#260 issue. In particular, figuring out the B, Q, R, A, etc. matrices.

Instead of worrying about the solutions to the full PDEs, we should start by just looking at the stationary problems. The PDEs can come later once we figure out how the discretization works.

I also put the Julia code for at least one or two of them in the /docs/operator_examples directory. We should put all of the code there that implements our descriptions.

Lazy expm and expmv

Matrix exponentials are essential to exponential integrators. It would be nice if we came up with a lazy way to do this. I have no idea if anyone has worked this out.

How Dirichlet boundary conditions are imposed

I was putting together a notebook to explain the Diffusion example in DiffEqOperator and found that the boundary conditions are not imposed the way that I thought.

https://github.com/francispoulin/DiffEqOperators.jl/blob/master/docs/DiffusionEqTutorial.ipynb

It seems that the boundary conditions are not imposed correctly given that the endpoints of the domain are included as grid points.

One way to handle this is to pick a staggered grid so that you have points in the centre of each cell, but it might be that the BCs would still not be imposed correctly.

I would think that the best way of doing this is to assume that the endpoints are contained in the domain and then worry about whether you have Dirichlet, Neumann, or mixed BCs.

Has anyone thought about how to do this and can we fix this soon or is this a big task?

Lazy composed PDE Operators

A nice feature would be the ability to compose these lazy operators. This will be difficult, though. It would be killer to support intermediates of scalars, functions, and AbstractArrays, so that something like

u_t = a(t,x)u_xx + b(t,x)u_x

will compose to a single operator on the LHS, or

u_t = (D*u_x)_x

where D is some vector of diffusion constants.

The composition should include the ability to combine with I, so that I - gamma*L works.

Of course, we'd have to start with a simpler version, but this kind of composition would be absolutely awesome for using stiff integrators.
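
Until such composition exists, a matrix-free closure gives the same action for a stiff-solver RHS. A sketch where Dxx and Dx are any operators supporting *, and a, b are coefficient functions (all names illustrative):

# RHS for u_t = a(t,x)*u_xx + b(t,x)*u_x, composed into one callable operator
make_rhs(Dxx, Dx, a, b, x) = (u, p, t) -> a.(t, x) .* (Dxx*u) .+ b.(t, x) .* (Dx*u)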

L*u mutates u if L has Dirichlet1 boundary condition

using DiffEqOperators
using DiffEqOperators: convolve_BC_left!, convolve_BC_right!
N = 10
u = ones(N)
utmp = zeros(N)
lb = 2.2; rb = 3.3
L = DerivativeOperator{Float64}(2,2,1.9,N,:Dirichlet,:Dirichlet,BC=(lb,rb))

@show u
A_mul_B!(utmp, L, u)
@show u
u = ones(N)
convolve_BC_left!(utmp, u, L)
@show u
u = ones(N)
convolve_BC_right!(utmp, u, L)
@show u

u = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
u = [2.2, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 3.3]
u = [2.2, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
u = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 3.3]

This is not an issue if we enforce that u satisfies the boundary condition. Nevertheless, L*u having a mutating effect on u is probably not good, especially when L is used in a differential equation solver.

Using the convention proposed in #42 should make this a non-issue.

Define indexing

You should define getindex operations so that indexing works.

Fixing the conventions

The issues that @francispoulin pointed out are pretty gruesome. While things "work", what was discovered is that the operators we build change conventions left and right. This is because "the standard" operators tend to act this way: people naturally discretize differently for different BCs. I think @shivin9 was warning me about this, but I didn't realize until way too late...

To fix this issue, I triaged with @dextorious and @jagot and we tentatively have a plan:

  1. Always assume the discretization is on-grid by default.
  2. Always assume the boundaries are the first and last points by default.

Here's the damage. Note that Neumann is unchanged by these choices.

  • Currently Dirichlet0 and Dirichlet assume the boundaries are off the grid. We can instead take the values of the boundary to be u[1] and u[end]. Using this definition, the operator needs no hidden constant to operate correctly, since all of the information is in the vector. We define higher order methods via constant extrapolation of the end point value. Should we keep Dirichlet0 and document it, given its importance as the stencil matrix?
  • Periodic BCs would then have to operate a little differently. The current operator can be renamed to CenteredPeriodic since it's the center-grid discretization for periodic BCs. Then we need to modify the stencil so that at the boundaries it skips over the repeated point. For example, at the left end it should be u[end-2],u[1],u[2] instead of grabbing from the other end. It should check whether u[1] != u[end] and warn accordingly.
  • Document that Neumann works for center-grid discretization as well.
  • It's clear how Robin BCs are defined for this, but what we're doing probably doesn't follow it.
  • Rename the BC argument to derivative, since now in both Dirichlet and Robin the BC value comes from the end point of the array itself.

Why these choices?

  1. Having the end points in the grid makes for easier plotting.
  2. It makes Dirichlet with time dependence a lot simpler to implement and understand.
  3. The total memory cost is trivial.

This needs to get comments before action.

A simple roadmap for solving HJBE and optimal stopping problems

@ChrisRackauckas @FernandoKuwer
More later, but this should give a sense of where we will eventually want to go, adding on pieces to test each time. First, solve everything in the stationary setup (i.e. only a singleton in the time dimension, which leads to an ODE).

  • stationary, univariate diffusion with reflecting barriers, a uniform grid, and a constant discount rate. Posted as #44
  • add in absorbing barriers instead of reflecting barriers (especially an absorbing at the bottom and a reflecting at the top). Roughly shown in (10) and (11) of Finite Differences
  • add in a jump-diffusion process with a jump size as a function of the current state. The reflecting barriers there are tricky.
  • add in support for non-uniform grids. Discussed in detail in Finite Differences

In all cases, the easiest test is to look at the generated discretized matrix and to solve a simple HJBE, as discussed in (54) to (56) of Finite Differences

While this is the most important set of features for our immediate project, there are plenty of other useful features. First, look at the time varying versions, which generate a PDE:

  • Add in time-variation in all parameters/functions of the univariate jump-diffusion process (including boundaries). The assumption is that the final step the parameters have converged (and hence it is at the stationary level). See Section 2 of Finite Differences
  • Add in time-varying discount rates

Next, expand the set of stochastic processes

  • Add in an additional discretely-valued state variable which evolves according to a continuous-time Markov chain.
  • Add in 2 dimensional diffusion processes. Need to be careful with the upwind process there, since monotonicity is subtle.

full function for various boundary conditions

The full(A) functionality to retrieve the matrix representation of the operator needs to be updated to correctly accommodate the various boundary conditions, e.g. Dirichlet and Neumann.
For Dirichlet with a specific boundary condition it currently works as follows:

julia> A = LinearOperator{Float64}(2,2,1.0,10,:D1,:D1;bndry_fn=(u[1],u[end]));

julia> full(A)
10×10 Array{Float64,2}:
  1.84135   2.84135     1.84135       1.84135     1.84135   1.84135
 -1.84135  -3.84135    -0.841347      -1.84135    -1.84135  -1.84135
  0.0       1.0        -2.0            0.0         0.0       0.0    
  0.0       0.0         1.0            0.0         0.0       0.0    
  0.0       0.0         0.0            0.0         0.0       0.0    
  0.0       0.0         0.0           0.0         0.0       0.0    
  0.0       0.0         0.0            1.0         0.0       0.0    
  0.0       0.0         0.0           -2.0         1.0       0.0    
  0.0      -0.0136593  -0.0136593      0.986341   -2.01366   1.0    
  0.0       0.0136593   0.0136593      0.0136593   1.01366  -1.0    

with the matrix multiplication not getting the right values at the boundaries.

julia> res = A*u
401-element Array{Float64,1}:
  0.0392531  
 -0.00115466 
 -0.00111008 
 -0.00106763 
 -0.0010272  
 -0.000988661
 -0.000951915
 -0.000916862
 -0.00088341 
 -0.00085147 
            
 -1.54722e-6 
 -1.53094e-6 
 -1.51485e-6 
 -1.49893e-6 
 -1.48318e-6 
 -1.46761e-6 
 -1.45221e-6 
 -1.43698e-6 
 -0.00013916 

julia> full(A)*u
401-element Array{Float64,1}:
 -215.342      
  215.38       
   -0.00111008 
   -0.00106763 
   -0.0010272  
   -0.000988661
   -0.000951915
   -0.000916862
   -0.00088341 
   -0.00085147 
              
   -1.54722e-6 
   -1.53094e-6 
   -1.51485e-6 
   -1.49893e-6 
   -1.48318e-6 
   -1.46761e-6 
   -1.45221e-6 
    1.55873    
   -1.55887    

julia> res[3:398] ≈ (full(A)*u)[3:398]
true

julia> res ≈ full(A)*u
false

Investigating Neumann BC using ghost points

The ghost point approach gives lower accuracy on the boundary but a more accurate solution elsewhere compared to the one-sided Fornberg coefficient approach.
So we can think about implementing it later.

BC Overhaul

Long discussions with @dlfivefifty and @jlperla led to a big shift in the way we are doing BCs. The idea is outlined SciML/DifferentialEquations.jl#260 (comment) . For DiffEqOperators.jl to change to support this, we need to do the following:

  1. The DerivativeOperators should be the non-square A interior stencil calculations. All interior, no boundaries. full, sparse, banded, etc. should be defined on the stencil to make it easy to get matrices out.
  2. Lazy R should be created. That's super simple of course. And full/sparse/etc. here too.
  3. Lazy Q should be created for each of the FDM boundary types. So there should be a constructor FDMExtension(...) which takes in boundary types and spits out an operator which does this operation. Q can be an affine operator (see the sketch after this list). We'll need to define that such that (Qa,Qb) = full(Q). And full/sparse/etc. here too.
  4. In this form, v_t = L*v can be written as v_t = A*Qa*v + A*Qb.
  5. We should have a lazy L, dependent on the BCs, which is L*v = A*Qa*v + A*Qb.
  6. In this form, (L1+L2)*v is well-defined as (A1+A2)*Qa*v + (A1+A2)*Qb.
  7. We should rename DerivativeOperator to FDMDerivativeOperator and then FiniteDifference to FDMIrregularDerivativeOperator (or something like that). Both <: AbstractDerivativeOperator.

Every step of the way should get tests of course.
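
A minimal sketch of the affine Q from step 3, for Dirichlet-type extension (names illustrative; the real constructor would dispatch on boundary types):

# Q extends the interior vector v (length N) to length N+2:
# Q*v = Qa*v + Qb, where Qa pads with zero rows and Qb carries the boundary values.
struct AffineQ{T}
    left::T      # boundary value prepended on the left
    right::T     # boundary value appended on the right
end

(Q::AffineQ)(v::AbstractVector) = vcat(Q.left, v, Q.right)   # acts as Qa*v + Qb

# With the non-square interior stencil A (N × (N+2)): v_t = A * Q(v) == A*Qa*v + A*Qb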

Implement collect

collect in Julia v0.7 is basically the keyword for "un-lazify". For operators, it should return the most efficient un-lazy version among full, banded, and sparse. For example, for the derivative operators it should return banded matrices.

`BandedMatrix` constructor

Should extend the BandedMatrix constructor so that you can directly turn these operators into a banded matrix.

Lazy norm

#55 does a full norm which should probably be replaced by a lazy norm.
