Comments (10)
The transforms all come from FastTransforms.jl; it's just a question of which transform is used.
Note the transforms are also used in ClassicalOrthogonalPolynomials.jl.
from approxfunbase.jl.
TBH I don't remember. I'm redesigning a lot of this in ClassicalOrthogonalPolynomials.jl. Maybe what you are trying to do works there already?
I need to differentiate and integrate on a 2D domain in Chebyshev tensor product basis. I believe ClassicalOrthogonalPolynomials.jl can't do that yet?
You are right it's not built in, but perhaps you can build the operators you want via:
julia> using ClassicalOrthogonalPolynomials, LazyBandedMatrices
julia> T,U = ChebyshevT(), ChebyshevU();
julia> x = axes(T,1); D = Derivative(x);
julia> D_x = KronTrav(U\D*T, U\T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
⋅ │ ⋅ 1.0 │ ⋅ 0.0 ⋅ │ ⋅ -0.5 ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼───────────────── …
⋅ │ ⋅ ⋅ │ ⋅ 0.5 ⋅ │ ⋅ 0.0 ⋅ ⋅ │ ⋅ -0.5 ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ 2.0 │ ⋅ ⋅ 0.0 ⋅ │ ⋅ ⋅ -1.0 ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ ⋅ │ ⋅ 0.0 ⋅ ⋅ ⋅ │ ⋅ -0.5 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 ⋅ │ ⋅ ⋅ 0.0 ⋅ ⋅ │ ⋅ ⋅ -1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 3.0 │ ⋅ ⋅ ⋅ 0.0 ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼───────────────── …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ ⋅ ⋅ │ ⋅ 0.0 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 ⋅ ⋅ │ ⋅ ⋅ 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 1.5 ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ 4.0 │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋮ ⋮ ⋮ ⋮ ⋱
julia> D_y = KronTrav(U\T, U\D*T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
⋅ │ 1.0 0.0 │ 0.0 0.0 0.0 │ 0.0 0.0 -0.5 ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼────────────────────── …
⋅ │ ⋅ ⋅ │ 2.0 0.0 0.0 │ 0.0 0.0 0.0 ⋅ │ 0.0 0.0 -1.0 ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ 0.5 0.0 │ ⋅ 0.0 0.0 0.0 │ ⋅ 0.0 0.0 -0.5 ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ 3.0 0.0 0.0 ⋅ │ 0.0 0.0 0.0 ⋅ ⋅ │ 0.0 0.0 -1.5 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ 1.0 0.0 0.0 │ ⋅ 0.0 0.0 0.0 ⋅ │ ⋅ 0.0 0.0 -1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ 0.5 0.0 │ ⋅ ⋅ 0.0 0.0 0.0 │ ⋅ ⋅ 0.0 0.0
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼────────────────────── …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ 4.0 0.0 0.0 ⋅ ⋅ │ 0.0 0.0 0.0 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ 1.5 0.0 0.0 ⋅ │ ⋅ 0.0 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 0.0 0.0 │ ⋅ ⋅ 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 0.5 0.0 │ ⋅ ⋅ ⋅ 0.0
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ 5.0 0.0 0.0 ⋅ …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ 2.0 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.5 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋮ ⋮ ⋮ ⋮ ⋱
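To make this concrete, here is a hedged sketch (not from the thread) of how one might actually apply these infinite operators: take a finite section and multiply it against coefficients stored in the block-triangular ordering. The `Block` indexing into `KronTrav` and the choice of `N` are assumptions based on the BlockArrays.jl API, not something verified above.

```julia
# Hypothetical usage sketch; the Block-range indexing of a KronTrav is an
# assumption based on BlockArrays.jl conventions, so treat it as illustrative.
using ClassicalOrthogonalPolynomials, LazyBandedMatrices, BlockArrays

T, U = ChebyshevT(), ChebyshevU()
x = axes(T, 1); D = Derivative(x)
D_x = KronTrav(U\D*T, U\T)

N = 5                                   # number of coefficient blocks to keep
Dx_N = D_x[Block.(1:N), Block.(1:N)]    # finite section of the infinite operator
c = randn(sum(1:N))                     # 2D coefficients in block-triangular order
dc = Dx_N * c                           # coefficients of ∂f/∂x in the U ⊗ U basis
```

The key point is that `D_x` maps T ⊗ T coefficients to U ⊗ U coefficients, so any further operations (e.g. adding `D_x` and `D_y`) already live in a common range basis.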
That looks interesting. Thanks for the help.
Btw I'd love to get Kronecker products working in ContinuumArrays.jl/ClassicalOrthogonalPolynomials.jl. If you felt like helping with that, I can walk you through the planned design.
Is the plan to replace ApproxFunOrthogonalPolynomials.jl with ClassicalOrthogonalPolynomials.jl?
Yes that is the plan.
- ContinuumArrays.jl is "functions-as-vectors". This is a more natural language for doing linear algebra, e.g., solving linear ODEs, taking subspaces of bases.
- ApproxFun.jl will be "functions-as-functions". It will call ContinuumArrays.jl for operations that are linear (e.g., solving ODEs, addition, etc.) so will be a thin wrapper.
- ClassicalOrthogonalPolynomials.jl will add support for orthogonal polynomials.
Note there's a WIP PR to make the change. The biggest hold-up is that a lot of functionality is missing in terms of analogues of ArraySpace and TensorSpace. Though this is moving quickly, and things that have been updated are much faster. E.g., building multiplication operators via Clenshaw is 10x faster in ClassicalOrthogonalPolynomials.jl: MikaelSlevinsky/FastTransforms#74
So, my plan at the moment is to replace all my current high-level ApproxFun use with FastTransforms.jl for the transforms to and from the two kinds of Chebyshev bases, and to replace all the derivatives, integrals, and conversions with matrix multiplications so I can preallocate the matrices. Whether I use ApproxFunOrthogonalPolynomials.jl or ClassicalOrthogonalPolynomials.jl to generate the matrices in the first place shouldn't matter too much, so switching over isn't such a high priority for me on that front.
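As a minimal sketch of that preallocation idea, assuming ApproxFun-style finite sections via `D[1:n, 1:n]` (the exact indexing and sparsity handling may differ in practice):

```julia
# Hedged sketch: build a finite derivative matrix once, then reuse it with
# in-place multiplication so the hot loop allocates nothing.
using ApproxFunOrthogonalPolynomials, LinearAlgebra

n  = 32
D  = Derivative(Chebyshev())
Dn = Matrix(D[1:n, 1:n])   # finite section, built and allocated once up front
c  = rand(n)               # Chebyshev-T coefficients of some function
dc = similar(c)
mul!(dc, Dn, c)            # dc holds coefficients in the range space of D
```

Note that `Derivative(Chebyshev())` maps into an ultraspherical range space, so `dc` are coefficients in that basis; a preallocated conversion matrix would be applied the same way to get back to Chebyshev-T if needed.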
However, I do have a few little things to add to FastTransforms.jl. For example, a method for the Padua transform that allows me to do something like Padua_transform(into, from, plan) so I can preallocate the output. Maybe some documentation/comments, too. And if you have any big changes planned for FastTransforms.jl, that would be quite interesting, since the transforms are the biggest limiting factor for my use case at the moment.
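For context, a hedged sketch of how the proposed in-place method might sit next to the existing FastTransforms.jl API; `Padua_transform(into, from, plan)` is the poster's proposed signature, and everything here is illustrative rather than the actual API:

```julia
# Illustrative only; check the FastTransforms.jl docs for the real signatures.
using FastTransforms

v = rand(15)             # values at the degree-4 Padua points: (4+1)(4+2)÷2 = 15
c = paduatransform(v)    # existing out-of-place transform: allocates the output

# The proposed addition would instead look something like (hypothetical):
#   Padua_transform(c, v, plan)   # reuse a preallocated `c` and a precomputed plan
```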
If you're curious, I'm the student working on PoincareInvariants.jl.
Actually, my plan doesn't quite work out, because the transform for Chebyshev polynomials is in ApproxFunOrthogonalPolynomials.jl, as I understand it? There's also a version in FastTransforms.jl, but you said it's not used?
So, one of the transforms will still come from ApproxFun I guess.