
ApproxFunBase.jl's Introduction

ApproxFunBase.jl

Core functionality of ApproxFun


Join the chat at https://gitter.im/JuliaApproximation/ApproxFun.jl

ApproxFun.jl is a package for approximating functions. This package contains core functionality used by the various components of ApproxFun. It is possible to add spaces on top of this without the rest of ApproxFun, but this is undocumented for now.

ApproxFunBase.jl's People

Contributors

benjione, dependabot[bot], dlfivefifty, github-actions[bot], gyoshi, jishnub, jondeuce, juliatagbot, lucasaschenbach, macd, mikaelslevinsky, mjp98, mzaffalon, nsajko, putianyi889, wsshin


ApproxFunBase.jl's Issues

Why is `Fun` mutable?

Are there instances where one needs to modify the fields of a Fun, as opposed to the coefficients?

Error in iszero in Julia v1.10

The following works well in Julia 1.9 but fails in Julia 1.10

In Julia 1.9:

julia> Af = Fun(t -> [0  1; -10*cos(t)-1 -24-19*sin(t)],Fourier(0..2pi));
julia> iszero(Af)
false

The old definition of iszero is

iszero(f::Fun) = all(iszero,f.coefficients)

In Julia 1.10:

julia> Af = Fun(t -> [0  1; -10*cos(t)-1 -24-19*sin(t)],Fourier(0..2pi));
julia> iszero(Af)
ERROR: Override for ApproxFunBase.ArraySpace(ApproxFunBase.SumSpace{Tuple{CosSpace{PeriodicSegment{Float64}, Float64}, SinSpace{PeriodicSegment{Float64}, Float64}}, PeriodicSegment{Float64}, Float64}[Fourier(【0.0,6.283185307179586❫) Fourier(【0.0,6.283185307179586❫); Fourier(【0.0,6.283185307179586❫) Fourier(【0.0,6.283185307179586❫)])
Stacktrace:
 [1] error(s::String)
   @ Base .\error.jl:35
 [2] checkcanonicalspace(sp::ApproxFunBase.ArraySpace{ApproxFunBase.SumSpace{…}, 2, PeriodicSegment{…}, Float64, Matrix{…}})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:478
 [3] ApproxFunBase.ICanonicalTransformPlan(space::ApproxFunBase.ArraySpace{…}, v::Vector{…}, ip::Val{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:493
 [4] plan_itransform(sp::ApproxFunBase.ArraySpace{…}, v::Vector{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:497
 [5] itransform(S::ApproxFunBase.ArraySpace{…}, cfs::Vector{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:564
 [6] _values
   @ C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Fun.jl:428 [inlined]
 [7] values
   @ C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Fun.jl:426 [inlined]
 [8] iszero(f::Fun{ApproxFunBase.ArraySpace{…}, Float64, Vector{…}})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Fun.jl:776
 [9] top-level scope
   @ REPL[90]:1
Some type information was truncated. Use `show(err)` to see complete types.

The new definition of iszero is

iszero(f::Fun) = all(iszero, coefficients(f)) || all(iszero, values(f))

so the failure is caused by

julia> values(Af)
ERROR: Override for ApproxFunBase.ArraySpace(ApproxFunBase.SumSpace{Tuple{CosSpace{PeriodicSegment{Float64}, Float64}, SinSpace{PeriodicSegment{Float64}, Float64}}, PeriodicSegment{Float64}, Float64}[Fourier(【0.0,6.283185307179586❫) Fourier(【0.0,6.283185307179586❫); Fourier(【0.0,6.283185307179586❫) Fourier(【0.0,6.283185307179586❫)])
Stacktrace:
 [1] error(s::String)
   @ Base .\error.jl:35
 [2] checkcanonicalspace(sp::ApproxFunBase.ArraySpace{ApproxFunBase.SumSpace{…}, 2, PeriodicSegment{…}, Float64, Matrix{…}})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:478
 [3] ApproxFunBase.ICanonicalTransformPlan(space::ApproxFunBase.ArraySpace{…}, v::Vector{…}, ip::Val{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:493
 [4] plan_itransform(sp::ApproxFunBase.ArraySpace{…}, v::Vector{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:497
 [5] itransform(S::ApproxFunBase.ArraySpace{…}, cfs::Vector{…})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Space.jl:564
 [6] _values
   @ C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Fun.jl:428 [inlined]
 [7] values(::Fun{ApproxFunBase.ArraySpace{…}, Float64, Vector{…}})
   @ ApproxFunBase C:\Users\Andreas\.julia\packages\ApproxFunBase\sJBXX\src\Fun.jl:426
 [8] top-level scope
   @ REPL[91]:1
Some type information was truncated. Use `show(err)` to see complete types.

I wonder why the second test involving all(iszero, values(f)) is necessary!

I would very much appreciate it if this basic function could be fixed soon. Thanks in advance.
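For reference, here is a minimal workaround sketch, not the package's actual fix, that keeps the coefficient test and only trusts the values-based check when the inverse transform actually succeeds (only the Fun API already shown above is assumed):

function iszero_workaround(f::Fun)
    all(iszero, coefficients(f)) && return true
    try
        return all(iszero, values(f))   # may throw for ArraySpace on Julia 1.10, as above
    catch
        return false                    # conservatively treat the Fun as nonzero
    end
end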

Evaluation operator does not work with ArraySpaces

I am working with vector-valued functions, R -> R^n, and would like to use the Evaluation operator.

If we have f = Fun(x -> [1, 2]) we get the following problems:

  • Evaluation(0.5)*f gives an error, as the implementation tries to call zero(Vector{Float64}), which is undefined;
  • Evaluation at the endpoints, Evaluation(-1)*f and Evaluation(1)*f, gives the same error as before;
  • rangespace(Evaluation(space(f), 0.)) returns ConstantSpace(Point(0.)) instead of ArraySpace(ConstantSpace(Point(0.)), 2); and
  • Composition with finite operators: FiniteOperator([1. 2.; 3. 4.]) * Evaluation(space(f), 0.) errors (a combined reproduction sketch follows below).
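A combined reproduction sketch (assuming only the ApproxFun exports already used in the bullet points above; the expected ArraySpace range is my reading of the desired behaviour):

using ApproxFun

f = Fun(x -> [1, 2])                   # vector-valued Fun living in an ArraySpace

Evaluation(0.5) * f                    # errors: tries to call zero(Vector{Float64})
rangespace(Evaluation(space(f), 0.0))  # ConstantSpace(Point(0.0));
                                       # ArraySpace(ConstantSpace(Point(0.0)), 2) would be expected
FiniteOperator([1. 2.; 3. 4.]) * Evaluation(space(f), 0.0)   # also errors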

iszero type piracy causes a lot of invalidations

The definition here:

iszero(x::Number) = x == 0

changes the definition of iszero for all numbers (type piracy) and causes a huge number of invalidations:

 inserting iszero(x::Number) in ApproxFunBase at /home/kc/.julia/packages/ApproxFunBase/r0W9A/src/Fun.jl:506 invalidated:
   backedges: 1: superseding iszero(x) in Base at number.jl:40 with MethodInstance for iszero(::UInt32) (1 children)
              2: superseding iszero(x) in Base at number.jl:40 with MethodInstance for iszero(::UInt64) (2 children)
              3: superseding iszero(x) in Base at number.jl:40 with MethodInstance for iszero(::Int64) (6236 children) <---------------

You can see the effect of this by loading ApproxFunBase and noticing that the whole Julia session becomes sluggish while everything recompiles.
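One way to quantify the invalidations is SnoopCompile's documented snooping workflow (a sketch; the exact counts depend on the Julia and package versions):

using SnoopCompileCore
invalidations = @snoopr using ApproxFunBase   # record invalidations triggered while loading
using SnoopCompile
trees = invalidation_trees(invalidations)     # groups them by offending method, e.g. iszero(x::Number)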

What would be the best way to implement support for differentiation of a SubSpace?

At this moment

julia> S = ApproxFunBase.SubSpace(Fourier(), 1:10);

julia> Derivative(S)
DerivativeWrapper : Fourier(【0.0,6.283185307179586❫)|1:10 → SinSpace(【0.0,6.283185307179586❫)⊕CosSpace(【0.0,6.283185307179586❫)
 0.0  0.0  -1.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0  -2.0  0.0   0.0  0.0   0.0  0.0
 0.0  1.0   0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0   0.0  0.0  -3.0  0.0   0.0  0.0
 0.0  0.0   0.0  2.0   0.0  0.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0  -4.0  0.0
 0.0  0.0   0.0  0.0   0.0  3.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0   0.0  0.0
 0.0  0.0   0.0  0.0   0.0  0.0   0.0  4.0   0.0  0.0

even though the first dimension (the range) could also be finite.

Where would be the best place to interrupt the ApproxFunBase flow to create an Operator that is finite in both dimensions?
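As a stopgap (not an answer to the design question), one can truncate the wrapped operator by indexing, which ApproxFun operators support; the row count below is an assumption chosen by hand to cover the band:

using ApproxFun, ApproxFunBase

S = ApproxFunBase.SubSpace(Fourier(), 1:10)
D = Derivative(S)
Dmat = D[1:12, 1:10]   # ordinary finite matrix; 12 rows picked manually, not computed by the package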

Why won't `AbstractMatrix(::SubOperator)` return a `BandedBlockBandedMatrix`?

I see that the line to return such a matrix is commented out below. Why?

# isbandedblockbanded(P) && return BandedBlockBandedMatrix

I'd like to preallocate large operator matrices, and currently RaggedMatrix seems to be the default type. The issue is that RaggedMatrix uses too much memory. At the moment I construct a BandedBlockBandedMatrix manually, but that is not as convenient as simply calling AbstractMatrix(view(SomeOperator, FiniteRange, 1:N)), since I don't want to have to care about the exact memory layout myself.
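For context, the manual workaround mentioned above looks roughly like the following sketch; it assumes a BandedBlockBandedMatrix constructor accepting the sub-operator, which the commented-out line suggests but which I have not verified:

using ApproxFunBase, BlockBandedMatrices

V = view(SomeOperator, FiniteRange, 1:N)   # SomeOperator and N as in the text above
M = BandedBlockBandedMatrix(V)             # hoped-for preallocation in the banded-block-banded layout
# M = AbstractMatrix(V)                    # the generic call, which currently yields a RaggedMatrix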

Why are coefficients in tensor product bases represented the way they are?

If you have a basis of Chebyshev polynomials, for example, that looks like this:

[T00 T01 T02;
 T10 T11  0 ;
 T20  0   0 ]

These are stored like so

[T00, T01, T10, T02, T11, T20]
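For concreteness, here is a small sketch of that anti-diagonal ordering as I read it from the example above (not necessarily the exact internal routine):

# Flatten the upper-left triangle of a coefficient matrix along anti-diagonals:
# C[1,1], C[1,2], C[2,1], C[1,3], C[2,2], C[3,1], ...
function flatten_antidiagonals(C::AbstractMatrix)
    v = eltype(C)[]
    for d in 1:size(C, 1)          # the d-th anti-diagonal has d entries
        for i in 1:d
            push!(v, C[i, d - i + 1])
        end
    end
    return v
end

flatten_antidiagonals([1 2 4; 3 5 0; 6 0 0])   # [1, 2, 3, 4, 5, 6]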

My question is: why? Would it not be more natural to simply keep using the matrix? Or perhaps reshape the matrix to a vector, so columns are concatenated together?

In my case, using the matrix directly means I don't have a differentiation operator with 1e12 entries, but rather one with 1e6 entries, which is the difference between feasible and impossible. (The reduction occurs because, in the matrix representation, I can apply the 1D operators along each column/row, which amounts to a simple matrix-matrix multiplication.)

Of course, this comes at the cost of roughly doubling the memory for the coefficients, but that seems more than worth it if it saves many orders of magnitude in memory and time when building and applying operators.

*(c::Number, A::Operator)

This function, at https://github.com/JuliaApproximation/ApproxFunBase.jl/blob/master/src/Operators/general/algebra.jl#L574, doesn't quite work with dual numbers, particularly because dual(0,1) == 0 and dual(1,1) == 1, which do not behave like other field extensions, e.g. complex. (In the examples below, S = Chebyshev().)

julia> dual(0,1)*Evaluation(S, 1)
ZeroOperator : Chebyshev() → ConstantSpace(Point(1))
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  

julia> dual(eps(),1)*Evaluation(S, 1)
ConstantTimesOperator : Chebyshev() → ConstantSpace(Point(1))
 2.220446049250313e-16 + 1.0ɛ    2.220446049250313e-16 + 1.0ɛ  

julia> dual(1,1)*Evaluation(S, 1)
ConcreteEvaluation : Chebyshev() → ConstantSpace(Point(1))
 1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  

julia> dual(1+eps(),1)*Evaluation(S, 1)
ConstantTimesOperator : Chebyshev() → ConstantSpace(Point(1))
 1.0000000000000002 + 1.0ɛ    1.0000000000000002 + 1.0ɛ  
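A short illustration of the comparison behaviour behind this (using DualNumbers.jl's dual and relying on the equalities stated above):

using DualNumbers

c = dual(0, 1)   # 0 + 1ɛ
c == 0           # true, per the issue: equality ignores the ɛ-part, so any code path
                 # that special-cases c == 0 silently drops the derivative information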

Why does ZeroOperator have a rangespace?

A ZeroOperator maps any Fun to zero on a specific domain; the particular space on that domain doesn't matter.

For example, suppose there is an operator C from space S1 to space S2, and a ZeroOperator D from space S1 to space S3, where S2 and S3 are both spaces on [-1,1].

Executing C+D then triggers a conversion between S2 and S3, which can't succeed in general for arbitrary S2 and S3, even though C+D should simply be C AFAIK.

IMO the rangespace field of ZeroOperator should be replaced by a domain.

Accidental piracy of Base.real

# This creates ApproxFunBase.real, ApproxFunBase.eps and ApproxFunBase.dou
# which we override for default julia types
real(x...) = Base.real(x...)

The comment suggests that ApproxFunBase.real differs from Base.real, but it is actually identical, since ApproxFunBase imports Base.real. The definition therefore adds a catch-all method to Base.real that calls itself, which leads to stack overflows if it is used incorrectly.

E.g.

julia> ApproxFunBase.real === Base.real
true

julia> real(1,2)
ERROR: StackOverflowError:
Stacktrace:
 [1] real(::Int64, ::Int64) (repeats 79984 times)
   @ ApproxFunBase ~/.julia/packages/ApproxFunBase/gHG1c/src/LinearAlgebra/helper.jl:59

This should ideally be a MethodError.
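A non-pirating alternative sketch (an assumption about the intent, not the package's actual code): give the forwarding function its own name, so unsupported argument lists raise a MethodError instead of recursing.

# Hypothetical internal helper with a distinct name; forwarding to Base.real this way
# does not add a self-recursive catch-all method to Base.real itself.
_approxfun_real(x...) = Base.real(x...)

_approxfun_real(1)   # 1
Base.real(1, 2)      # MethodError, as it should be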

Are ProductFun points broken?

This doesn't feel right...

julia> S = Chebyshev(1..2) ⊗ Chebyshev(3..4)
Chebyshev(1 .. 2) ⊗ Chebyshev(3 .. 4)

julia> [p ∈ domain(factor(S, 1)) for p in points(S, 10, 10)[1]]
10×10 Matrix{Bool}:
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0

julia> [p ∈ domain(factor(S, 2)) for p in points(S, 10, 10)[2]]
10×10 Matrix{Bool}:
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0

ApproxFun+IntervalArithmetic: make ODE solving really rigorous

The following code successfully solves Airy's equation (it assumes x and D have been defined as the identity Fun and the Derivative operator on the chosen interval domain; the extra definitions are workarounds for bugs):

julia> Base.broadcast(::typeof(*), a::IntervalArithmetic.Interval, b::AbstractVector) = 
           map(x -> a*x, b)

julia> Base.Broadcast.broadcasted(::typeof(*), a::IntervalArithmetic.Interval, b::AbstractVector) = 
           map(x -> a*x, b)

julia> Base.length(d::UnitRange{<:IntervalArithmetic.Interval}) = Integer(maximum(d.stop.hi)-minimum(d.start.lo)+1)

julia> u = [Dirichlet(); D^2 - x] \ [[1, 0], 0]
Fun(Chebyshev([-1, -1]..[1, 1]),IntervalArithmetic.Interval{Float64}[[0.528647, 0.528648], [-0.522247, -0.522246], [-0.023162, -0.0231619], [0.0223854, 0.0223855], [-0.00557882, -0.00557881], [-0.000122104, -0.000122103], [9.37066e-05, 9.37067e-05], [-1.68122e-05, -1.68121e-05], [-2.4373e-07, -2.43729e-07], [1.63064e-07, 1.63065e-07]    [1.54639e-10, 1.5464e-10], [-1.89644e-11, -1.89643e-11], [-1.63813e-13, -1.63812e-13], [9.21385e-14, 9.21386e-14], [-9.91957e-15, -9.91956e-15], [-7.12466e-17, -7.12465e-17], [3.76658e-17, 3.76659e-17], [-3.63796e-18, -3.63795e-18], [-2.23433e-20, -2.23432e-20], [-1.12162e-20, -1.12161e-20]])

It happens to have worked:

julia> parse(BigFloat,"0.546136459064141770613483785351029091974067802242010192661484687977676021081218637004123196030567166414008915304513669838826570884623515686") in u(0) # true value from mathematica
true

However, this is mostly luck: the convergence criterion doesn't actually control the decay in the tail. To do this properly (and thereby have a priori guaranteed success) we would need to modify the convergence check at

while (k ≤ min(m+M,A_dim) || BLAS.nrm2(M,yp,1) > tolerance )

to properly bound the tail.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Seems like a bug in ApproxFunBase/src/Caching/almostbanded.jl, line 106

Debugging the error, I found:

almostbanded.jl line 81:  ds = domainspace(io)
almostbanded.jl line 106: bl = blocklengths(ds[k])

where k is an integer. However, in the case below, ds = domainspace(A) is something like Laurent(0.1🕒+1.0 + 0.0im)⨄Laurent(0.1🕒+-0.4999999999999998 + 0.8660254037844387im)⨄Laurent(0.1🕒+-0.5000000000000004 - 0.8660254037844385im), which means ds[1] is meaningless.

This has given me a headache for many days while trying to complete the example code from the SingularIntegralEquations package.
Here is my test code:
using ApproxFun, SingularIntegralEquations, Plots, ApproxFunBase

g1(x, y) = 0.5
N = 3
r = 0.1
cr = exp.(im*2π*(0:N-1)/N)             # circle centres: the N-th roots of unity
crl, crr = (1-2im*r)cr, (1+2im*r)cr    # only used by the commented-out Segment domain below
@show dom = Circle(cr[1], r) ∪ Circle(cr[2], r) ∪ Circle(cr[3], r)
#@show dom = ∪(Segment.(crl[1:end], crr[1:end])...)

@show sp = Space(dom)
cwsp = CauchyWeight(sp ⊗ sp, 0)
⨍ = DefiniteLineIntegral(dom)
@time G = GreensFun(g1, cwsp; method=:Cholesky)
A = ⨍[G]
ApproxFun.qr(A)   # error occurs here

Here A is a union InterlaceOperator of the form Laurent(0.1🕒+1.0 + 0.0im)⨄Laurent(0.1🕒+-0.4999999999999998 + 0.8660254037844387im)⨄Laurent(0.1🕒+-0.5000000000000004 - 0.8660254037844385im) → Laurent(0.1🕒+1.0 + 0.0im)⨄Laurent(0.1🕒+-0.4999999999999998 + 0.8660254037844387im)⨄Laurent(0.1🕒+-0.5000000000000004 - 0.8660254037844385im)

3+ dimensional calculus operators do not work, and a syntax proposal

Currently,

julia> Derivative(Chebyshev()^2, [1,0])
DerivativeWrapper : Chebyshev() ⊗ Chebyshev() → Ultraspherical(1) ⊗ Chebyshev()
 0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ⋯
 0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  2.0  0.0  0.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  2.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ⋱
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ⋱
  ⋮    ⋱    ⋱    ⋱    ⋱    ⋱    ⋱    ⋱    ⋱    ⋱   ⋱

julia> Integral(Chebyshev()^2, [1,0])
ERROR: Implement Integral(Chebyshev() ⊗ Chebyshev(),[1, 0])
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:35
 [2] DefaultIntegral(sp::TensorSpace{Tuple{Chebyshev{ChebyshevInterval{Float64}, Float64}, Chebyshev{ChebyshevInterval{Float64}, Float64}}, DomainSets.FixedIntervalProduct{2, Float64, ChebyshevInterval{Float64}}, Float64}, k::Vector{Int64})
   @ ApproxFunBase ~/.julia/packages/ApproxFunBase/VD9Rj/src/Operators/banded/CalculusOperator.jl:41
 [3] Integral(::TensorSpace{Tuple{Chebyshev{ChebyshevInterval{Float64}, Float64}, Chebyshev{ChebyshevInterval{Float64}, Float64}}, DomainSets.FixedIntervalProduct{2, Float64, ChebyshevInterval{Float64}}, Float64}, ::Vararg{Any})
   @ ApproxFunBase ~/.julia/packages/ApproxFunBase/VD9Rj/src/Operators/banded/CalculusOperator.jl:53
 [4] top-level scope
   @ REPL[94]:1

julia> Integral(Chebyshev()) ⊗ I
KroneckerOperator : Chebyshev() ⊗ ApproxFunBase.UnsetSpace() → Chebyshev() ⊗ ApproxFunBase.UnsetSpace()

It would be nice to have a syntax like Integral(Chebyshev()^2, [1,0]) for integration over the first variable, and similarly for DefiniteIntegral.

Furthermore, currently the following throws an error:

julia> Op = Integral(Chebyshev()) ⊗ I ⊗ I 
KroneckerOperator : Chebyshev() ⊗ ApproxFunBase.UnsetSpace() ⊗ ApproxFunBase.UnsetSpace() → Chebyshev() ⊗ ApproxFunBase.UnsetSpace() ⊗ ApproxFunBase.UnsetSpace()

julia> f3 = Fun(Chebyshev()^3, rand(20));

julia> Op * f3
ERROR: Implement Conversion from Chebyshev{ChebyshevInterval{Float64}, Float64} to TensorSpace{Tuple{Chebyshev{ChebyshevInterval{Float64}, Float64}, ApproxFunBase.UnsetSpace}, DomainSets.VcatDomain{2, Float64, (1, 1), Tuple{ChebyshevInterval{Float64}, DomainSets.FullSpace{Float64}}}, Union{}}
Stacktrace:

and the same happens for the Derivative.

Maybe I am misusing the package for something it is not intended for, but I want to use it to implement closed-form integration, marginalization, and derivatives of SOS polynomials, and I therefore need these operations in higher dimensions.

@jishnub Do you have any advice on where I should start to implement this?

Multiplication of derivatives is too permissive, even if spaces aren't compatible

Currently

julia> D1 = Derivative(Chebyshev()) * Derivative(Chebyshev())
ConcreteDerivative : Chebyshev() → Ultraspherical(2)
     4.0
         6.0
             8.0
                 10.0
                      12.0
                           14.0
                                16.0
                                     18.0
                                          ⋱

julia> D2 = Derivative(Ultraspherical(1)) * Derivative(Chebyshev())
ConcreteDerivative : Chebyshev() → Ultraspherical(2)
     4.0
         6.0
             8.0
                 10.0
                      12.0
                           14.0
                                16.0
                                     18.0
                                          ⋱

julia> D1 == D2 == (Derivative()^2 : Chebyshev())
true

Shouldn't the first one check for compatible domain and range spaces?
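A sketch of the kind of check the question asks for (hypothetical, not existing package code): composition could verify that the range of the right factor matches the domain of the left factor.

# Hypothetical guarded composition: this would reject
# Derivative(Chebyshev()) * Derivative(Chebyshev()), whose inner range
# Ultraspherical(1) does not match the outer domain Chebyshev().
function checked_mul(A::Operator, B::Operator)
    rangespace(B) == domainspace(A) ||
        throw(ArgumentError("incompatible spaces: rangespace(B) = $(rangespace(B)), domainspace(A) = $(domainspace(A))"))
    return A * B
end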

Multiplication of 2D functions not working properly

julia> f2_1 = Fun(Chebyshev()^2, rand(20));

julia> f2_2 = Fun(Chebyshev()^2, rand(30));

julia> (f2_1*f2_2)(0.2, 0.2) ≈ f2_1(0.2,0.2)*f2_2(0.2,0.2)
true

julia> (f2_1*f2_2)(0.1, 0.8) ≈ f2_1(0.1,0.8)*f2_2(0.1,0.8)
false # should be true

Multivariate function multiplication is not working properly

Subtracting two simple, seemingly compatible functions (f - g) results in a wrong answer upon evaluation (typeof(f) = Chebyshev ⨁ CosSpace ⨁ SinSpace and typeof(g) = Chebyshev)

using ApproxFun
Dc = Chebyshev(Interval(0,1.))
Df = Fourier(Interval(0,1.))
rx = Fun(t -> cospi(2*t), Df)
ry = Fun(t -> sinpi(2*t), Df)
twoπx = Fun(x -> 2π*x, Dc)
ell = sqrt(differentiate(rx)^2 + differentiate(ry)^2)
L = cumsum(ell)
println(L(1) == twoπx(1)) # expected value: true
println((L - twoπx)(1) ≈ 0) # therefore, expected value: true
println((L - twoπx)(1) == L(1)) # expected value: false

Doing this on a chalkboard we would find L := 2πx and so (L - twoπx) ≡ 0. However, the actual results from this example are: true, false, true.

Predicting number of coefficients in output of operator

I want to apply some operators like derivatives to Funs and would like to know how many coefficients and points I'll get out. I found that

function view(A::Operator, ::Type{FiniteRange}, jr::AbstractVector{Int})
    # number of rows kept: the last nonzero row over the requested columns
    cs = (isbanded(A) || isblockbandedbelow(A)) ? colstop(A, maximum(jr)) : mapreduce(j -> colstop(A, j), max, jr)
    view(A, 1:cs, jr)
end

seems to be where the number of output coefficients is decided. I tried to look into the colstop function but I can't seem to figure out what it does. The one that gets called in my case is:

function colstop(A::KroneckerOperator, k::Integer)
    k == 0 && return 0
    K = block(A.domaintensorizer, k)                      # block of the tensorized ordering containing column k
    st = blockstop(A.rangetensorizer, blockcolstop(A, K))
    # zero indicates above dimension
    min(size(A, 1), st)
end

Could you explain what the two functions shown above do, and why? Specifically, I want to understand how to predict how many output coefficients derivatives on a Chebyshev^2 basis will produce, and how many coefficients I will get when converting from a Chebyshev to an Ultraspherical basis.

Here's an MWE:

using FastTransforms: paduapoints
using ApproxFunOrthogonalPolynomials

np = 1000
ppoints = paduapoints(ceil(Int, (-3 + sqrt(1 + 8 * np)) / 2))

f(v) = cos(v[1]) * sin(v[2])
vals = f.(eachrow(ppoints))

CC = Chebyshev(-1..1)         ⊗ Chebyshev(-1..1)
UC = Ultraspherical(1, -1..1) ⊗ Chebyshev(-1..1)
CU = Chebyshev(-1..1)         ⊗ Ultraspherical(1, -1..1)
UU = Ultraspherical(1, -1..1) ⊗ Ultraspherical(1, -1..1)

CCtoUU = Conversion(CC, UU)

D1 = Derivative(CC, [1,0])
UCtoUU = Conversion(UC, UU)

coeffs = transform(CC, vals)
pf = Fun(CC, coeffs)

ncoefficients(pf)  # 1035
ncoefficients(D1 * pf)  # 990: why?
ncoefficients(UCtoUU * (D1 * pf))  # also 990
ncoefficients(CCtoUU * pf)  # 1035
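One way to see where the 990 comes from is to query colstop directly, per the view code quoted above (a sketch; it assumes applying the operator goes through that FiniteRange path, and that colstop is accessible from ApproxFunBase):

using ApproxFunBase: colstop

n = ncoefficients(pf)   # 1035
colstop(D1, n)          # last nonzero row over the final input column; per the view
                        # code above, this determines the number of output coefficients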
