BandedMatrices.jl

A Julia package for representing banded matrices.

Home Page: http://julialinearalgebra.github.io/BandedMatrices.jl/
License: Other
Depending on GPUArrays.jl means the tests are failing on 0.7-alpha. I tried SharedArray,
but there are issues with that: JuliaLang/julia#27389 and JuliaLang/julia#27388.
Nothing else that's mutable comes to mind at the moment. Maybe a BandedMatrix
with a BandedMatrix backend 🤪 (Not a crazy idea actually: occasionally you get banded matrices with a "skew", and this could represent them.)
This should be straightforward:
bandwidths(A::Adjoint) = reverse(bandwidths(parent(A)))
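As a sanity check on that identity, here is a minimal pure-Julia sketch (no package required; dense_bandwidths is a hypothetical helper, not package API) that scans a dense matrix for its bandwidths and confirms they reverse under adjoint:

```julia
# Compute (l, u) bandwidths of a dense matrix by scanning its nonzeros,
# then check that the adjoint swaps them.
function dense_bandwidths(A)
    l, u = 0, 0
    for j in axes(A, 2), i in axes(A, 1)
        if A[i, j] != 0
            l = max(l, i - j)   # distance below the diagonal
            u = max(u, j - i)   # distance above the diagonal
        end
    end
    (l, u)
end

B = [1 2 3; 0 4 5; 0 0 6]   # upper triangular: l = 0, u = 2
dense_bandwidths(B)          # (0, 2)
dense_bandwidths(B')         # (2, 0) — bandwidths reverse under adjoint
```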
This is supported in gbmm. Perhaps we support it as extra cases, for example:
# current
banded_matmatmul!(C::BandedMatrix{T}, tA::Char, tB::Char, A::BandedMatrix{T}, B::BandedMatrix{T})
# proposal
banded_matmatmul!(β::Int, C::BandedMatrix{T}, tA::Char, tB::Char, α::T, A::BandedMatrix{T}, B::BandedMatrix{T})
@randy3k Any thoughts/objections to this change in your code?
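For context, the extra α/β arguments would give the BLAS gemm-style contract C ← α·A·B + β·C. A dense-array sketch of that contract (scaled_matmul! is a hypothetical stand-in for illustration, not the package API):

```julia
# BLAS-style scaled update: overwrite C with α*A*B + β*C.
function scaled_matmul!(C, α, A, B, β)
    C .= α .* (A * B) .+ β .* C
    return C
end

A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]     # identity, so A*B == A
C = ones(2, 2)
scaled_matmul!(C, 2.0, A, B, 3.0)   # 2*A + 3*ones = [5 7; 9 11]
```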
Discussed in discourse.
Wikipedia: Cuthill-McKee algorithm
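The linked algorithm reorders a sparse symmetric matrix to reduce bandwidth: breadth-first search from a low-degree vertex, visiting neighbours in order of increasing degree. A self-contained sketch (function name and adjacency-list representation are my own choices, not package API):

```julia
# Cuthill–McKee ordering of an undirected graph given as adjacency lists.
function cuthill_mckee(adj::Vector{Vector{Int}})
    n = length(adj)
    deg = map(length, adj)
    order = Int[]
    visited = falses(n)
    while length(order) < n
        # start each component at an unvisited vertex of minimum degree
        start = argmin([visited[v] ? typemax(Int) : deg[v] for v in 1:n])
        visited[start] = true
        queue = [start]
        while !isempty(queue)
            v = popfirst!(queue)
            push!(order, v)
            # enqueue unvisited neighbours, lowest degree first
            for w in sort(filter(w -> !visited[w], adj[v]); by = w -> deg[w])
                visited[w] = true
                push!(queue, w)
            end
        end
    end
    order
end

# Path graph 1–2–3–4 is already optimally ordered, so the order is preserved.
cuthill_mckee([[2], [1, 3], [2, 4], [3]])   # [1, 2, 3, 4]
```

The reverse of this ordering (reverse Cuthill–McKee) is what is usually applied in practice.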
This was temporarily disabled as showarray has changed.
julia> using BandedMatrices
julia> A = brand(12,10,4,3)
12×10 BandedMatrix{Float64,Array{Float64,2}}:
 0.515614   0.847866    0.668239  5.75042e-5   ⋅          ⋅          ⋅          ⋅          ⋅           ⋅
 0.0271786  0.376856    0.854746  0.267271    0.950013    ⋅          ⋅          ⋅          ⋅           ⋅
 0.633681   0.4955      0.766189  0.60617     0.972881   0.868546    ⋅          ⋅          ⋅           ⋅
 0.0163846  0.286059    0.893334  0.432784    0.539194   0.702265   0.309379    ⋅          ⋅           ⋅
 0.918769   0.00809012  0.670235  0.955873    0.943607   0.514218   0.0639507  0.453154    ⋅           ⋅
  ⋅         0.699388    0.202257  0.29749     0.841451   0.153339   0.705461   0.268778   0.636174     ⋅
  ⋅          ⋅          0.629772  0.521645    0.839571   0.0830189  0.0573653  0.333635   0.838724    0.663034
  ⋅          ⋅           ⋅        0.3884      0.593774   0.782569   0.38906    0.724952   0.00687896  0.701035
  ⋅          ⋅           ⋅         ⋅          0.0997583  0.264544   0.364324   0.0849868  0.4499      0.436927
  ⋅          ⋅           ⋅         ⋅           ⋅         0.34214    0.722436   0.258858   0.625005    0.100361
  ⋅          ⋅           ⋅         ⋅           ⋅          ⋅         0.201614   0.163405   0.745637    0.680499
  ⋅          ⋅           ⋅         ⋅           ⋅          ⋅          ⋅         0.95437    0.552991    0.43266
julia> b = randn(12)
12-element Array{Float64,1}:
0.581609308542756
-0.06182422895766597
0.6883875840861988
-0.511903323191731
2.2619341532372976
-0.5498214114059174
1.2561663727738273
0.07375952101193864
0.4762735361511112
-0.42276151466803963
0.005729783930582276
0.5210936169002679
julia> A\b
ERROR: DimensionMismatch("matrix is not square: dimensions are (12, 10)")
Stacktrace:
[1] \(::BandedMatrix{Float64,Array{Float64,2}}, ::Array{Float64,1}) at /Applications/julia/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/LinearAlgebra.jl:216
[2] top-level scope at none:0
julia> A = Matrix(A)
12Γ10 Array{Float64,2}:
0.515614 0.847866 0.668239 5.75042e-5 0.0 0.0 0.0 0.0 0.0 0.0
0.0271786 0.376856 0.854746 0.267271 0.950013 0.0 0.0 0.0 0.0 0.0
0.633681 0.4955 0.766189 0.60617 0.972881 0.868546 0.0 0.0 0.0 0.0
0.0163846 0.286059 0.893334 0.432784 0.539194 0.702265 0.309379 0.0 0.0 0.0
0.918769 0.00809012 0.670235 0.955873 0.943607 0.514218 0.0639507 0.453154 0.0 0.0
0.0 0.699388 0.202257 0.29749 0.841451 0.153339 0.705461 0.268778 0.636174 0.0
0.0 0.0 0.629772 0.521645 0.839571 0.0830189 0.0573653 0.333635 0.838724 0.663034
0.0 0.0 0.0 0.3884 0.593774 0.782569 0.38906 0.724952 0.00687896 0.701035
0.0 0.0 0.0 0.0 0.0997583 0.264544 0.364324 0.0849868 0.4499 0.436927
0.0 0.0 0.0 0.0 0.0 0.34214 0.722436 0.258858 0.625005 0.100361
0.0 0.0 0.0 0.0 0.0 0.0 0.201614 0.163405 0.745637 0.680499
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.95437 0.552991 0.43266
julia> A\b
10-element Array{Float64,1}:
2.0220856901879043
-0.7755543365867119
0.12320906644714069
0.7697964508307981
0.004005824993302602
-0.8375737553555263
-0.3631581638887185
0.1523444365407198
0.12808400544193294
0.6872899542927047
Why not just Symmetric(B::BandedMatrix)?
It appears that LAPACK methods such as dsbtrd.f use two-dimensional indexing on the data array (ab), so if the LAPACK routines are provided the right integers that describe the sufficient data implied by the symmetry, then the type might not be necessary.
I'm not entirely sure about LAPACK calls, because there are also variables such as inca which may not be correct if the data isn't the right size.
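To illustrate the two-dimensional indexing LAPACK expects, here is a sketch of the 'U'-uplo symmetric band storage that routines like dsbtrd consume (sym_band_pack is a hypothetical helper written only to show the layout, not part of the package):

```julia
# LAPACK 'U' symmetric band storage: for a symmetric matrix with bandwidth k,
# entry A[i,j] with i ≤ j is stored at AB[k + 1 + i - j, j], so AB has k+1 rows.
function sym_band_pack(A, k)
    n = size(A, 1)
    AB = zeros(k + 1, n)
    for j in 1:n, i in max(1, j - k):j
        AB[k + 1 + i - j, j] = A[i, j]   # row k+1 of AB holds the diagonal
    end
    AB
end

# Tridiagonal symmetric example: superdiagonal on top row, diagonal below.
sym_band_pack([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0], 1)   # [0 1 1; 2 2 2]
```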
The BLAS/LAPACK routines are partially overloaded. It would be good to systematically overload all routines.
a=brand(1,10,0,9)
b=brand(10,10,255,255)
a*b # triggers @assert Cu == Au+Bu in gbmm.jl line 210
I was thinking that it might be easier to get a use-case working for DistributedArrays.jl than it is with GPUArrays.jl. We could do something simple like a finite-difference solve, but where the data is spread between different machines.
The current version seems to have problems with precompilation:
julia> versioninfo()
Julia Version 0.6.1
Commit 0d7248e2ff (2017-10-24 22:15 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
LAPACK: libopenblas64_
LIBM: libopenlibm
LLVM: libLLVM-3.9.1 (ORCJIT, haswell)
julia> Pkg.add("BandedMatrices")
INFO: Installing BandedMatrices v0.3.0
INFO: Package database updated
julia> using BandedMatrices
INFO: Precompiling module BandedMatrices.
^C
signal (15): Ended
while loading /home/.../.julia/v0.6/BandedMatrices/src/BandedMatrices.jl, in expression starting on line 80
It just hangs, and Ctrl+C doesn't stop Julia from using 100% of the CPU.
I've added documentation using Documenter.jl.
@andreasnoack I need access to BandedMatrices.jl settings to add a public key so that Travis can build the documentation.
Overloading a[:,:] to mean only the banded entries is confusing, so I think it should be changed to a[BandRange,BandRange]. a[:,:] = 0 would be allowed, as well as a[:,:] = M where M has zeros in the appropriate entries.
This is making BandedMatrix * Vector compilation very slow, due to generally_banded_matvecmul!
I think the whole banded matrix multiplication code should be rewritten.
We should change from
mutable struct BandedMatrix{T} <: AbstractBandedMatrix{T}
    data::Matrix{T} # l+u+1 x n (# of columns)
    ...
end
to a version parameterized on the storage container,
mutable struct BandedMatrix{T, CONTAINER<:AbstractMatrix{T}} <: AbstractBandedMatrix{T}
    data::CONTAINER # l+u+1 x n (# of columns)
    ...
end
so that the band data can live in, e.g., a GPU or distributed array.
Ideally this would be in Base, but GPUArrays.jl has started usage like
zeros(CLArray{Float32},(n,m))
for making a CLArray{Float32} that is n × m. Semantically I'm skeptical that this is the right choice (I prefer zero(CLArray{Float32},(n,m))), but I suppose we should follow the already established pattern and introduce:
zeros(BandedMatrix{Float64},(n,m),(l,u))
The first feature is supporting *(::BandedMatrix{T,<:GPUMatrix}, ::GPUVector).
Now that parameterized types are supported, I think the main thing to be done is to resolve JuliaGPU/GPUArrays.jl#114
That should just be a copy-and-paste job, as CLBLAS.jl already supports gbmv!.
@randy3k There's been a huge slowdown in compiling multiplication:
julia> using BandedMatrices
julia> A = brand(10,10,0,2);
julia> B = brand(10,10,1,1);
julia> @time A*B
6.308210 seconds (10.99 M allocations: 435.596 MiB, 3.14% gc time)
10×10 BandedMatrices.BandedMatrix{Float64}:
 ⋮  (banded entries elided)
In BandedMatrices v0.2.3 this was 0.093452 seconds (7.56 k allocations: 449.235 KiB)
One of the applications for general backends is infinite-dimensional arrays via InfiniteArrays.jl. This works fine when the number of rows is finite:
julia> _BandedMatrix(Vcat((1:∞)', Ones(1,∞)), 10, 1, 0)
BandedMatrix{Float64,Vcat{Float64,2,Tuple{Adjoint{Int64,InfiniteArrays.InfUnitRange{Int64}},Ones{Float64,2,Tuple{Int64,InfiniteArrays.Infinity}}}}} with indices Base.OneTo(10)×OneToInf():
 1.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    …
 1.0  2.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅   1.0  3.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅   1.0  4.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅   1.0  5.0   ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅   1.0  6.0   ⋅    ⋅    ⋅    ⋅    …
  ⋅    ⋅    ⋅    ⋅    ⋅   1.0  7.0   ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   1.0  8.0   ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   1.0  9.0   ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   1.0  10.0  …
But usually we also want the number of columns to be infinite. That is broken:
julia> _BandedMatrix(Vcat((1:∞)', Ones(1,∞)), ∞, 1, 0)
ERROR: MethodError: no method matching Int64(::InfiniteArrays.Infinity)
Closest candidates are:
Int64(::T<:Number) where T<:Number at boot.jl:725
Int64(::Union{Bool, Int32, Int64, UInt8, Int128, Int16, Int8, UInt128, UInt16, UInt32, UInt64}) at boot.jl:717
Int64(::Ptr) at boot.jl:727
...
Stacktrace:
[1] Int64(::InfiniteArrays.Infinity) at ./deprecated.jl:466
[2] convert at ./number.jl:7 [inlined]
[3] _BandedMatrix(::Vcat{Float64,2,Tuple{Adjoint{Int64,InfiniteArrays.InfUnitRange{Int64}},Ones{Float64,2,Tuple{Int64,InfiniteArrays.Infinity}}}}, ::InfiniteArrays.Infinity, ::Any, ::Int64) at /Users/sheehanolver/Projects/BandedMatrices.jl/src/banded/BandedMatrix.jl:25
[4] top-level scope at none:0
Hi,
I'm trying to use the BandedMatrices package and I can't set values in the symmetric case:
using BandedMatrices
a = BandedMatrix{Float64}(10,1,1)
fill!(a,0.0) # OK
BandedMatrices.inbands_setindex!(a,12.0,1,2) # OK
b = SymBandedMatrix{Float64}(10,1)
fill!(b,0.0) # KO: indexing not defined
BandedMatrices.inbands_setindex!(b,12.0,1,2) # KO: indexing not defined
The documentation does not say much about symmetric matrices, so I started to read the implementation, but it is not trivial to me…
What would be the correct way to set values in symmetric banded matrices?
Thank you for your help.
Laurent
I was discussing the banded generalized symmetric eigenvalue problem and realized that it is covered by the code here, which is great. However, I also realized that the style of the LAPACK wrappers here is a bit different from the base wrappers, since some, but not all, of them pass pointers. I think that is unfortunate, not least because it requires calling pointer(A) in a function. I assume it is to avoid the overhead of SubArrays, but has the overhead been verified? It should also have been reduced in 0.7, so maybe it is worth reconsidering. Anyway, just my thoughts. Feel free to dismiss this issue if you are certain the Fortran style is necessary here.
julia> using FillArrays
julia> kron(Diagonal([1,2,3]), Eye(3))
9×9 Diagonal{Float64,Array{Float64,1}}:
 1.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅   1.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅   1.0   ⋅    ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅   2.0   ⋅    ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅   2.0   ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅   2.0   ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   3.0   ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   3.0   ⋅
  ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅    ⋅   3.0
julia> using BandedMatrices
julia> kron(Diagonal([1,2,3]), Eye(3))
ERROR: MethodError: kron(::Diagonal{Int64,Array{Int64,1}}, ::Diagonal{Float64,Ones{Float64,1,Tuple{Base.OneTo{Int64}}}}) is ambiguous. Candidates:
kron(A::Diagonal, B::Union{RectDiagonal{T,Ones{T,1,Tuple{Axes}},Axes1}, Diagonal{T,Ones{T,1,Tuple{Axes}}}} where Axes1 where Axes where T) in BandedMatrices at /Users/solver/Projects/BandedMatrices.jl/src/interfaceimpl.jl:53
kron(A::Diagonal{T1,V} where V<:AbstractArray{T1,1}, B::Diagonal{T2,V} where V<:AbstractArray{T2,1}) where {T1<:Number, T2<:Number} in LinearAlgebra at /Users/solver/Projects/julia/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/diagonal.jl:383
Possible fix, define
kron(::Diagonal{T1<:Number,V} where V<:AbstractArray{T1<:Number,1}, ::Diagonal{T2<:Number,Ones{T2<:Number,1,Tuple{Axes}}} where Axes)
Stacktrace:
[1] top-level scope at none:0
This is ridiculously slow.
using BandedMatrices
A=brand(1000,1000,-2,2)
B=brand(1000,1000,2,2)
@time A*B # 1.39s
Consider this
julia> a = bones(5, 5, 1, 1)
5x5 BandedMatrices.BandedMatrix{Float64}:
1.0 1.0
1.0 1.0 1.0
1.0 1.0 1.0
1.0 1.0 1.0
1.0 1.0
julia> a[1, 1:2] = 2
ERROR: BoundsError
in setindex! at /Users/davide/Software/BandedMatrices.jl/src/BandedMatrices.jl:243
The BoundsError is actually taking place at https://github.com/ApproxFun/BandedMatrices.jl/blob/master/src/BandedMatrices.jl#L245.
I can prepare a PR.
Hi,
I almost got close to reinventing the wheel, but then had a look at METADATA and found this very interesting package.
In my work, I need to solve a banded system of equations. If you do not mind, I would like to contribute this functionality and open a WIP PR, so that you can track the progress and comment.
Thanks,
Davide
The following constructor from the Readme.md
no longer works:
BandedMatrix((-1=> 1:5, 2=>1:3), (n,m),(l,u))
(even after inserting the missing comma).
What is now the easiest way of constructing a banded matrix with given size and (constant) bands such as the classical finite difference stencil for the Laplacian? (Something like
BandedMatrix((-1=>1.0, 0=>-4.0, 1=>1.0), (n,n))
would be ideal...)
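Until the tuple-of-pairs constructor is fixed, one stopgap (dense, so not a BandedMatrix, but it shows the intended band contents) is the stdlib diagm, which accepts the same diagonal => value pairs:

```julia
using LinearAlgebra

# Classical constant-band stencil built densely from diagonal pairs;
# n = 5 is just for illustration.
n = 5
stencil = diagm(-1 => fill(1.0, n - 1),
                 0 => fill(-4.0, n),
                 1 => fill(1.0, n - 1))
```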
see my comment at e88cdf0#commitcomment-22741020
It would be great to add functions to compute the eigenvalues/eigenvectors with maximum real part, since BandedMatrices are often used to model linear elliptic partial differential equations. For now I use the KrylovKit package, which operates on AbstractMatrices; not sure if there could be something better in this package.
This is slower than it should be:
julia> n = 1_000_000; A = brand(n,n,1,1);
julia> @btime A[band(0)] .= 1;
3.620 ms (4 allocations: 144 bytes)
julia> @btime A.data[2,:] .= 1;
1.569 ms (5 allocations: 112 bytes)
Even worse for CuArray backends.
I think it's easy to fix via a BandStyle <: AbstractArrayStyle{1} override that looks at the MemoryLayout.
The current implementation uses rand, which may cause bad conditioning and failed tests. We should replace the instances of rand with consistent samples that are guaranteed to be well-conditioned.
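One hedged way to do that (a sketch of the idea, not a proposed package API) is to draw random in-band entries and then boost each diagonal entry past its row sum, so the matrix is strictly diagonally dominant and therefore invertible and well-conditioned:

```julia
# Random banded test matrix made strictly diagonally dominant.
function well_conditioned_banded(n, l, u)
    A = zeros(n, n)
    for j in 1:n, i in max(1, j - u):min(n, j + l)
        A[i, j] = rand()          # in-band entries in [0, 1)
    end
    for i in 1:n
        A[i, i] += 1 + sum(abs, A[i, :])   # guarantee strict dominance
    end
    A
end

A = well_conditioned_banded(10, 1, 1)
```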
I solve nonlinear PDEs of the form 0 = f(V, ∂ₓV, ∂ₓₓV) using finite-difference schemes. I usually use NLsolve as the nonlinear solver.
Critically, the Jacobian of the system is a banded matrix. I would love a way for DiffEqDiffTools to know that, so that it only needs to compute the relevant derivatives (and also stores the Jacobian in an efficient form). Basically what I'd like is a special DiffEqDiffTools.finite_difference_jacobian! signature for BandedMatrix (not sure where it should live, so also @ChrisRackauckas).
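The saving such a signature could exploit can be sketched directly (banded_fd_jacobian is hypothetical, not the DiffEqDiffTools API): columns whose indices differ by l+u+1 touch disjoint rows, so they can be perturbed simultaneously, and only l+u+1 evaluations of f are needed regardless of n:

```julia
# Finite-difference Jacobian for f with known banded structure (l, u).
function banded_fd_jacobian(f, x, l, u; ε = 1e-7)
    n = length(x)
    J = zeros(n, n)
    f0 = f(x)
    width = l + u + 1
    for group in 1:width
        xp = copy(x)
        for j in group:width:n
            xp[j] += ε                      # perturb a whole column group at once
        end
        df = (f(xp) .- f0) ./ ε
        for j in group:width:n, i in max(1, j - u):min(n, j + l)
            J[i, j] = df[i]                 # only in-band rows belong to column j
        end
    end
    J
end

# Tridiagonal test problem: 1-D second-difference residual, so l = u = 1.
f(v) = [(i == 1 ? 0.0 : v[i-1]) - 2v[i] + (i == length(v) ? 0.0 : v[i+1])
        for i in eachindex(v)]
J = banded_fd_jacobian(f, zeros(6), 1, 1)
```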
- HermTridiagonal{T}
- hbtrd! to return a HermTridiagonal{T}
Create an @inbands macro to replicate the behaviour of @inbounds for the case where a user knows that the operation is inside the bands. This way the performance of @inbands A[k,j] should be equivalent to
@inbounds A.data[A.u + k - j + 1, j]
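The index map that expansion relies on can be shown with a tiny mock type (MyBanded and inband_getindex are illustrative sketches, not the package's types):

```julia
# Band storage layout: a (l+u+1) × n data matrix whose entry for matrix
# position (k, j) sits at data[u + k - j + 1, j].
struct MyBanded
    data::Matrix{Float64}   # (l+u+1) × n
    l::Int                  # sub-diagonals
    u::Int                  # super-diagonals
end

inband_getindex(A::MyBanded, k, j) = @inbounds A.data[A.u + k - j + 1, j]

# 3×3 tridiagonal example: data rows are (super, diag, sub) bands.
A = MyBanded([0.0 1.0 1.0;      # superdiagonal (first slot unused)
              2.0 2.0 2.0;      # diagonal
              3.0 3.0 0.0],     # subdiagonal (last slot unused)
             1, 1)
inband_getindex(A, 2, 2)   # 2.0 (diagonal)
inband_getindex(A, 3, 2)   # 3.0 (subdiagonal)
```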
This would more closely reflect the current Base's construction of arrays.
When keywords are fast, perhaps this should be
BandedMatrix{Float64}(n, m; bandwidths=(l,u))
Diagonal * Banded, SymTridiagonal * Banded work (at least in dl/continuumarrays branch). Tridiagonal * Banded should also produce banded. Probably a simple fix.
This implementation is clearly not ideal:
#BandedMatrices.jl line 1023
*{T<:Number,V<:Number}(A::StridedMatrix{T},B::BLASBandedMatrix{V}) =
A*Array(B)
Negative bandwidths are now supported, but not bandwidths that result in no bands. Should this be supported?
There are no tests at the moment.
julia> using BandedMatrices
julia> n = 10_000_000; A = brand(n,n,3,3); b = randn(n); @time A*b;
Bus error: 10
This is surely an OpenBLAS issue, but I'll try with a different BLAS.
On Julia v1.2, using LinearAlgebra: has_offset_axes throws a warning, and then calls to it later cause an error. using Base: has_offset_axes works on 1.0, 1.1, and 1.2.
This should be straightforward by wrapping, e.g., A\b to be A[1-u:end,:]\b[1-u:end] when the upper bandwidth u is negative.
Both
SymBandedMatrix{Float64}(Zeros(4,4), 1)
and
sbrand(10,2)
yield the following error:
MethodError: no method matching BandedMatrices.SymBandedMatrix{Float64}(::Compat.UndefInitializer, ::Array{Float64,2}, ::Int64)
The tag name "v0.4" is not of the appropriate SemVer form (vX.Y.Z).
cc: @dlfivefifty
julia> LU = lufact(brand(10,10,1,1))
BandedMatrices.BandedLU{Float64}([2.29299e-314 2.40694e-314 β¦ 0.432928 0.0; 0.370261 0.586154 β¦ -0.273931 0.787497; 0.591802 0.620072 β¦ 0.669758 0.250051; 0.632051 -0.528174 β¦ 0.260353 0.311277], Int32[1, 3, 4, 5, 5, 7, 8, 8, 10, 10], 1, 1, 10)
julia> LU[:L]
ERROR: MethodError: no method matching getindex(::BandedMatrices.BandedLU{Float64}, ::Symbol)
This should be easier now that BandedMatrix supports general backends.
- SymBandedMatrix{T} <: AbstractBandedMatrix{T}
- BandedMatrix{T} and SymBandedMatrix{T}
- ssbmv
- sbtrd to return a SymTridiagonal{T}
- SubArray{T,<:SymBandedMatrix}