
arnoldimethod.jl's People

Contributors

abraemer, andreasnoack, dependabot[bot], fgerick, haampie, jaakkor2, krish8484, nymanlauri, ranocha, viralbshah, yingboma

arnoldimethod.jl's Issues

Stable orthogonalization

Currently we use modified Gram-Schmidt, which is known to be sufficient for explicitly restarted methods but can develop instabilities when used with implicit restarts.

The issue is that after sufficiently many restarts V' * V = I fails to hold to the required precision. Our method relies on this identity: we solve "Ax = λx" in the sense that x = Vy for some y such that Ax - λx ⟂ im(V); this means that V'AVy = λV'Vy, and hence V'AVy = λy. It is this last step that might fail to hold.

A better approach is the DGKS criterion / repeated classical Gram-Schmidt. Doing classical Gram-Schmidt twice is about as fast as doing modified Gram-Schmidt once, thanks to BLAS2 goodies :). See http://www.netlib.org/utk/people/JackDongarra/etemplates/node221.html and http://stoppels.blog/posts/orthogonalization-performance
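
As a rough sketch of the repeated classical Gram-Schmidt step with the DGKS criterion, written against plain LinearAlgebra rather than the package internals (the threshold η = 1/√2 is the usual textbook choice, not a value taken from this repository):

using LinearAlgebra

# Orthogonalize w against the orthonormal columns of V with classical
# Gram-Schmidt, repeating once when the DGKS criterion detects cancellation.
function dgks!(w::AbstractVector, V::AbstractMatrix; η = 1 / √2)
    before = norm(w)
    h = V' * w                 # BLAS2: all projection coefficients at once
    w .-= V * h                # BLAS2: subtract all components at once
    if norm(w) < η * before    # severe cancellation: orthogonality may be lost
        g = V' * w
        w .-= V * g
        h .+= g
    end
    return h, norm(w)
end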

Overly strict type constraint in partialeigen

In this line I think the type constraint is too tight. We only need some notion of size and eltype.

https://github.com/haampie/ArnoldiMethod.jl/blob/c1e12341cb8a2452a4a08ce90f4e0868e2a82050/src/eigvals.jl#L39

eigenvalues(A::AbstractMatrix{T}, tol = eps(real(T)))

A doesn't need to be an AbstractMatrix. This impacts LightGraphs, because we have linear operators that are implicitly defined in terms of other linear operators and are represented by a type that is not a subtype of AbstractMatrix.

Avoid type unstable eigenvalue computation

It turns out eigvals can return either complex or real numbers, as the test failure of #22 showed:

Evaluated: Complex{Float64}[-0.0829261+0.0im, -0.123207+0.0im, -0.372096+0.0im, 0.402597+0.0im, 0.477362+0.0im, 0.601788+0.0im, -0.700002+0.0im, -0.703339+0.0im, -0.712221+0.0im, 0.743262+0.0im, 0.913093+0.0im, 0.921769+0.0im, 1.14274+0.0im, 1.26877+0.0im, 1.33171+0.0im, 1.7648+0.0im, 1.79766+0.0im, 2.36594+0.0im] ≈ [-0.0829261, -0.123207, -0.372096, 0.402597, 0.477362, 0.601788, -0.700002, -0.703339, -0.712221, 0.743262, 0.913093, 0.921769, 1.14274, 1.26877, 1.33171, 1.7648, 1.79766, 2.36594]

The values look equal, but are of different type.

In principle this happens almost never, but it still means eigvals is type unstable, so it might be problematic in the general case as well.

With real arithmetic we should maybe just work with the Schur values and vectors, which would handle conjugate eigenpairs better and allow us to stay in the reals all the way.
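
For reference, LinearAlgebra.eigvals on a general real matrix has the same value-dependent return type, which is exactly the kind of instability at play here:

using LinearAlgebra

eltype(eigvals([1.0 1.0; 0.0 2.0]))    # Float64: all eigenvalues happen to be real
eltype(eigvals([1.0 -1.0; 1.0 1.0]))   # ComplexF64: the eigenvalues are 1 ± i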

Feature request: Allow an initial start-vector

Right now, the initial vector for the iteration seems to be chosen at random. I think Arpack had the option to select an initial vector. This would allow two things:

  1. If one has an idea of what the end result should look like, it could be supplied as the initial vector, which would lead to fewer iterations.
  2. Determinism in regression tests. For example GraphPlots.jl compares the output image with previously generated images. Right now it does not do that for spectral graph layouts, because without a fixed starting vector the output might look slightly different every time (see the sketch below for a possible stop-gap).
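
As a stop-gap for point 2: seeding Julia's default RNG before the call should make the randomly drawn start vector reproducible, assuming the package draws it from the global RNG (which I believe it does, but have not verified):

using Random, ArnoldiMethod, LinearAlgebra

A = SymTridiagonal(fill(2.0, 100), fill(-1.0, 99))       # any operator you care about

Random.seed!(1234)                                       # fixes the randomly drawn start vector
decomp, history = partialschur(A, nev = 6, which = SR())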

Use a Krylov-Schur implementation to make life easier

Suppose that we have an Arnoldi relation

AV = VH + hveₖ' = [V v]B   where B = [H; heₖ']

and some unitary matrix Q; then a similar Krylov relation would be

AVQ = VHQ + hveₖ'Q

If you update B ← [Q 0; 0 1]'BQ and V ← VQ, the relation becomes

AV = [V v]B

But B is no longer Hessenberg. Fortunately it is easy to restore via Householder reflections like this:

xxxx    ****    ****    ***x    ****    **xx    ****
xxxx    ****    ****    ***x    ****    **xx    ****
xxxx -> **** -> **** -> ***x -> **** ->  *xx ->  xxx
xxxx    ****    ****      *x      xx      xx      xx
xxxx       *       x       x       x       x       x
  B     B←BW₁   B←W₁B   B←BW₂   B←W₂B   B←BW₃   B←W₃B

The process above is just B ← [W 0; 0 1]'BW applied repeatedly. In fact the unitary matrix Z = QW₁⋯Wₙ then results in another Arnoldi relation with V ← VZ and B ← [Z 0; 0 1]'BZ.

Now the interesting bit is that we can pick Q as the Schur matrix in HQ = QR, where R is quasi upper triangular. We can freely reorder the diagonal blocks of R such that the Ritz values we don't want appear in the bottom right. Then we can simply drop the last columns of the new R and only afterwards restore the Hessenberg form.

This seems a lot easier than the convoluted purging strategies etc etc. Worth a shot.
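
A rough sketch of one such truncation step against plain LinearAlgebra (schur / ordschur), with the selection of wanted Ritz values left to the caller; restoring the Hessenberg form afterwards (the reflections pictured above) is omitted:

using LinearAlgebra

# Krylov-Schur truncation for an Arnoldi relation A*V = V*H + h*v*eₖ' of
# order m. `wanted` marks the Ritz values to keep and must not split a
# complex conjugate pair.
function krylov_schur_truncate(V, v, H, h, wanted::AbstractVector{Bool})
    m = size(H, 1)
    F = ordschur(schur(H), wanted)   # move wanted Ritz values to the leading block
    Q, R = F.Z, F.T
    k = count(wanted)
    Vk = V * Q[:, 1:k]               # new basis
    Rk = R[1:k, 1:k]                 # truncated (quasi) upper triangular block
    bk = h .* Q[m, 1:k]              # residual coefficients: A*Vk = Vk*Rk + v*bk'
    return Vk, Rk, bk
end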

Generalized EP: convergence sometimes lacking

An eigenvalue problem is solved with Arpack and ArnoldiMethod. The eigenvalues agree, the eigenvectors do not.

#####################################################
# unit_cube_modes
Vibration modes of unit cube of almost incompressible material.

Eigenvalues: [6.67261e-07, 6.72742e-07, 7.01002e-07, 7.03235e-07, 7.21761e-07, 7.34494e-07, 2.62599e-01, 2.62599e-01, 3.57453e-01, 3.57453e-01, 3.57453e-01, 3.60635e-01, 3.60635e-01, 3.60635e-01, 4.08373e-01, 4.08385e-01, 4.08385e-01, 4.61679e-01, 4.61679e-01, 4.61679e-01] [Hz]
[ Info: (3.60928e-12, 1.79179e-12)
[ Info: (2.66454e-15, 1.42247e-15)
[ Info: 1.8647224970419253e-10
#####################################################
# unit_cube_modes_arnoldimethod
Vibration modes of unit cube of almost incompressible material.

Eigenvalues: [6.67261e-07, 6.72742e-07, 7.01002e-07, 7.03235e-07, 7.21761e-07, 7.34494e-07, 2.62599e-01, 2.62599e-01, 3.57453e-01, 3.57453e-01, 3.57453e-01, 3.60635e-01, 3.60635e-01, 3.60635e-01, 4.08373e-01, 4.08385e-01, 4.08385e-01, 4.08385e-01, 4.61679e-01, 4.61679e-01] [Hz]
[ Info: (1.44905e+00, 7.83647e+00)
[ Info: (1.11022e-15, 9.80491e-01)
[ Info: 0.07090617444834676

The "info" prints orthogonality checks and the residual norm. Clearly the modes are not mass orthogonal for the ArnoldiMethod.jl computation.
The orthogonality checks begin on line https://github.com/PetrKryslUCSD/FinEtoolsDeforLinear.jl/blob/d44694b9829a4feb2755c89341ac5207f9e7d0ef/examples/dynamics/free_vibration/3-d/unit_cube_mode_examples.jl#L55
The residual is ten orders of magnitude bigger for ArnoldiMethod.jl.

The EP computation is implemented in https://github.com/PetrKryslUCSD/GEPHelpers.jl.git.

I may be misunderstanding something, but I tried to reproduce the example from the ArnoldiMethod.jl documentation exactly.

Steer convergence

For matrices with very dominant eigenvalues at one end of the spectrum, partialschur has trouble targeting eigenvalues on the other side.

For example:

julia> using ArnoldiMethod, LinearAlgebra
julia> A = Diagonal([1:0.001:2; 4:7]);
julia> round.(eigvals(partialschur(A, nev=10, which=SR())[1].R), digits=3)
10-element Array{Float64,1}:
 7.0
 6.0
 5.0
 4.0
 1.0
 1.001
 1.002
 1.003
 1.004
 1.005

I'm not sure what the best way is to deal with this issue.

If you removed them from the search subspace with purging techniques, they would soon enough enter the space again.

If you lock them in the search subspace so that these directions are deflated (which is what happens now), they might slow down convergence to the other eigenvectors, because the locked vectors reduce the size of the active search subspace; you would then have to increase maxdim to get the same rate of convergence as if the matrix were deflated explicitly.


Further, there is the question of when we know for sure that unwanted dominant eigenvalues have converged. The simplest criterion: after nev Ritz vectors have converged, let another one converge and check whether it is closer to the target than anything that has converged already. If so, continue; if not, we're done. (This is still a heuristic!)
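
In code, one reading of this heuristic is roughly the following (hypothetical helper, not package code; dist measures distance to the target):

# After `nev` Ritz values have converged, let one more converge and keep
# iterating only if it is closer to the target than the farthest value
# accepted so far; otherwise declare the computation done.
keep_iterating(converged, newest, nev, dist) =
    length(converged) < nev || dist(newest) < maximum(dist, converged)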


Another interesting approach would be to purge the unwanted Ritz vectors from the search subspace, but remember their Ritz values and then use these as shifts in the implicit restart, so that there is a filter removing these vectors. Maybe it's not unstable to do this implicit restart with exact eigenvalues as long as the corresponding Ritz vector has not converged? I don't know.

Partial Schur decomp does not hold?

> A = sprand(10_000, 10_000, 10 / 10_000) + 3I
> r = IRAM.restarted_arnoldi(A, 5, 30, 3, 0.00001, 100);
...
> vecnorm(A * r.Q[:,1:1] - r.Q[:,1:1] * r.R[1:1,1:1])
6.287699132134216e-10
> vecnorm(A * r.Q[:,1:2] - r.Q[:,1:2] * r.R[1:2,1:2])
0.35214245086088336

In complex arithmetic the second eigenvalue needs way more iterations to converge, but it does converge:

> B = convert(SparseMatrixCSC{Complex128,Int}, A);
> r = IRAM.restarted_arnoldi(B, 5, 30, 3, 0.00001, 200);
> vecnorm(B * r.Q[:,1:1] - r.Q[:,1:1] * r.R[1:1,1:1])
4.6068872080697494e-10
> vecnorm(B * r.Q[:,1:2] - r.Q[:,1:2] * r.R[1:2,1:2])
8.367884143667247e-6

Apply Givens rotations right away to the whole H matrix

In the implicit restart we have a structure

H11 H12
    H22

where the Givens rotations are applied to H22 and, via a callback, to H12. It seems to make more sense to just pass the full H matrix to the implicit restart routine, so that we can apply the Givens rotations to H12 and H22 at the same time when multiplying from the right.

--

In the locking procedure we could basically do the same:

H11 H12 H13
    H22 H23
        H33

When applying Givens rotations to H22, just apply them to H23 and H12 right away.

So basically pass the start and end positions of the H-block to which the Givens rotations are applied to the shift functions, rather than a view of H22.

  • Fix this in implicit restart (#13)
  • Fix this in locking once we have our own QR algorithm working
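
A minimal sketch of the idea for real matrices; the helper name and the particular rotation (zeroing a subdiagonal entry) are illustrative, not the package's internals:

# Apply one plane (Givens) rotation acting on rows/columns i and i+1 to the
# *full* matrix H, so that H12 and H23 are updated in the same pass instead
# of through a callback on a view of H22.
function apply_rotation_full!(H::AbstractMatrix{<:Real}, i::Int)
    r = hypot(H[i, i], H[i+1, i])
    r == 0 && return H                      # nothing to rotate
    c, s = H[i, i] / r, H[i+1, i] / r       # G = [c s; -s c] zeroes H[i+1, i]
    n = size(H, 1)
    for j in 1:n                            # G * H: rows i, i+1 of every column (H22 and H23)
        hi, hj = H[i, j], H[i+1, j]
        H[i, j], H[i+1, j] = c * hi + s * hj, -s * hi + c * hj
    end
    for k in 1:n                            # H * G': columns i, i+1 of every row (H12 and H22)
        hi, hj = H[k, i], H[k, i+1]
        H[k, i], H[k, i+1] = c * hi + s * hj, -s * hi + c * hj
    end
    return H
end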

Feature Request -- support for Lanczos

Great Package!
Having support for symmetric/Hermitian matrices (namely an implementation of Lanczos) would be extremely useful for quantum-physics-related problems.

Handle some edge cases better

  • An all-zeros matrix gives a zero norm of the Hessenberg matrix: < should be <= in the convergence criterion
  • Division by zero when computing eigenvectors of a quasi upper triangular matrix
  • The Sylvester equation can be very ill-conditioned with close eigenvalues; currently we check whether the equation is exactly singular, but we should probably also improve the situation where it is close to singular

Matrix-induced (semi-)inner product when B ≥ 0.

In the generalized eigenvalue problem Ax = λBx, where B' = B and B ≥ 0, it makes sense to obtain a Schur decomposition with the <x, y> = dot(x, B * y) (semi-)inner product.

Also, some care has to be taken to remove "infinite" eigenvalues: after the shift-and-invert transformation (A - σB)⁻¹Bx = θx, where θ = 1/(λ - σ), there are vectors x such that Bx = 0; transforming back via λ = σ + 1/θ would lead to λ = ∞.

It looks like ARPACK uses a rather daunting-looking "purification" process to remove these eigenvalues, but there is also reference [1], which seems easier.

[1] Meerbergen, Karl, and Alastair Spence. "Implicitly restarted Arnoldi with purification for the shift-invert transformation." Mathematics of Computation of the American Mathematical Society 66.218 (1997): 667-689.
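
A rough sketch of the orthogonalization change in the B-(semi-)inner product, assuming B is symmetric positive semidefinite; this is not the package's API, just an illustration of where dot(x, B * y) enters:

using LinearAlgebra

# One Gram-Schmidt step in the inner product <x, y> = dot(x, B * y): make w
# B-orthogonal to the B-orthonormal columns of V and B-normalize it.
function b_orthogonalize!(w, V, B)
    Bw = B * w
    h = V' * Bw                      # coefficients <vⱼ, w> in the B-inner product
    w .-= V * h                      # remove the components along V
    β = sqrt(abs(dot(w, B * w)))     # B-norm of the remainder (can be ~0 when B is singular)
    β > 0 && (w ./= β)
    return w, h, β
end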

Algorithmic roadmap

Open issues:

Finished:

  • Handle close-to-zero eigenvalues better; the stopping criterion is probably too strict for them now. (#72)

Improve the selection of shifts and more.

What I more or less get from ARPACK:

  • Expand the search subspace
  • Compute Ritz values + error bounds
  • Split into wanted / unwanted
    • Conjugate pair straddling the boundary? Move the pair to wanted, i.e. one shift less than standard.
  • Apply the convergence test to each Ritz value + error bound
    • Have unwanted Ritz pairs converged? Avoid shifting with them! Move them to the wanted set for now and increase the total number of eigenvalues searched for accordingly, so we converge to more eigenvalues than originally planned!
  • Do the implicit restart with the unwanted Ritz values.
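
A sketch of the wanted/unwanted split and the resulting shift set (hypothetical helper; the conjugate-pair handling at the boundary is omitted):

# Partition the Ritz values by the target ordering and use the unwanted,
# *unconverged* ones as shifts for the implicit restart. `bounds` are the
# Ritz error bounds; `by = abs` roughly corresponds to largest magnitude.
function select_shifts(ritz, bounds, nev, tol; by = abs)
    p = sortperm(ritz; by = by, rev = true)                  # wanted values come first
    unwanted = p[nev+1:end]
    return [ritz[i] for i in unwanted if bounds[i] > tol]    # skip converged unwanted values
end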

partial_schur not respecting argument nev

This is an unintuitive behavior for me given the way that eigs used to work.

julia> A = randn(100,21); A = A*A';

julia> schur_to_eigen(partial_schur(A, nev=8, which=LM())[1])[1]
9-element Array{Float64,1}:
 181.35175628108203
 174.42634344552079
 142.31044226550978
 132.80295154586017
 126.68141410737701
 121.75800101083811
 109.79068655370247
 104.68006588968174
 100.09676418940714

julia> schur_to_eigen(partial_schur(A, nev=12, which=LM())[1])[1]
21-element Array{Float64,1}:
 181.3517562810819
 174.42634344552076
 142.31044226550975
...

I expected partial_schur to return a Schur decomposition of order exactly nev, or for schur_to_eigen to take an nev argument as well.

Segmentation fault when asking for zero eigenvalues

By mistake I asked for zero eigenvalues (nev=0) and got a nice segmentation fault:

using ArnoldiMethod
using LinearAlgebra

N = 11
o = ones(N)
T = SymTridiagonal(-2o, o[2:end])

# partialschur(T, nev=0, which=SR()) # Works
# partialschur(T, nev=1, which=LR()) # Works
partialschur(T, nev=0, which=LR()) # Boom
signal (11): Segmentation fault: 11
in expression starting at /Users/jagot/work/projects/td-cis/time-propagation/arnoldi_bug.jl:13
getindex at ./array.jl:729 [inlined]
include_conjugate_pair at /Users/jagot/.julia/packages/ArnoldiMethod/5fDBS/src/run.jl:411 [inlined]
_partialschur at /Users/jagot/.julia/packages/ArnoldiMethod/5fDBS/src/run.jl:219
#partialschur#1 at /Users/jagot/.julia/packages/ArnoldiMethod/5fDBS/src/run.jl:103 [inlined]
#partialschur at ./none:0
unknown function (ip: 0x128481b29)
jl_fptr_trampoline at /Users/osx/buildbot/slave/package_osx64/build/src/gf.c:1864
do_call at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:323
eval_stmt_value at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:362 [inlined]
eval_body at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:759
jl_interpret_toplevel_thunk_callback at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:885
unknown function (ip: 0xfffffffffffffffe)
unknown function (ip: 0x11b1e4c8f)
unknown function (ip: 0x5)
jl_interpret_toplevel_thunk at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:894
jl_toplevel_eval_flex at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:764
jl_parse_eval_all at /Users/osx/buildbot/slave/package_osx64/build/src/ast.c:883
jl_load at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:826 [inlined]
jl_load_ at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:833
include at ./boot.jl:326 [inlined]
include_relative at ./loading.jl:1038
include at ./sysimg.jl:29
include at ./client.jl:403
jl_fptr_trampoline at /Users/osx/buildbot/slave/package_osx64/build/src/gf.c:1864
do_call at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:323
eval_stmt_value at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:362 [inlined]
eval_body at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:759
jl_interpret_toplevel_thunk_callback at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:885
unknown function (ip: 0xfffffffffffffffe)
unknown function (ip: 0x11c3a9bdf)
unknown function (ip: 0xffffffffffffffff)
jl_interpret_toplevel_thunk at /Users/osx/buildbot/slave/package_osx64/build/src/interpreter.c:894
jl_toplevel_eval_flex at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:764
jl_toplevel_eval at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:773 [inlined]
jl_toplevel_eval_in at /Users/osx/buildbot/slave/package_osx64/build/src/toplevel.c:793
eval at ./boot.jl:328
eval_user_input at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:85
macro expansion at /Users/osx/buildbot/slave/package_osx64/build/usr/share/julia/stdlib/v1.1/REPL/src/REPL.jl:117 [inlined]
#26 at ./task.jl:259
jl_apply at /Users/osx/buildbot/slave/package_osx64/build/src/./julia.h:1571 [inlined]
start_task at /Users/osx/buildbot/slave/package_osx64/build/src/task.c:572
Allocations: 13692126 (Pool: 13688883; Big: 3243); GC: 26
zsh: segmentation fault  julia

This is the line that dies: https://github.com/haampie/ArnoldiMethod.jl/blob/master/src/run.jl#L411

Maybe partialschur should throw an ArgumentError when nev<1?

Locked vectors overwritten in real arithmetic

julia> A = sprand(10_000, 10_000, 10 / 10_000) + 3I
julia> r = IRAM.restarted_arnoldi(A, 5, 30, 5, 0.00001, 100)
active = 2
...
active = 2
active = 2
active = 2
active = 2
active = 3 <---
active = 3
active = 2 <---
active = 2
active = 2
active = 2
active = 2

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Eigenvalues of rank deficient matrices are not sorted correctly.

If L is rank deficient (it is the combinatorial Laplacian of a graph), then the 0 eigenvalue appears among the LR() eigenvalues.

For example:

julia> L = sparse(lapl)
10×10 SparseMatrixCSC{Float64,Int64} with 40 stored entries:
  [1 ,  1]  =  2.0
  [5 ,  1]  =  -1.0
  [6 ,  1]  =  -1.0
  [2 ,  2]  =  3.0
  [3 ,  2]  =  -1.0
  [4 ,  2]  =  -1.0
  [7 ,  2]  =  -1.0
  [2 ,  3]  =  -1.0
  [3 ,  3]  =  3.0
  [6 ,  3]  =  -1.0
  [8 ,  3]  =  -1.0
  [2 ,  4]  =  -1.0
  [4 ,  4]  =  3.0
  [9 ,  4]  =  -1.0
  [10,  4]  =  -1.0
  [1 ,  5]  =  -1.0
  [5 ,  5]  =  3.0
  [8 ,  5]  =  -1.0
  [10,  5]  =  -1.0
  [1 ,  6]  =  -1.0
  [3 ,  6]  =  -1.0
  [6 ,  6]  =  4.0
  [9 ,  6]  =  -1.0
  [10,  6]  =  -1.0
  [2 ,  7]  =  -1.0
  [7 ,  7]  =  1.0
  [3 ,  8]  =  -1.0
  [5 ,  8]  =  -1.0
  [8 ,  8]  =  3.0
  [9 ,  8]  =  -1.0
  [4 ,  9]  =  -1.0
  [6 ,  9]  =  -1.0
  [8 ,  9]  =  -1.0
  [9 ,  9]  =  4.0
  [10,  9]  =  -1.0
  [4 , 10]  =  -1.0
  [5 , 10]  =  -1.0
  [6 , 10]  =  -1.0
  [9 , 10]  =  -1.0
  [10, 10]  =  4.0

julia> partialeigen(partialschur(L, which=LR())[1])
([1.05398e-15, 0.574619, 5.89222, 5.72277, 4.85151, 3.81628, 3.04859, 1.54485, 2.08427, 2.46491], [-0.316228 -0.294971  0.140312 0.371051; -0.316228 0.339792  -0.0655438 0.372631;  ; -0.316228 -0.135193  0.169911 -0.163748; -0.316228 -0.156412  0.330283 -0.162148])

BigFloat matrix in partialschur() does not work

MWE:

using ArnoldiMethod, LinearAlgebra, SparseArrays
function mwe()
A = spdiagm(
           -1 => -1.0*ones(9),
            0 =>  2.0*ones(10), 
            1 => -1.0*ones(9)
       );
B = spdiagm(
           -1 => -big(1.0)*ones(9),
            0 => big(2.0)*ones(10), 
            1 => -big(1.0)*ones(9)
       );
display(A)
decomp, history = partialschur(A, nev=10, tol=1e-6, which=SR());
display(decomp)

display(B)
decomp, history = partialschur(B, nev=10, tol=1e-6, which=SR());
display(decomp)
end

Output:


julia> mwe()
10×10 SparseMatrixCSC{Float64,Int64} with 28 stored entries:
  [1 ,  1]  =  2.0
  [2 ,  1]  =  -1.0
  [1 ,  2]  =  -1.0
  [2 ,  2]  =  2.0
  [3 ,  2]  =  -1.0
  [2 ,  3]  =  -1.0
  [3 ,  3]  =  2.0
  [4 ,  3]  =  -1.0
  [3 ,  4]  =  -1.0
  ⋮
  [8 ,  7]  =  -1.0
  [7 ,  8]  =  -1.0
  [8 ,  8]  =  2.0
  [9 ,  8]  =  -1.0
  [8 ,  9]  =  -1.0
  [9 ,  9]  =  2.0
  [10,  9]  =  -1.0
  [9 , 10]  =  -1.0
  [10, 10]  =  2.0
10×10 SparseMatrixCSC{BigFloat,Int64} with 28 stored entries:
  [1 ,  1]  =  2.0
  [2 ,  1]  =  -1.0
  [1 ,  2]  =  -1.0
  [2 ,  2]  =  2.0
  [3 ,  2]  =  -1.0
  [2 ,  3]  =  -1.0
  [3 ,  3]  =  2.0
  [4 ,  3]  =  -1.0
  [3 ,  4]  =  -1.0
  ⋮
  [8 ,  7]  =  -1.0
  [7 ,  8]  =  -1.0
  [8 ,  8]  =  2.0
  [9 ,  8]  =  -1.0
  [8 ,  9]  =  -1.0
  [9 ,  9]  =  2.0
  [10,  9]  =  -1.0
  [9 , 10]  =  -1.0
  [10, 10]  =  2.0
PartialSchur decomposition (Float64) of dimension 10
eigenvalues:
10-element Array{Complex{Float64},1}:
 0.08101405277100668 + 0.0im
 0.31749293433763753 + 0.0im
  0.6902785321094296 + 0.0im
   1.169169973996227 + 0.0im
   1.715370323453429 + 0.0im
   2.284629676546568 + 0.0im
   2.830830026003772 + 0.0im
  3.3097214678905678 + 0.0im
   3.682507065662358 + 0.0im
  3.9189859472289927 + 0.0im
ERROR: MethodError: no method matching gemv!(::Char, ::BigFloat, ::SubArray{BigFloat,2,Array{BigFloat,2},Tuple{Base.Slice{Base.OneTo{Int64}},UnitRange{Int64}},true}, ::SubArray{BigFloat,1,Array{BigFloat,2},Tuple{UnitRange{Int64},Int64},true}, ::BigFloat, ::SubArray{BigFloat,1,Array{BigFloat,2},Tuple{Base.Slice{Base.OneTo{Int64}},Int64},true})
Closest candidates are:
  gemv!(::AbstractChar, ::Float64, ::Union{AbstractArray{Float64,1}, AbstractArray{Float64,2}}, ::AbstractArray{Float64,1}, ::Float64, ::AbstractArray{Float64,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/blas.jl:565
  gemv!(::AbstractChar, ::Float32, ::Union{AbstractArray{Float32,1}, AbstractArray{Float32,2}}, ::AbstractArray{Float32,1}, ::Float32, ::AbstractArray{Float32,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/blas.jl:565
  gemv!(::AbstractChar, ::Complex{Float64}, ::Union{AbstractArray{Complex{Float64},1}, AbstractArray{Complex{Float64},2}}, ::AbstractArray{Complex{Float64},1}, ::Complex{Float64}, ::AbstractArray{Complex{Float64},1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/LinearAlgebra/src/blas.jl:565

UUID mismatch in Project.toml and JuliaRegistries/General Package.toml

]add ArnoldiMethod#master gives

   Updating git-repo `https://github.com/haampie/ArnoldiMethod.jl.git`
ERROR: UUID `3ad52dbc-8794-42d5-ba69-10f868172940` given by project file `C:\Users\jaakkor2\AppData\Local\Temp\jl_kECCTZ\Project.toml` does not match given UUID `ec485272-7323-5ecc-a04f-4719b315124d`

https://github.com/haampie/ArnoldiMethod.jl/blob/master/Project.toml#L2
has
uuid = "3ad52dbc-8794-42d5-ba69-10f868172940"
and
https://github.com/JuliaRegistries/General/blob/master/A/ArnoldiMethod/Package.toml#L2
has
uuid = "ec485272-7323-5ecc-a04f-4719b315124d"
This dates back to Aug 31, 2018.

Missing function name eigs

Hi @haampie, is there a reason this package doesn't export a function eigs satisfying the old interface?

It looks like the correct way to use it is

function eigs(A; kwargs...)
    schur_to_eigen(partial_schur(A; kwargs...)[1])
end

Is there something else to it?

Incorrect results

I noticed that sometimes the results obtained via ArnoldiMethod.jl are incorrect. I attach a minimal example below:

Download myerror.jld2 and run:

using JLD2
using LinearAlgebra
using KrylovKit
using ArnoldiMethod
using Test

@load "myerror.jld2" W
println("Real part of rightmost eigevalue")
println(string("Direct: ", maximum(real(eigvals(W)))))

# ArnoldiMethod.jl sometimes reports incorrect results
@testset "ArnoldiMethod.jl" begin
    for i = 1:30
        decomp, history = partialschur(W, nev=3, which=LR(), tol=1e-13);
        λ, V = partialeigen(decomp);
        @test history.nconverged > 1
        @test abs(maximum(real(λ)) - maximum(real(eigvals(W)))) < 1e-6
    end
end

# An alternative implementation seems to work for this problem
# Occasionally KrylovKit reports LAPACKException(1), but this is discussed in https://github.com/Jutho/KrylovKit.jl/issues/10
@testset "KrylovKit.jl" begin
    for i = 1:30
        λ, V, info = eigsolve(W, 3, :LR, tol=1e-13)
        @test info.converged > 1
        @test abs(maximum(real(λ)) - maximum(real(eigvals(W)))) < 1e-6
    end
end

on

Julia version: 1.0.1
ArnoldiMethod.jl version: 0.0.4.

partialschur finds only one of three degenerate eigenvectors

I have a matrix with a three-fold degenerate extremal eigenvalue, and ArnoldiMethod.partialschur finds only one of the corresponding eigenvectors, while e.g. Arpack.eigs finds all three. See the following MWE:

using Arpack
using ArnoldiMethod
using SparseArrays

ham_sparse = sparse(I, J, V)     # the numerical values for I, J and V are hosted at https://justpaste.it/5naju

λ, φ = Arpack.eigs(ham_sparse, which=:SR)
print(real(λ))

decomp, _ = partialschur(ham_sparse, nev=6, which=SR())
λ, φ = partialeigen(decomp)
print(real(λ))

prints

[-4.785165067232189, -4.785165067232181, -4.785165067232169, -4.458749101599544, -3.5927605849611726, -3.5927605849611552]
[-4.785165067232141, -4.603267050112509, -4.4587491015995315, -3.59276058496115, -3.502637833700079, -3.2498680277756797]

Improved & early locking of converged Ritz vectors

Current implementation

The current implementation of locking waits until a subdiagonal element of the Hessenberg matrix becomes small, and then does a local Schur decomposition of the upper-left block of the Hessenberg matrix.

Problem with this implementation

Very often Ritz vectors converge without an off-diagonal element getting small. Also, we know which Ritz vectors have converged at the end of an outer iteration of Arnoldi. We should use that information better!

Workaround

Use this locking technique by Sorensen [1]. Basic idea / my notes:

  1. For each converged pre-Ritz pair (y, θ) determine a Householder reflection W such that Wy = e₁τ with |τ| = 1.
    1. W'HWe₁ = e₁θ, so W'HW = [θ a'; 0 B], where B is dense (Hessenberg form is lost).
    2. Further eₖ'W = eₖ' + w' where ‖w‖₂ ≤ √2 |yₖ|.
  2. Post-multiply the Arnoldi relation AV = VH + hₖ₊₁,ₖvₖ₊₁eₖ' with this reflector
    1. We get AVW = VWW'HW + hₖ₊₁,ₖvₖ₊₁eₖ'W
    2. Using the above: A(VW) = (VW)[θ a'; 0 B] + hₖ₊₁,ₖvₖ₊₁eₖ' + hₖ₊₁,ₖvₖ₊₁w'
    3. We can drop the term hₖ₊₁,ₖvₖ₊₁w' because it's small according to the convergence criterion.
    4. We have to find a series of Householder reflections Y such that Y'[θ a'; 0 B]Y is Hessenberg again.
    5. The new Arnoldi relation is A(VWY) = (VWY)(Y'W'HWY) + hₖ₊₁,ₖvₖ₊₁eₖ'

Now I think that this step should be combined with the implicit shifts. So basically:

  1. Make a maxdim Arnoldi relation
  2. Compute the eigenvalues of H + eigenvectors.
  3. Partition into converged & not converged
  4. Create a matrix Q = I
  5. For each converged eigenvalue, construct W and Y and update Q ← Q * W * Y, H ← Y'W'HWY
  6. Do the implicit restart with a subset of unconverged eigenvalues, where Q gets updated with the Givens rotations.
  7. Only now compute V ← V * Q.
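
A sketch of the reflector from step 1 above for a real, normalized pre-Ritz vector y, i.e. a Householder W with Wy = τe₁ and |τ| = 1; the complex case needs the usual phase adjustment and is not handled here:

using LinearAlgebra

# Build a dense Householder reflector W = I - 2uu' such that W*y = τ*e₁.
function lock_reflector(y::AbstractVector{<:Real})
    τ = -copysign(norm(y), y[1])   # sign choice avoids cancellation in u[1]
    u = copy(y); u[1] -= τ         # u ∝ y - τ*e₁
    normalize!(u)
    W = I - 2 * (u * u')           # symmetric and orthogonal
    return W, τ
end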

Gotchas

Still have to find out how this is supposed to work with real arithmetic and complex conjugate eigenpairs.


Actually the above looks a lot like standard bulge chasing, but with large Householder reflectors... There's one "special" reflector W, and then the rest, Y, is about restoring the Hessenberg structure. The difference is that we place the eigenvalue in entry H[1,1] rather than H[end,end].

[1] Lehoucq, R. B., and Sorensen, D. C. "Deflation techniques for an implicitly restarted Arnoldi iteration." SIAM Journal on Matrix Analysis and Applications 17.4 (1996).

Number types

Currently things error when calling partial_schur with matrices of integer eltype.

Reuse some temporaries

AFAIK we only need an additional V, H and Q. Better just put them in a struct as well.

matrix-free example

Can someone help me find an example of how to compute eigenvalues in this repo using a matrix-free approach?
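
A hedged sketch of how a matrix-free operator can look: as far as I understand, partialschur only needs an object supporting size, eltype and LinearAlgebra.mul!(y, A, x), so something like the following should work (the operator type and its name are mine, not from the package docs):

using ArnoldiMethod, LinearAlgebra

# Matrix-free 1D Laplacian: y = A*x with A = tridiag(-1, 2, -1), never stored.
struct Laplacian1D
    n::Int
end

Base.size(L::Laplacian1D) = (L.n, L.n)
Base.size(L::Laplacian1D, i::Int) = L.n
Base.eltype(::Laplacian1D) = Float64

function LinearAlgebra.mul!(y::AbstractVector, L::Laplacian1D, x::AbstractVector)
    n = L.n
    for i in 1:n
        y[i] = 2x[i]
        i > 1 && (y[i] -= x[i-1])
        i < n && (y[i] -= x[i+1])
    end
    return y
end

decomp, history = partialschur(Laplacian1D(100), nev = 4, which = LM())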

Type instabilities

The current implementation feels slow, probably just because of type instabilities:

> A = sprand(10_000, 10_000, 10 / 10_000) + 3I
> @code_warntype IRAM.restarted_arnoldi(A, 5, 30, 3, 0.00001, 200);
Variables:
  #self# <optimized out>
  A::SparseMatrixCSC{Float64,Int64}
  min::Int64
  max::Int64
  converged::Int64
  ε::Float64
  max_restarts::Int64
  restarts <optimized out>
  new_active::Any
  H22::Any
  schur_form::Tuple{Union{AbstractArray{T,2} where T, Array{T,1} where T},Union{AbstractArray{T,2} where T, Array{T,1} where T},Union{AbstractArray{T,2} where T, Array{T,1} where T}}
  V_locked::Any
  H_right::Any
  H_above::Any
  value <optimized out>
  #temp#@_16::Int64
  n::Int64
  arnoldi::IRAM.Arnoldi{Float64}
  h::Array{Float64,1}
  min′::Int64
  active::Any
  V_new::Array{Float64,2}
  V_prealloc::Array{Float64,2}
  #temp#@_24::Int64
  J@_25 <optimized out>
  #temp#@_26 <optimized out>
  J@_27::Any
  #temp#@_28 <optimized out>
  #temp#@_29 <optimized out>
  J@_30::Tuple
  J@_31::Tuple
  J@_32::Any
  #temp#@_33 <optimized out>
  #temp#@_34 <optimized out>
  #temp#@_35 <optimized out>
  #temp#@_36 <optimized out>
  J@_37::Any
  #temp#@_38 <optimized out>
  #temp#@_39 <optimized out>
