nlesc-jcer / fortran_davidson
Davidson eigensolver implemented in Fortran
License: Apache License 2.0
Currently the maximum dimension of the subspace is set to half the number of columns of the matrix, which is too large.
Make this dimension an optional argument with a default value of 10 times the number of requested eigenvalues.
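A minimal sketch of the proposed default (Python, with a hypothetical helper name; the Fortran argument handling would differ):

```python
def davidson_max_dim(n_cols, lowest, max_dim_sub=None):
    """Pick the maximum subspace dimension.

    Hypothetical helper: when the caller does not supply `max_dim_sub`,
    default to 10 times the number of requested eigenvalues instead of
    half the number of columns, and never exceed the matrix size.
    """
    if max_dim_sub is None:
        max_dim_sub = 10 * lowest
    return min(max_dim_sub, n_cols)
```

For a 1000-column matrix and 3 requested eigenvalues this caps the subspace at 30 instead of 500.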
Add the library to the Research Software Directory (RSD).
Currently we check that the algorithm has converged using the following criterion:
norm(H * computed_eigenvectors - computed_eigenvalues * computed_eigenvectors) < tolerance
This condition is quite stringent and expensive to evaluate, especially for the matrix-free version.
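In NumPy terms the check amounts to the following (illustrative names; a small symmetric test matrix stands in for H):

```python
import numpy as np

# The residual of an approximate eigenpair (theta, v) is H @ v - theta * v;
# a pair is declared converged when the residual norm drops below `tolerance`.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
H = (H + H.T) / 2                      # symmetric test matrix
eigenvalues, eigenvectors = np.linalg.eigh(H)

# Column-wise H v - theta v; each column j is scaled by eigenvalues[j].
residual = H @ eigenvectors - eigenvectors * eigenvalues
errors = np.linalg.norm(residual, axis=0)
converged = np.all(errors < 1e-8)
```

Since exact eigenpairs are used here, every residual norm is (numerically) zero.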
When using ifort together with MKL, the following error is thrown:
forrtl: error (65): floating invalid
The subroutine throwing the error is DGEQRF.
See for instance: https://github.com/iomega/paired-data-form
The current implementation of the dense DPR correction uses m x m matrices to build the correction, but further optimization is possible for matrices whose off-diagonal elements are all zero.
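For reference, the diagonal-preconditioned-residue (DPR) correction only needs the diagonal of the matrix, so when the off-diagonal elements are zero the m x m build can be skipped entirely. A NumPy sketch (illustrative, not the Fortran code):

```python
import numpy as np

def dpr_correction(diag_H, theta, residual, eps=1e-12):
    """DPR correction vector: t_i = -r_i / (H_ii - theta).

    Only the diagonal of H enters, so no m x m matrix is built.
    `eps` guards against a (near-)zero denominator.
    """
    denom = diag_H - theta
    denom = np.where(np.abs(denom) < eps, eps, denom)
    return -residual / denom

diag_H = np.array([1.0, 2.0, 5.0])
t = dpr_correction(diag_H, 0.5, np.array([0.1, 0.2, 0.3]))
```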
The ritz_vectors are recomputed for the correction in the dense version. The correction function should receive the ritz_vectors as input. Also, the residues matrix is recomputed.
TODO: pass the ritz_vectors (and the residues) to the correction function instead of recomputing them.
In the current implementation of the matrix-free version, the QR factorization could change the sign of the vectors, making a block update unfeasible.
TODO:
In the current Davidson implementation, all the pairs of eigenvalues/eigenvectors are optimized until every pair converges. It would be desirable to deflate the pairs that have already converged and dynamically change the block size for the next iterations.
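A sketch of the deflation bookkeeping (NumPy, illustrative names): after each iteration, pairs whose residual norm is below tolerance are frozen, and only the remaining columns form the next block.

```python
import numpy as np

def split_converged(errors, tolerance):
    """Return indices of converged and still-active eigenpairs.

    The converged pairs can be deflated (frozen), and the block size
    for the next iteration shrinks to the number of active pairs.
    """
    errors = np.asarray(errors)
    converged = np.flatnonzero(errors < tolerance)
    active = np.flatnonzero(errors >= tolerance)
    return converged, active

conv, act = split_converged([1e-10, 3e-2, 5e-9, 0.4], tolerance=1e-8)
# Next iteration only optimizes the len(act) unconverged pairs.
```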
See this paper
See unified memory
Append the orthonormalized correction vector to the previous subspace and compute the new reduced Hamiltonian
H' = V^T H V
by computing only the matrix elements involving the new correction vector.
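The incremental update can be sketched as follows (NumPy, illustrative names): only one matrix-vector product H v_new and a few dot products are needed per new column, instead of rebuilding the whole projection.

```python
import numpy as np

def extend_projection(Hsub, V_old, v_new, H):
    """Extend H' = V^T H V by one orthonormal correction vector.

    Computes only the new row/column: one product H @ v_new, then
    dot products with the old basis. Assumes H is symmetric, so the
    new row is the transpose of the new column.
    """
    w = H @ v_new
    col = V_old.T @ w                  # off-diagonal block of the new column
    diag = v_new @ w                   # new diagonal element
    k = Hsub.shape[0]
    Hnew = np.empty((k + 1, k + 1))
    Hnew[:k, :k] = Hsub
    Hnew[:k, k] = col
    Hnew[k, :k] = col                  # symmetry of H
    Hnew[k, k] = diag
    return Hnew

# Check against the full projection on a random symmetric matrix.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 8)); H = (H + H.T) / 2
Q, _ = np.linalg.qr(rng.standard_normal((8, 4)))   # orthonormal basis
V_old, v_new = Q[:, :3], Q[:, 3]
Hsub = V_old.T @ H @ V_old
Hnew = extend_projection(Hsub, V_old, v_new, H)
ok = np.allclose(Hnew, Q.T @ H @ Q)
```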
Host the documentation on GitHub Pages (maybe?).
Replace the intrinsic matmul by the corresponding LAPACK/BLAS matrix-matrix and matrix-vector multiplications.
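In Python terms, this is the difference between a generic product and a direct BLAS call; a sketch using SciPy's BLAS bindings (assuming SciPy is available; the Fortran code would call dgemm/dgemv directly):

```python
import numpy as np
from scipy.linalg.blas import dgemm, dgemv

# Calling BLAS dgemm/dgemv directly is what replacing Fortran's
# intrinsic `matmul` amounts to; Fortran-ordered arrays avoid copies.
a = np.asfortranarray(np.arange(6.0).reshape(2, 3))
b = np.asfortranarray(np.arange(12.0).reshape(3, 4))
x = np.arange(3.0)

c = dgemm(alpha=1.0, a=a, b=b)     # matrix-matrix product, C = A B
y = dgemv(alpha=1.0, a=a, x=x)     # matrix-vector product, y = A x
same = np.allclose(c, a @ b) and np.allclose(y, a @ x)
```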
Should we separate the dense and matrix-free parts of the code? Having everything in one file makes it difficult to navigate (at least for me).
Also, some subroutines have a dense and a matrix-free version while I think they should be the same (for example, the DPR correction). We should simplify that.
It might be me, but I get the following error message when trying to run the tests:
UpdateCTestConfiguration from :/home/nico/Fortran_Davidson/build/DartConfiguration.tcl
Parse Config file:/home/nico/Fortran_Davidson/build/DartConfiguration.tcl
UpdateCTestConfiguration from :/home/nico/Fortran_Davidson/build/DartConfiguration.tcl
Parse Config file:/home/nico/Fortran_Davidson/build/DartConfiguration.tcl
Test project /home/nico/Fortran_Davidson/build
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
No tests were found!!!
The calculation of the residues in the free version:
Fortran_Davidson/src/davidson.f90
Lines 388 to 392 in 1ad2748
can be vectorised to avoid the calls to fun_mtx_gemv and fun_stx_gemv. The trick was inspired by the Votca lines:
It consists in first constructing a square matrix with the eigenvalues on the diagonal, e.g.:
! 6. Construction of lambda matrix (eigenvalues on the diagonal)
lambda = eye(size(V, 2), size(V, 2))
do j = 1, size(V, 2)
   lambda(j, j) = eigenvalues_sub(j)
end do
then followed by the residue calculation:
! 7. Residue calculation
rs = lapack_matmul('N', 'N', stxV, eigenvectors_sub)
guess = lapack_matmul('N', 'N', rs, lambda)
deallocate(rs)
rs = lapack_matmul('N', 'N', mtxV, eigenvectors_sub) - guess
do j = 1, lowest
   ! norm of the j-th residue column
   errors(j) = norm(rs(:, j))
end do
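The same trick in NumPy terms (illustrative names): the residue matrix A Y - S Y Λ is built with matrix-matrix products only, and matches the column-by-column residues r_j = A y_j - θ_j S y_j.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # stands in for mtx
S = np.eye(n)                                        # stands in for stx (overlap)
theta, Y = np.linalg.eigh(A)
theta, Y = theta[:k], Y[:, :k]

Lam = np.diag(theta)                  # lambda matrix, eigenvalues on the diagonal
R = A @ Y - (S @ Y) @ Lam             # vectorised residues, pure gemm chain

# Column-by-column reference: r_j = A y_j - theta_j * S y_j
R_ref = np.column_stack([A @ Y[:, j] - theta[j] * (S @ Y[:, j]) for j in range(k)])
errors = np.linalg.norm(R, axis=0)    # one residue norm per eigenpair
match = np.allclose(R, R_ref)
```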
We should replace the QR by a Gram-Schmidt orthogonalization and do an incremental update of the projected matrix. That speeds up the code quite a bit in the C++ implementation.
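A minimal sketch of the Gram-Schmidt alternative (NumPy, illustrative names): instead of re-factorizing the whole subspace with QR each iteration, orthonormalize only the new correction vector against the existing basis.

```python
import numpy as np

def mgs_append(V, t, drop_tol=1e-12):
    """Orthonormalize `t` against the columns of `V` (modified Gram-Schmidt)
    and append it, avoiding a full QR of the subspace each iteration.
    Returns V unchanged if `t` is numerically in the span of V.
    """
    t = t.astype(float).copy()
    for j in range(V.shape[1]):
        t -= (V[:, j] @ t) * V[:, j]
    nrm = np.linalg.norm(t)
    if nrm < drop_tol:
        return V
    return np.column_stack([V, t / nrm])

V = np.array([[1.0], [0.0], [0.0]])
V2 = mgs_append(V, np.array([1.0, 1.0, 0.0]))
orth = np.allclose(V2.T @ V2, np.eye(2))
```

Unlike a fresh QR, this never flips the sign of the columns already in the basis, which also addresses the block-update problem mentioned above.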
Currently the matrix to be diagonalized and the optional matrix for the generalized eigenvalue problem are real dense matrices.
The interface to call the Davidson eigensolver is:
generalized_eigensolver(mtx, eigenvalues, ritz_vectors, lowest, method, max_iters, &
tolerance, iters, max_dim_sub, stx)
We can overload the function call to allow mtx and the optional argument stx to be functions, giving a matrix-free version of the algorithm.
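The idea of the overload can be sketched in Python (illustrative, not the Fortran API): the solver only ever needs matrix-vector products, so accepting either a dense matrix or a function x -> H x behind one callable gives the matrix-free version.

```python
import numpy as np

def as_operator(mtx_or_fun):
    """Accept either a dense matrix or a function x -> H @ x.

    Hypothetical wrapper: a dense matrix is closed over and applied
    via @, while a callable is passed through unchanged.
    """
    if callable(mtx_or_fun):
        return mtx_or_fun
    mtx = np.asarray(mtx_or_fun)
    return lambda x: mtx @ x

# Dense and matrix-free forms of the same diagonal operator agree.
H = np.diag([1.0, 2.0, 3.0])
dense_op = as_operator(H)
free_op = as_operator(lambda x: np.array([1.0, 2.0, 3.0]) * x)
x = np.ones(3)
agree = np.allclose(dense_op(x), free_op(x))
```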