Comments (5)
- "Why not just add CUDSS.jl as a dependency of LinearSolve.jl?"
Not even CUDA is a dependency of LinearSolve.jl. Obviously it makes no sense to make CUDSS a dependency when CUDA isn't even a dependency, and when most people don't have a CUDA-compatible card.
I was talking about LinearSolve's CUDA.jl extension.
In the end, it depends on what NVIDIA plans to do with cuDSS.
If it's merged with CUSOLVER and added as a standalone package in CUDA, we should move this interface to CUDA.jl.
Note that these factorizations still have bugs, and we frequently interact with NVIDIA about them.
I don't think it should be used by everyone without caution.
If we don't have "official" sparse factorizations in CUDA, there isn't much we can do in the linear algebra interface of CUDA.jl.
Implementing a generic GPU fallback for these cases would be challenging.
We also lack a generic CPU implementation in the Julia module SparseArrays...
from cudss.jl.
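For context, the methods under discussion are the factorization entry points that CUDSS.jl adds for CUDA's sparse types. A rough sketch of the intended usage, based on the CUDSS.jl README (requires an NVIDIA GPU with cuDSS available, so it is not runnable on a CPU-only machine):

```julia
using CUDA, CUDA.CUSPARSE   # CUDA.jl owns CuSparseMatrixCSR
using CUDSS                 # defines lu/cholesky/ldlt for it (the piracy in question)
using LinearAlgebra, SparseArrays

n = 100
A_cpu = sprand(Float64, n, n, 0.05) + I   # sparse matrix with a nonzero diagonal
b_cpu = rand(Float64, n)

A_gpu = CuSparseMatrixCSR(A_cpu)
b_gpu = CuVector(b_cpu)

# `lu` here is LinearAlgebra.lu: CUDSS.jl extends a function it doesn't own
# on a type it doesn't own, which is exactly the type piracy being debated.
F = lu(A_gpu)
x_gpu = F \ b_gpu
```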
- `lu`, `cholesky` and `ldlt` of a `CuSparseMatrixCSR` are not defined in CUDA.jl.
We don't have any sparse factorization in CUSPARSE.
CUSPARSE only provides sparse matrix products and sparse triangular solves.
-
cuDSS is not part of the CUDA toolkit; it's still a standalone package in preview. I created a package, CUDSS_jll.jl, because NVIDIA doesn't ship the library like the other CUDA libraries.
cuDSS has a different release cycle than the CUDA toolkit, and NVIDIA can break the API with each release.
It would be harder to maintain in CUDA.jl because we can't cut a new release whenever we want.
Why not just add CUDSS.jl as a dependency of LinearSolve.jl?
It would solve all your issues.
- Yes, that is the definition of type piracy and what the whole issue is about.
- "Why not just add CUDSS.jl as a dependency of LinearSolve.jl?" Not even CUDA is a dependency of LinearSolve.jl. Obviously it makes no sense to make CUDSS a dependency when CUDA isn't even a dependency, and when most people don't have a CUDA-compatible card. It will not solve all of the issues; it introduces many, many more. Giving hundreds of thousands of people an extra JLL when most don't have a GPU isn't an option, but shipping it with whatever ships the CUDA sparse array type is.
"I created a package, CUDSS_jll.jl, because NVIDIA doesn't ship the library like the other CUDA libraries."
Yes, a separate JLL makes sense. But the question is whether a fundamental type piracy can be fixed. It's fundamental because users who create a CuSparseMatrixCSR are missing a core operation, and there is no error message or warning from CUDA.jl to let the user know what's going on. Since CUDA.jl defines the rest of the linear algebra interface, leaving off this one function causes downstream usage issues. It's a red flag that clearly should get resolved somehow. Maybe for now it's fine as a separate package while cuDSS is still being developed, but why wouldn't it be part of CUDA's sparse array dispatches by default in the long run? Or if not, what's the plan to fix the type piracy?
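To make the terminology concrete: type piracy means extending a function you don't own for types you also don't own, so merely loading your code changes behavior for everyone in the session. A minimal CPU-only sketch, where `Base.sum` and `Vector{Float64}` stand in for `LinearAlgebra.lu` and `CuSparseMatrixCSR`:

```julia
module MyTypes
    struct MyMatrix                 # we own this type...
        data::Matrix{Float64}
    end
    Base.size(m::MyMatrix) = size(m.data)  # ...so extending Base for it is fine
end

# Type piracy: we own neither `Base.sum` nor `Vector{Float64}`, yet this
# method changes their behavior for every package in the session.
Base.sum(v::Vector{Float64}) = -1.0

println(sum([1.0, 2.0, 3.0]))  # -1.0 instead of 6.0: unrelated code breaks
println(sum([1, 2, 3]))        # Vector{Int} is unaffected: 6
```

CUDSS.jl's `lu(::CuSparseMatrixCSR)` follows the same ownership pattern, just with a useful definition instead of a broken one; the concern is ownership, not correctness.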
- CUDA.jl is already way too heavyweight; I don't think adding more libraries is a good way forward. In fact, we've been doing the opposite: moving things like cuDNN and cuTENSOR out and restricting CUDA.jl to only what the CUDA toolkit provides.
- What about a separate CUDASparseArrays package that is complete with the sparse array definitions and solvers? That would make it like SparseArrays, which has SuiteSparse to match it.
- "I was talking about LinearSolve's CUDA.jl extension."
Extensions cannot have dependencies in Julia v1.10. And if it was added as a requirement to the extension, that would break a lot of existing code.
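For readers unfamiliar with the mechanism: on Julia v1.10 a package extension is declared in the parent package's Project.toml, and it may only depend on the parent package plus the listed weak dependencies, so a CUDA extension cannot pull in CUDSS.jl on its own. A hypothetical LinearSolve-style excerpt (extension name and UUID are illustrative):

```toml
# Project.toml (excerpt, illustrative)
name = "LinearSolve"

[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

[extensions]
# Loaded automatically once CUDA is in the environment; on Julia v1.10
# the extension cannot declare extra dependencies of its own.
LinearSolveCUDAExt = "CUDA"
```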
Related Issues (17)
- Segmentation fault with cudssDataDestroy
- LDL' factorizations are failing with complex Hermitian matrices
- Add continuous integration
- Unable to reuse the analysis if we only store one triangle for symmetric factorizations
- TagBot trigger issue
- Add a method \ for CudssSolver
- Fail to call "cudss_set" and "cudss_get"
- Replace @ccall with @gcsafe_ccall when CUDA.jl v5.3.0 is released
- Add an option to reuse a handle when we create a CudssData
- Issue with iterative refinement
- Irreproducible results for the same linear system
- [documentation] Describe the keyword argument view in `ldlt` and `cholesky`
- Allow CEnum v0.5
- [documentation] Add an example with iterative refinement
- CUDSS.jl is broken with CUDA.jl 5.4
- Error with small matrices