Comments (4)
Ok, this worked. Thank you!
from tensoroperations.jl.
The setup is done through a package extension that depends on both cuTENSOR and CUDA, such that it only gets loaded once there has been an import or using statement for both of these at some point. My best guess is to just add import cuTENSOR (and thus add it as a dependency to your package).
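For reference, this is roughly how such an extension is wired up in a Project.toml (a sketch with placeholder UUIDs, not TensorOperations' actual manifest): the extension module is only loaded after every package listed for it under [weakdeps] has been imported in the session.

```toml
# Sketch of a package-extension declaration (UUIDs are placeholders):
[weakdeps]
CUDA = "<CUDA.jl UUID>"
cuTENSOR = "<cuTENSOR.jl UUID>"

[extensions]
# The extension loads only once both weak dependencies are imported.
TensorOperationscuTENSORExt = ["CUDA", "cuTENSOR"]
```

Adding import cuTENSOR to the downstream package makes cuTENSOR a hard dependency there, which in turn guarantees the extension is triggered.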
Ok, I somehow didn't paste the using cuTENSOR
line; I edited my original post. Now I am even more confused: the behavior I described above happens on Julia 1.9 on my local machine, while on 1.10 it seems fine. Yet during the automerge run in the Julia General registry I get
✓ Strided
✓ NNlib → NNlibCUDAExt
✓ TensorOperations
✓ cuTENSOR
✓ TensorOperations → TensorOperationscuTENSORExt
✓ TensorOperations → TensorOperationsChainRulesCoreExt
✗ SpinGlassTensors
82 dependencies successfully precompiled in 88 seconds
ERROR: The following 1 direct dependency failed to precompile:
SpinGlassTensors [7584fc6a-5a23-4eeb-8277-827aab0146ea]
Failed to precompile SpinGlassTensors [7584fc6a-5a23-4eeb-8277-827aab0146ea] to "/tmp/jl_pIyevc/compiled/v1.10/SpinGlassTensors/jl_MqPj3t".
ERROR: LoadError: ArgumentError: cuTENSOR not loaded: add `using cuTENSOR` or `import cuTENSOR` before using `@cutensor`
Stacktrace:
 [1] var"@cutensor"(__source__::LineNumberNode, __module__::Module, ex::Expr)
   @ TensorOperations /tmp/jl_pIyevc/packages/TensorOperations/LAzcX/src/indexnotation/tensormacros.jl:300
 [2] include(mod::Module, _path::String)
   @ Base ./Base.jl:495
 [3] include(x::String)
   @ SpinGlassTensors /tmp/jl_pIyevc/packages/SpinGlassTensors/1PbPU/src/SpinGlassTensors.jl:1
 [4] top-level scope
   @ /tmp/jl_pIyevc/packages/SpinGlassTensors/1PbPU/src/SpinGlassTensors.jl:26
 [5] include
   @ ./Base.jl:495 [inlined]
 [6] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt128}}, source::Nothing)
   @ Base ./loading.jl:2222
 [7] top-level scope
   @ stdin:3
in expression starting at /tmp/jl_pIyevc/packages/SpinGlassTensors/1PbPU/src/mps/utils.jl:46
in expression starting at /tmp/jl_pIyevc/packages/SpinGlassTensors/1PbPU/src/mps/utils.jl:43
in expression starting at /tmp/jl_pIyevc/packages/SpinGlassTensors/1PbPU/src/SpinGlassTensors.jl:1
in expression starting at stdin:
Stacktrace:
 [1] pkgerror(msg::String)
   @ Pkg.Types /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/Types.jl:70
 [2] precompile(ctx::Pkg.Types.Context, pkgs::Vector{Pkg.Types.PackageSpec}; internal_call::Bool, strict::Bool, warn_loaded::Bool, already_instantiated::Bool, timing::Bool, _from_loading::Bool, kwargs::@Kwargs{io::IOContext{Base.PipeEndpoint}})
   @ Pkg.API /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/API.jl:1659
 [3] precompile(pkgs::Vector{Pkg.Types.PackageSpec}; io::IOContext{Base.PipeEndpoint}, kwargs::@Kwargs{})
   @ Pkg.API /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/API.jl:159
 [4] precompile(pkgs::Vector{Pkg.Types.PackageSpec})
   @ Pkg.API /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/API.jl:148
 [5] precompile(; name::Nothing, uuid::Nothing, version::Nothing, url::Nothing, rev::Nothing, path::Nothing, mode::Pkg.Types.PackageMode, subdir::Nothing, kwargs::@Kwargs{})
   @ Pkg.API /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/API.jl:174
 [6] precompile()
   @ Pkg.API /opt/hostedtoolcache/julia/1.10.2/x64/share/julia/stdlib/v1.10/Pkg/src/API.jl:165
 [7] top-level scope
   @ none:17
I'm honestly not entirely sure what is going wrong here. The only thing I can think of is that the check I wrote is expanded before cuTENSOR is loaded, since macro expansion happens at compile time.
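If that guess is right, the failure mode can be illustrated with a toy macro (a sketch, not the actual TensorOperations source): a guard placed in the macro body runs while the calling package is being precompiled, not when the expanded code is later executed.

```julia
# Illustrative only: a macro whose availability check runs at expansion time.
macro needs_cutensor(ex)
    # This line executes while the *caller* is being (pre)compiled,
    # before any runtime `using cuTENSOR` in the caller takes effect:
    isdefined(__module__, :cuTENSOR) ||
        throw(ArgumentError("cuTENSOR not loaded"))
    return esc(ex)
end
```

During registry precompilation the caller is compiled in a fresh process, so such a check can fire even when the package does load cuTENSOR at runtime.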
Could you try replacing the @cutensor ex...
calls with @tensor backend=cuTENSOR allocator=cuTENSOR ex...
? This should effectively bypass the check that cuTENSOR is loaded. If that works, I'll have to come up with a different way of ensuring the dependencies are loaded.
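Concretely, the suggested change would look something like this (index labels A, B, C and a, b, c are illustrative, and running it requires a CUDA-capable GPU):

```julia
using TensorOperations
using cuTENSOR

# Before: fails at precompile time if the macro's load check misfires.
# @cutensor C[a, c] := A[a, b] * B[b, c]

# After: select the backend and allocator explicitly, which skips
# the load-time check inside @cutensor.
@tensor backend = cuTENSOR allocator = cuTENSOR C[a, c] := A[a, b] * B[b, c]
```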