Comments (5)
That is expected; memory allocated on one device cannot simply be accessed from another. You need unified memory for that, or a mapped host buffer.
The fact that explicit copy operations (as used by the I/O stack) still work is unrelated: you can safely copy between arrays on different devices, and CUDA.jl will pick an appropriate mechanism (staging through the CPU, or a P2P copy).
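For example, an explicit cross-device copy along these lines should work (a minimal sketch, assuming a machine with at least two visible GPUs):

```julia
using CUDA

CUDA.device!(0)
x = CUDA.ones(2)                 # allocated on GPU 0

CUDA.device!(1)
y = CuArray{Float32}(undef, 2)   # allocated on GPU 1
copyto!(y, x)                    # explicit copy between devices; CUDA.jl
                                 # stages through the CPU or uses a P2P copy
```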
from cuda.jl.
I don't want to access memory on one device from another. I just want operations on one device to produce their output on that same device, irrespective of the current device. This is the behavior I want, as demonstrated in PyTorch:
import torch
device0 = torch.device("cuda:0")
device1 = torch.device("cuda:1")
x0 = torch.zeros(2, device=device0)
x1 = torch.ones(2, device=device1)
print(x0) # tensor([0., 0.], device='cuda:0')
print(x0 - x0) # tensor([0., 0.], device='cuda:0')
print(x1) # tensor([1., 1.], device='cuda:1')
print(x1 - x1) # tensor([0., 0.], device='cuda:1')
Can this be obtained with CUDA.jl?
That is not how our model works. We follow the CUDA programming model, where switching devices is a global operation that determines where computation happens, whereas in Torch each array is owned by a device. These are simply different approaches, each with its own trade-offs.
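To illustrate the difference (a sketch, again assuming two visible GPUs): in CUDA.jl the active device is global state that selects where allocations and kernels go, rather than a property carried by each array:

```julia
using CUDA

CUDA.device!(0)   # global switch: allocations and kernels now target GPU 0
a = CUDA.rand(2)

CUDA.device!(1)   # global switch: subsequent work targets GPU 1
b = CUDA.rand(2)

CUDA.device()     # reports the globally active device (now device 1),
                  # independently of where a or b happen to live
```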
Actually, I think we can improve this by either erroring early, or by using P2P to enable cross-device usage. Note, however, that I still want to keep the semantics that we execute on the currently active device rather than on the device an array was allocated on.
This now works on #2335:
julia> using CUDA
julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB
julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> CUDA.device!(1)
CuDevice(1): Tesla V100S-PCIE-32GB
julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> y = x - x;
julia> CUDA.device(x)
CuDevice(0): Tesla V100-PCIE-16GB
julia> CUDA.device(y)
CuDevice(1): Tesla V100S-PCIE-32GB
julia> y
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
0.0
0.0
However, do note that we keep the semantics that we execute on the globally active device. This means you may run into the following error if your devices are inaccessible to one another:
julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB
julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> CUDA.device!(3)
CuDevice(3): Tesla P100-PCIE-16GB
julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> y = x - x;
ERROR: ArgumentError: cannot take the GPU address of inaccessible device memory.
You are trying to use memory from GPU 0 while executing on GPU 3.
P2P access between these devices is not possible; either switch execution to GPU 0
by calling `CUDA.device!(0)`, or copy the data to an array allocated on device 3.
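In that situation, the second workaround suggested by the error message could look roughly like this (a sketch, assuming x was allocated on GPU 0 and GPU 3 is the active device without P2P access):

```julia
CUDA.device!(3)
x3 = CuArray{Float32}(undef, length(x))
copyto!(x3, x)   # explicit copies remain legal even without P2P
                 # (the data is staged through the CPU)
y = x3 - x3      # now computes on GPU 3 using local memory
```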