Comments (10)
Also, you will never get an error of the form "X function is not implemented by CUDA". The way GPU compilation works, functions with known CUDA analogues are converted appropriately at compile time; the rest are compiled directly. The error you're getting probably means that lambertw cannot be completely type-inferred and compiled. For example, the error involves a function invoked from strings/io.jl, suggesting that lambertw may be trying to print something from your GPU kernel...
from oceananigans.jl.
Thanks @glwagner. Sorry I didn't test before, but I assumed that since we never reached that warning it couldn't cause problems.

I'll close this issue since it's clearly out of scope for Oceananigans.
> So it seems that execution does not have to hit the @warn for the reported failure. Also the stack trace indicates that the error happens when the macro is expanded.

Precisely.
> And maybe a convenience interface for people who want to ignore it. That's a more robust interface for other reasons as well.

Perhaps a positional argument to lambertw, either a boolean or, perhaps even better, a type (to use multiple dispatch) to control behavior, things like:

- WarnFailures() (throw a warning for failures)
- MarkFailures(value=NaN) (mark failures with a specific value, do not throw a warning)
- IgnoreFailures()?
- WithSolverInfo() (return a type that contains the root, a boolean converged, and possibly also the number of iterations)
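A minimal sketch of what such a dispatch-based interface could look like. None of these names exist in LambertW.jl; they are purely illustrative:

```julia
# Hypothetical failure-handling policies for an iterative solver.
# These types are NOT part of LambertW.jl — this is only a sketch.
abstract type FailurePolicy end

struct WarnFailures <: FailurePolicy end

struct MarkFailures{T} <: FailurePolicy
    value::T
end
MarkFailures() = MarkFailures(NaN)

struct IgnoreFailures <: FailurePolicy end

# Dispatch selects the behavior; the solver's hot loop stays branch-free.
handle_failure(::WarnFailures, x)   = (@warn "lambertw did not converge for $x"; x)
handle_failure(p::MarkFailures, x)  = p.value
handle_failure(::IgnoreFailures, x) = x
```

A GPU-safe caller would then pass MarkFailures() or IgnoreFailures(), so the method containing @warn is never compiled into the kernel at all.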
> It would be nice if there were a way to redirect io or send it to /dev/null or otherwise disable it everywhere when running on a GPU.
It is interesting to consider auto-sanitization of GPU code...
I suggest starting a thread on LambertW.jl and asking whether the code is GPU compatible. It will probably be more straightforward to test this first using CUDA.jl or KernelAbstractions.jl and then, if that works, get it working in Oceananigans.
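A quick standalone check along those lines, assuming a CUDA-capable machine (this simply broadcasts lambertw over a device array and is not Oceananigans-specific):

```julia
using CUDA       # requires an NVIDIA GPU
using LambertW

x = CUDA.fill(2.0, 16)   # small device array
y = lambertw.(x)         # broadcasting compiles lambertw for the GPU;
                         # this should reproduce the same compilation
                         # error if lambertw is not GPU compatible
```

If this minimal broadcast fails with the same `jl_f__call_latest` error, the problem is isolated to LambertW.jl rather than to anything in Oceananigans.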
You might be hitting this warning, which I don't think is valid to include in a GPU kernel:
Probably the easiest thing to do is to fork LambertW.jl and remove that warning. The rest seems ok, though a maximum iteration count of 1000 seems a bit high if you want performance. It depends what you want, but as a hack you can return a NaN upon non-convergence rather than throwing a warning.
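A sketch of that hack. Halley's iteration is the standard scheme for Lambert W, but the function below is illustrative only, not LambertW.jl's actual code:

```julia
# Illustrative GPU-safe root finder for w * exp(w) = z: returns NaN on
# non-convergence instead of issuing @warn (which cannot be compiled
# into a GPU kernel). NOT LambertW.jl's actual implementation.
function lambertw_nan(z, w0; maxits = 100)
    w = w0
    for _ in 1:maxits
        ew = exp(w)
        f  = w * ew - z
        # Halley's iteration (assumes w != -1)
        wnew = w - f / (ew * (w + 1) - (w + 2) * f / (2w + 2))
        abs(wnew - w) <= 2 * eps(abs(wnew)) && return wnew   # converged
        w = wnew
    end
    return oftype(float(w), NaN)   # mark failure; safe inside a GPU kernel
end
```

Starting from w0 = 0.5, lambertw_nan(1.0, 0.5) should converge to W(1) ≈ 0.5671 in a handful of iterations; any input for which the iteration stalls simply comes back as NaN, which you can then filter out on the host.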
Do you get a warning during CPU execution?
> Do you get a warning during CPU execution?

Nope. Everything seems to run pretty smoothly:
```julia
julia> using Oceananigans

julia> using Oceananigans.Grids: xnode, ynode

julia> using CUDA: has_cuda_gpu

julia> using LambertW: lambertw

julia> arch = has_cuda_gpu() ? GPU() : CPU()
CPU()

julia> grid = RectilinearGrid(arch, size = (4, 4, 4), extent = (1,1,1))
4×4×4 RectilinearGrid{Float64, Periodic, Periodic, Bounded} on CPU with 3×3×3 halo
├── Periodic x ∈ [0.0, 1.0) regularly spaced with Δx=0.25
├── Periodic y ∈ [0.0, 1.0) regularly spaced with Δy=0.25
└── Bounded z ∈ [-1.0, 0.0] regularly spaced with Δz=0.25

julia> @inline W(x, y) = lambertw((y/x)^2)
W (generic function with 1 method)

julia> @inline W(i, j, k, grid) = W(xnode(i, grid, Center()), ynode(j, grid, Center()))
W (generic function with 2 methods)

julia> op = KernelFunctionOperation{Center, Center, Center}(W, grid)
KernelFunctionOperation at (Center, Center, Center)
├── grid: 4×4×4 RectilinearGrid{Float64, Periodic, Periodic, Bounded} on CPU with 3×3×3 halo
├── kernel_function: W (generic function with 2 methods)
└── arguments: ()

julia> compute!(Field(op))
4×4×4 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 4×4×4 RectilinearGrid{Float64, Periodic, Periodic, Bounded} on CPU with 3×3×3 halo
├── boundary conditions: FieldBoundaryConditions
│   └── west: Periodic, east: Periodic, south: Periodic, north: Periodic, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
├── operand: KernelFunctionOperation at (Center, Center, Center)
├── status: time=0.0
└── data: 10×10×10 OffsetArray(::Array{Float64, 3}, -2:7, -2:7, -2:7) with eltype Float64 with indices -2:7×-2:7×-2:7
    └── max=2.84593, min=0.020004, mean=0.833143
```
EDIT: This has nothing to do with Oceananigans.jl per se. Better pursued on LambertW.jl.

I agree with #3438 (comment).

This part of the stack trace suggests that it is the @warn that is causing the problem. It should be possible to remove that somehow.

Better would be to remove the @warn entirely and instead return the result along with info on the convergence. And maybe a convenience interface for people who want to ignore it. That's a more robust interface for other reasons as well.

I don't know anything about running on GPUs. Does @warn cause failure if it is anywhere in the package being compiled? Or anywhere in the function being called? Or does execution have to hit the @warn so that io is attempted at run time?
EDIT: I missed this above:

> Do you get a warning during CPU execution?
>
> Nope. Everything seems to run pretty smoothly:

So it seems that execution does not have to hit the @warn for the reported failure. Also, the stack trace indicates that the error happens when the macro is expanded.

EDIT: so the following comment may be relevant, but perhaps not:

> It would be nice if there were a way to redirect io or send it to /dev/null or otherwise disable it everywhere when running on a GPU.
```
Reason: unsupported call to an unknown function (call to jl_f__call_latest)
Stacktrace:
 [1] #invokelatest#2
   @ ./essentials.jl:816
 [2] invokelatest
   @ ./essentials.jl:813
 [3] macro expansion
   @ ./logging.jl:381
 [4] lambertw_root_finding
   @ /glade/work/tomasc/.julia/packages/LambertW/tom9a/src/LambertW.jl:188
 [5] lambertw_branch_zero
   @ /glade/work/tomasc/.julia/packages/LambertW/tom9a/src/LambertW.jl:117
 [6] _lambertw
   @ /glade/work/tomasc/.julia/packages/LambertW/tom9a/src/LambertW.jl:93
 [7] lambertw (repeats 2 times)
   @ /glade/work/tomasc/.julia/packages/LambertW/tom9a/src/LambertW.jl:73
 [8] W
   @ /glade/derecho/scratch/tomasc/twake4/headland_simulations/mwe.jl:9
```
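The `jl_f__call_latest` in the trace comes from the logging machinery itself: on recent Julia versions, @warn expands into code that calls Base.invokelatest, a dynamic call the GPU compiler rejects at compile time regardless of whether the branch ever runs. This can be checked on any CPU (a sketch):

```julia
# Inspect what @warn lowers to — no GPU required.
expansion = string(@macroexpand @warn "iteration limit reached")

# The expansion should contain a call to invokelatest (the logging
# machinery), which is exactly the `jl_f__call_latest` the GPU stack
# trace reports as an unsupported dynamic call.
occursin("invokelatest", expansion)   # expected true on recent Julia versions
```

This is consistent with the observation above: the offending call is introduced when the macro is expanded, so merely having the @warn in a reachable code path is enough to break GPU compilation.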
I tested it and can confirm that if the line with @warn is commented out, the code runs without erroring.