
Comments (4)

maleadt commented on September 27, 2024

Here's a much more detailed trace:
[screenshot: annotated NSight trace]

So it looks more like "death by a thousand papercuts" rather than one operation being slow:

  • methodinstance taking 1us: partly a profiler artifact, as it takes only 300ns outside of NSight. This has been improved on 1.11, where it takes only 50ns.
  • cudaconvert being surprisingly slow (1us), and also executing twice: once as part of @cuda in order to determine the argument types to compile for, and once during the actual launch. This used to be optimized away; I'm not sure why it isn't anymore, but we can probably work around it by doing cudaconvert only once and providing a way to pass already-converted arguments to the compiled kernel.
  • cache accessors (compiler_config, compiler_cache, cached_compilation): these all have to take locks for safe multi-threaded execution. Not sure how to optimize these.
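Related to the cudaconvert point above: CUDA.jl's public API already offers a way to compile a kernel once and launch it repeatedly, which avoids repeating the per-launch compilation lookup (a sketch using the documented `@cuda launch=false` form, not the internal fix discussed here; argument conversion still happens at launch time):

```julia
using CUDA

function increment!(a)
    i = threadIdx().x
    a[i] += 1
    return
end

a_d = CUDA.zeros(256)

# Compile once without launching; this returns a callable kernel object.
k = @cuda launch=false increment!(a_d)

# Later launches reuse the compiled kernel, skipping recompilation;
# launch configuration is supplied as keyword arguments.
k(a_d; threads=256, blocks=1)
```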

Bottom line, the latency is much better on my system, but I agree things can be improved.

I also want to emphasize once more that NSight makes these numbers look much worse: where each iteration takes around 50us in the profiled trace, an iteration in a non-profiled session takes only 30us.

from cuda.jl.

maleadt commented on September 27, 2024

FWIW, I'm seeing more reasonable timings: 30us between copy and launch, of which around 10us is taken by cached_compilation.
[screenshot: NSight trace]

Still a lot of latency, and the timings surprise me, because the functionality in question has been thoroughly micro-optimized:

julia> cache = CUDA.compiler_cache(context());

julia> src = GPUCompiler.methodinstance(typeof(identity), Tuple{Nothing});

julia> cfg = CUDA.compiler_config(device());

julia> @benchmark GPUCompiler.cached_compilation(cache, src, cfg, CUDA.compile, CUDA.link)
BenchmarkTools.Trial: 10000 samples with 788 evaluations.
 Range (min … max):  159.770 ns … 322.471 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     183.145 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   182.211 ns ±  10.717 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

                    ▂█▇▆▄
  ▂▅▆▃▂▂▂▂▂▂▁▂▂▂▁▂▂▃█████▇▄▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▁▂▁▁▁▂▂▂▂▂▁▁▁▁▂▂▂▂ ▃
  160 ns           Histogram: frequency by time          229 ns <

 Memory estimate: 64 bytes, allocs estimate: 2.


maleadt commented on September 27, 2024

Running the benchmark a couple of times (the profiler can make short measurements like this misleading) lowers the time of cached_compilation further, down to hundreds of nanoseconds.

[screenshot: NSight trace]

Are you sure the cached_compilation latency isn't a profiling artifact on your end? I'm using an isolated copy+launch+copy sequence as follows:

using CUDA, NVTX

function kernel!(a)
    i = threadIdx().x + blockDim().x*(blockIdx().x-1)
    a[i] = CUDA.cos(a[i]) + i^0.6 + CUDA.tan(a[i])
    return
end

NVTX.@annotate function benchmark(a)
    threads = 256
    blocks = cld(length(a), threads)

    a_d = CuArray(a)
    @cuda threads=threads blocks=blocks kernel!(a_d)
    Array(a_d)
end

function main()
    n = 2^12
    a = rand(n)

    benchmark(a)
    benchmark(a)
    benchmark(a)

    return
end
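For reference, a trace like the ones shown in this thread can be collected with the Nsight Systems CLI; the script name here is a placeholder, and `--trace=cuda,nvtx` enables the CUDA API and NVTX rows in the timeline:

```shell
# Record CUDA API calls and NVTX ranges; open the resulting .nsys-rep in the GUI.
nsys profile --trace=cuda,nvtx --output=latency julia script.jl
```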


cibinjoseph commented on September 27, 2024

I ran these benchmarks again, and you're right that the latency does go down when the profiler is run a few times.
I now get delays of around 17 us before and after the kernel launch, so perhaps it is not too bad.
[screenshot: NSight trace]

I also noticed you were focusing on the CUDA API row in Nsight Systems. I usually look at the CUDA HW row in the timeline, and maybe that was throwing me off too.

Thanks for looking into this!

from cuda.jl.
