Comments (9)

nluehr commented on July 17, 2024

The K10 has compute capability 3.0, but by default NCCL is built for sm_35 and newer. If you tweak the Makefile to include sm_30 support, NCCL should work on the K10. You will still need to forgo the random number generator, however, as cuRAND's MT19937 supports sm_35 and newer only.
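
For example, something like this in the Makefile (a sketch; the exact variable name may differ between NCCL versions, so check your copy):

```make
# Hypothetical sketch: add an sm_30 target to the gencode list in the NCCL
# Makefile (or override the variable on the make command line).
NVCC_GENCODE ?= -gencode=arch=compute_30,code=sm_30 \
                -gencode=arch=compute_35,code=sm_35
```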

fmana commented on July 17, 2024

Indeed, recompiling with sm_30 support included fixes my issue on the K10 (cuRAND still disabled).
Thanks.

fmana commented on July 17, 2024

I succeeded in running on the K10, so I tried profiling the all_reduce_test sample code.
Attached is the nvvp picture of all_reduce_test/float/sum "outplace/inplace".
From the GTC16 NCCL presentation, I expected to see the whole memory split into chunks, with each chunk sent concurrently between each GPU pair (ring algorithm).

[image: nccl_allreduce_4] https://cloud.githubusercontent.com/assets/18715804/14916585/cd09d64c-0e1a-11e6-8bc8-e2ea8399c0ee.gif

What I see is serial data movement across the GPUs to complete a loop.
What did I miss?
Is it something related to the K10?

Thanks,
Franco

cliffwoolley commented on July 17, 2024

Most of what you see there is bookkeeping for the test framework. The actual reduction is the smaller kernel in the middle, which does run simultaneously on all GPUs. Zoom in on the timeline and you'll see it.

PS: For sm_30, rather than forgoing the random number generation in the test suite completely, you could just switch cuRAND to a different generator. It has other ones that do support older GPUs.
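
For example, a minimal sketch using XORWOW, one of the generators that does run on sm_30 (buffer size and seed here are arbitrary placeholders):

```c
#include <cuda_runtime.h>
#include <curand.h>

int main(void) {
  const size_t n = 1 << 20;
  float *d_buf;
  cudaMalloc((void **)&d_buf, n * sizeof(float));

  curandGenerator_t gen;
  curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_XORWOW); /* instead of MT19937 */
  curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
  curandGenerateUniform(gen, d_buf, n);  /* fill the device buffer with uniform floats */

  curandDestroyGenerator(gen);
  cudaFree(d_buf);
  return 0;
}
```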

-Cliff

nluehr commented on July 17, 2024

Adding to Cliff's comments, all of the "chunking" and inter-GPU synchronization is handled by direct peer memory accesses from within a single CUDA kernel (rather than, for example, using cudaEvents and separate cudaMemcpy calls). So the details of the algorithm won't show up in the nvvp profile.
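
As a hypothetical sketch of that idea (this is not NCCL's actual kernel; it assumes the host has already called cudaDeviceEnablePeerAccess for the peer device, and it is launched with a single block for simplicity):

```cuda
// Once peer access is enabled, a kernel can read the peer GPU's buffer
// directly and synchronize on a flag, so all chunking and inter-GPU traffic
// happen inside one kernel launch: nvvp shows a single kernel, not a series
// of memcpy rows.
__global__ void pull_chunks(const float *peer_src,      // buffer on the peer GPU
                            float *dst,                 // local destination
                            const volatile int *ready,  // chunk count published by the peer
                            int nchunks, int chunk_elems)
{
    for (int c = 0; c < nchunks; ++c) {
        if (threadIdx.x == 0)
            while (*ready <= c) ;  // spin until the peer marks chunk c ready
        __syncthreads();
        int base = c * chunk_elems;
        for (int i = threadIdx.x; i < chunk_elems; i += blockDim.x)
            dst[base + i] = peer_src[base + i];  // direct peer read, no cudaMemcpy
        __syncthreads();
    }
}
```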

fmana commented on July 17, 2024

Thanks for the comments. I'll take a deeper look at the kernel that implements the core processing (chunk & transfer). It is quite interesting, since I have never written such a complex algorithm in a single kernel; I'll learn a lot.
Next I'll integrate your NCCL lib into a toy sample code that exercises my target scenario, and I'll let you know my findings and any issues.
BTW, what about repeatability? Do you accumulate the contributions in a fixed order in the collectives?

Thanks,
Franco

cliffwoolley commented on July 17, 2024

As long as the mapping of GPU IDs to ranks remains the same, then yes, the calculations should be deterministic and therefore the output repeatable.
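
Concretely, as a hypothetical illustration (with ncclCommInitAll, rank i is bound to the i-th entry of the device list):

```c
#include <nccl.h>

int main(void) {
  /* Keeping this list (and the physical GPUs behind it) stable across runs
   * keeps the reduction order, and hence the floating-point result, fixed. */
  ncclComm_t comms[4];
  int devlist[4] = {0, 1, 2, 3};  /* same order every run */
  ncclCommInitAll(comms, 4, devlist);
  for (int i = 0; i < 4; ++i)
    ncclCommDestroy(comms[i]);
  return 0;
}
```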

fmana commented on July 17, 2024

Hi NCCL team,

I've finished writing my sample code integrating NCCL into a multi-threaded application. I followed your sample code (roughly the pattern sketched below): no problems at all. I can run with our in-house K10/K20/K80.
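
For context, a minimal sketch of that single-process, multi-GPU pattern from the NCCL 1.x samples (device count and sizes are placeholders, error checking is elided, and a multi-threaded app would make the same calls from one thread per GPU):

```c
#include <nccl.h>
#include <cuda_runtime.h>

#define NGPUS 4
#define COUNT (1 << 20)

int main(void) {
  ncclComm_t comms[NGPUS];
  cudaStream_t streams[NGPUS];
  float *send[NGPUS], *recv[NGPUS];
  int devs[NGPUS] = {0, 1, 2, 3};

  ncclCommInitAll(comms, NGPUS, devs);  /* rank i bound to devs[i] */
  for (int i = 0; i < NGPUS; ++i) {
    cudaSetDevice(devs[i]);
    cudaMalloc((void **)&send[i], COUNT * sizeof(float));
    cudaMalloc((void **)&recv[i], COUNT * sizeof(float));
    cudaStreamCreate(&streams[i]);
  }
  for (int i = 0; i < NGPUS; ++i) {     /* one collective call per rank */
    cudaSetDevice(devs[i]);
    ncclAllReduce(send[i], recv[i], COUNT, ncclFloat, ncclSum,
                  comms[i], streams[i]);
  }
  for (int i = 0; i < NGPUS; ++i) {
    cudaSetDevice(devs[i]);
    cudaStreamSynchronize(streams[i]);
    cudaFree(send[i]); cudaFree(recv[i]);
    cudaStreamDestroy(streams[i]);
    ncclCommDestroy(comms[i]);
  }
  return 0;
}
```
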
Just two comments:
1] I read about an 8-GPU limitation (GTC slides). In my trials, NCCL works up to 16 GPUs. Was I lucky, or did I misunderstand the GTC slide content?
2] I verified your comment on repeatability: when I change the comm/GPU allocation, I lose repeatability. Is there no work-around for that? This matters in a cluster, since my app will be allocated to different machines at each run. Isn't it possible to define the starting point of the ring in your ring algorithm? Thread 0 could tell NCCL which GPU to use as the start/end of the ring. Would that make repeatability achievable?

Thanks,
Franco

nluehr commented on July 17, 2024

I'm glad you're up and running. The 8-GPU requirement was relaxed in revision Iaa1841036a7bfdad6ebec99fed0adcd2bbe6ffad; the GTC slide was simply out of date.

At this point, we don't have a good solution for general reproducibility. To avoid contending PCIe links, GPUs must communicate in a specific hardware order. If the software changes the mapping of ranks to physical GPUs, the communication order must change too, and the (non-associative) floating-point operations get carried out in a different order. Without a significant performance penalty, we can only provide deterministic output for runs with the same configuration.
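
A tiny example of that non-associativity (the values here are arbitrary, chosen only to make the effect visible):

```c
#include <stdio.h>

int main(void) {
    /* Floating-point addition is not associative, so combining the ranks'
     * contributions in a different order can change the final sum. */
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    float left  = a + b;  left  += c;  /* (a + b) + c = 1.0 */
    float right = b + c;  right += a;  /* a + (b + c) = 0.0: c is lost in b + c */
    printf("(a+b)+c = %.1f\n", left);
    printf("a+(b+c) = %.1f\n", right);
    return 0;
}
```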
