Comments (6)
Your build line is correct. As for cuda_fp16.h, it gets included from nccl.h (which is in turn included in core.h) whenever CUDART_VERSION >= 7050.
I'm a bit confused about the include guards you reference in reduce_kernel.h. These should select either Maxwell (__CUDA_ARCH__ >= 500) or Kepler (__CUDA_ARCH__ >= 300 && __CUDA_ARCH__ < 500). There shouldn't be any reference to a __CUDA_ARCH__ of 530.
from nccl.
Do you have more than one CUDA Toolkit version installed? Any chance you have an RC rather than production release version of CUDA 7.5?
That's weird. I think I should have a production release. When I print the contents of /usr/local/cuda/version.txt and /usr/local/cuda-7.5/version, they both say CUDA Version 7.5.7.
This is what I've found in cuda_fp16.h:
#if __CUDA_ARCH__ >= 530 || !defined(__CUDA_ARCH__)
__CUDA_FP16_DECL__ __half2 __heq2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hne2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hle2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hge2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hlt2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hgt2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hequ2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hneu2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hleu2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hgeu2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hltu2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hgtu2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hadd2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hsub2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hmul2(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hadd2_sat(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hsub2_sat(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hmul2_sat(const __half2 a, const __half2 b);
__CUDA_FP16_DECL__ __half2 __hfma2(const __half2 a, const __half2 b, const __half2 c);
__CUDA_FP16_DECL__ __half2 __hfma2_sat(const __half2 a, const __half2 b, const __half2 c);
__CUDA_FP16_DECL__ __half __hadd(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hsub(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hmul(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hadd_sat(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hsub_sat(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hmul_sat(const __half a, const __half b);
__CUDA_FP16_DECL__ __half __hfma(const __half a, const __half b, const __half c);
__CUDA_FP16_DECL__ __half __hfma_sat(const __half a, const __half b, const __half c);
__CUDA_FP16_DECL__ float __low2float(const __half2 l);
__CUDA_FP16_DECL__ float __high2float(const __half2 l);
__CUDA_FP16_DECL__ float2 __half22float2(const __half2 l);
You can see that one of the missing identifiers is on the last line of this snippet. It only appears as a forward declaration; its definition is missing from that file.
from nccl.
7.5.7 was the release candidate build, actually. The production release build was numbered 7.5.18.
Please redownload and reinstall CUDA 7.5; it should work with the 7.5.18 build.
Thanks!
I'll try it. Thanks for the help! 😊
from nccl.
Yes, that worked. Thanks a lot!
from nccl.