openucx / xccl
License: Other
Error during make
```
...
nvcc -c xccl_cuda_reduce.cu -I/home/chchu/xccl-exp/src -I/home/chchu/xccl-exp/src/core --compiler-options -fno-rtti,-fno-exceptions -arch=sm_50 -gencode=arch=compute_37,code=sm_37 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_70,code=compute_70 -Xcompiler -fPIC -o .libs/xccl_cuda_reduce.o
In file included from /home/chchu/xccl-exp/src/api/xccl.h:11:0,
                 from xccl_cuda_reduce.cu:1:
/home/chchu/xccl-exp/src/api/xccl_tls.h:9:30: fatal error: ucs/config/types.h: No such file or directory
 #include <ucs/config/types.h>
                              ^
compilation terminated.
make[4]: *** [xccl_cuda_reduce.lo] Error 1
```
Command for building XCCL:
```shell
$ ./autogen.sh && ./configure --prefix=$PWD/install \
    --with-ucx=${UCX_INSTALL_PATH}/install --with-cuda=/usr/local/cuda \
    CFLAGS="-I${UCX_INSTALL_PATH}/include" \
    CPPFLAGS="-I${UCX_INSTALL_PATH}/include" \
    CXXFLAGS="-I${UCX_INSTALL_PATH}/include" \
    && make
```
UCX was installed in a local directory; config options:
```shell
../configure --disable-logging --disable-debug --disable-assertions --disable-params-check --prefix=/home/chchu/tools/ucx/build/install --with-cuda=/usr/local/cuda
```
Workaround: add `${UCX_CPPFLAGS}` to `NVCCFLAGS` in Makefile.am and recompile:
```diff
diff --git a/src/utils/cuda/kernels/Makefile.am b/src/utils/cuda/kernels/Makefile.am
index c0ec059..f91a7d9 100644
--- a/src/utils/cuda/kernels/Makefile.am
+++ b/src/utils/cuda/kernels/Makefile.am
@@ -8,7 +8,7 @@
 #
 NVCC = nvcc
-NVCCFLAGS = "-I${XCCL_TOP_SRCDIR}/src -I${XCCL_TOP_SRCDIR}/src/core" --compiler-options -fno-rtti,-fno-exceptions
+NVCCFLAGS = "-I${XCCL_TOP_SRCDIR}/src -I${XCCL_TOP_SRCDIR}/src/core" --compiler-options -fno-rtti,-fno-exceptions ${UCX_CPPFLAGS}
 NV_ARCH_FLAGS = -arch=sm_50 \
 -gencode=arch=compute_37,code=sm_37 \
 -gencode=arch=compute_50,code=sm_50 \
```
Is it because UCX is not installed in a default path? For such scenarios, should we apply a patch like the one shown here, or perhaps add a configure-time option that allows users to specify additional NVCCFLAGS?
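For illustration, a configure-time option could look something like the sketch below (assuming autoconf; `NVCC_EXTRA_FLAGS` is a hypothetical name, not an existing xccl variable):

```
# configure.ac: declare a precious variable users can set at configure time,
# e.g. ./configure NVCC_EXTRA_FLAGS="-I${UCX_INSTALL_PATH}/include"
# (AC_ARG_VAR also substitutes the variable for use in Makefile.am)
AC_ARG_VAR([NVCC_EXTRA_FLAGS], [extra flags passed to nvcc])

# src/utils/cuda/kernels/Makefile.am: append the user-supplied flags
NVCCFLAGS = "-I${XCCL_TOP_SRCDIR}/src -I${XCCL_TOP_SRCDIR}/src/core" \
            --compiler-options -fno-rtti,-fno-exceptions @NVCC_EXTRA_FLAGS@
```

This would avoid patching the Makefile by hand whenever UCX lives outside the default search paths.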
The xccl backend crashes when using torch_ucc and creating a process group with `torch.distributed.new_group([0])`. This may be an issue with torch_ucc, but xccl appears in the backtrace so I'm filing the issue here.
Error log: https://gist.github.com/froody/d35d7571b1a8df0638867066d96ecc6c
Relevant error message:
```
[devfair0133:73576:0:73576] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xffffffff00000001)
```
Steps to reproduce:
```shell
TORCH_UCC_COLL_BACKEND=xccl python hello_ucx.py
```
pytorch version 1.7.0a0+0759809
UCX version 1.9.0
XCCL @ 2e97986
Torch-UCC @ ed0c8dfccf11f73ca60265ce5b6e76220c07f343
Would it be possible to explicitly set the priority of NCCL or native CUDA streams?
I was running some benchmarks with torch-ucc using xccl for collectives, and I noticed very bad performance compared to NCCL. See numbers here: https://gist.github.com/froody/a86a5b2c5d9f46aedba7e930f4b4e08d
It's possible this is due to a misconfiguration: I built xccl with cuda and ucx support, but without sharp or vmc support. My question is: is it expected for xccl to properly utilize NVLink when available (in this case on a DGX-1 doing an all-reduce across all 8 GPUs)?
I also noticed when running the benchmarks that CPU utilization was very high for all workers, which seemed to be due to high-frequency polling.
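For context, the pattern the CPU numbers suggest — a progress loop that busy-polls instead of blocking — can be illustrated in plain Python (nothing xccl-specific; just the general difference between spinning and waiting):

```python
import threading
import time

def busy_poll_wait(state):
    """Spin until the producer marks completion.

    This is the polling pattern: the waiting thread keeps a core busy
    re-checking the flag at high frequency instead of sleeping.
    """
    while not state["done"]:
        pass  # re-check immediately; CPU stays near 100% while waiting
    return state["value"]

def blocking_wait(event, state):
    """Block in the OS until signalled; near-zero CPU while waiting."""
    event.wait()
    return state["value"]

if __name__ == "__main__":
    state = {"done": False, "value": 42}
    done = threading.Event()

    def producer():
        time.sleep(0.05)  # simulate work (e.g. a collective completing)
        state["done"] = True
        done.set()

    threading.Thread(target=producer).start()
    print(busy_poll_wait(state))   # spins until the flag flips
    print(blocking_wait(done, state))
```

Busy-polling trades CPU time for lower wake-up latency, which may be deliberate for a communication library, but it would explain the utilization I saw.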
Also, as you can see in the output, ucc fails trying to reduce a 2 GB tensor whereas nccl fails trying to reduce an 8 GB tensor. This could be indicative of a leak somewhere.
Repro steps:
Run benchmark here: https://gist.github.com/froody/01ed6ce8d6ab72bd868431d793591379
Use `BACKEND=ucc` or `BACKEND=nccl` to select the backend
hardware: DGX-1, Driver Version: 418.116.00
cuda: 10.1
pytorch: 1.6.0
ucx: 1.9.0
torch-ucc: a277d7da24ae6e8a40bda658d0f0d4e06fcadb8b
xccl: 2e97986
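Regarding the stream-priority question above: priorities are a native CUDA feature, so a minimal sketch of the underlying API looks like this (not xccl code; whether xccl could expose such a knob for its internal streams is the open question):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int least, greatest;
    // Query the device's valid stream-priority range;
    // numerically lower values mean higher priority.
    cudaDeviceGetStreamPriorityRange(&least, &greatest);

    cudaStream_t stream;
    // Create a non-blocking stream at the highest available priority.
    cudaStreamCreateWithPriority(&stream, cudaStreamNonBlocking, greatest);

    printf("priority range: least=%d, greatest=%d\n", least, greatest);
    cudaStreamDestroy(stream);
    return 0;
}
```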