```
Traceback (most recent call last):
  File "./scripts/my_train.py", line 14, in <module>
    from lib.module.PNSPlusNetwork import PNSNet as Network
  File "/content/drive/Shareddrives/VPS/VPS_main/lib/module/PNSPlusNetwork.py", line 11, in <module>
    from module.PNSPlusModule import NS_Block
  File "/content/drive/Shareddrives/VPS/VPS_main/lib/module/PNSPlusModule.py", line 10, in <module>
    import self_cuda_backend as _ext
ModuleNotFoundError: No module named 'self_cuda_backend'
```
Now, I did run the build as instructed in the README. It generates a lot of log output, but this error appears over and over again:
```
/content/drive/Shareddrives/VPS/VPS_main/lib/module/PNS
running build
running build_ext
/usr/local/lib/python3.7/dist-packages/torch/utils/cpp_extension.py:387: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
/usr/local/lib/python3.7/dist-packages/torch/utils/cpp_extension.py:788: UserWarning: The detected CUDA version (11.1) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'self_cuda_backend' extension
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.7/dist-packages/torch/include -I/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.7/dist-packages/torch/include/TH -I/usr/local/lib/python3.7/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.7m -c PNS_Module/sa_ext.cpp -o build/temp.linux-x86_64-3.7/PNS_Module/sa_ext.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=self_cuda_backend -D_GLIBCXX_USE_CXX11_ABI=0
In file included from /usr/local/lib/python3.7/dist-packages/torch/include/torch/extension.h:4:0,
                 from PNS_Module/sa_ext.cpp:2:
/usr/local/lib/python3.7/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4:2: error: #error C++14 or later compatible compiler is required to use PyTorch.
```

**#error C++14 or later compatible compiler is required to use PyTorch.**
I feel like the build needs to run properly for the rest of the code to work. The problem is that on Colab the CUDA version is not 10.0, and there is no way to install a different C++ toolchain there. On my local machine, I only have support for CUDA 11.0 and above. Is there any way to restructure this code for CUDA 11.0, or to run it using the CPU instead?
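For what it's worth, the C++14 `#error` seems to come from the `-std=c++11` flag visible in the gcc command above, while the installed PyTorch requires C++14. Below is a minimal sketch of what I suspect the extension's `setup.py` would need to look like; everything except `PNS_Module/sa_ext.cpp` (which appears in the log) is an assumption on my part, not the repo's actual code:

```python
# Hypothetical setup.py sketch. Assumption: the current script passes
# -std=c++11 (as seen in the compile command); bumping both the host
# and device compilers to -std=c++14 should clear the PyTorch #error.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="self_cuda_backend",
    ext_modules=[
        CUDAExtension(
            name="self_cuda_backend",
            # sa_ext.cpp is taken from the build log; any .cu sources
            # the repo actually lists would also belong here.
            sources=["PNS_Module/sa_ext.cpp"],
            extra_compile_args={
                "cxx": ["-std=c++14"],   # was presumably -std=c++11
                "nvcc": ["-std=c++14"],
            },
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

If that is the right place to change, installing `ninja` (`pip install ninja`) should also silence the fallback warning and speed the build up, but I haven't verified either on Colab.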