aliutkus / torchsearchsorted
PyTorch custom CUDA kernel for searchsorted
License: BSD 3-Clause "New" or "Revised" License
torchsearchsorted.pycache.cpu.cpython-36: module references file
First of all, thank you for sharing such good code.
I followed the README step by step, but when I entered the torchsearchsorted folder and ran 'pip install .', the warning above appeared, and it bothers me a lot.
Is it possible to install torchsearchsorted for CUDA on Windows?
I am using VS 2017, Python 3.8.2, PyTorch 1.4, and CUDA 10.1, but when running pip install . I always get "5 errors detected in the compilation of searchsorted_cuda_kernel.cpp1.ii".
Thanks for any help!
TypeError: unsupported operand type(s) for +: 'float' and 'str'
----------------------------------------
ERROR: Command errored out with exit status 1: /cluster/anaconda3/nerf_pl/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-pyyowkg5/setup.py'"'"'; file='"'"'/tmp/pip-req-build-pyyowkg5/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-g69q5qqh/install-record.txt --single-version-externally-managed --compile --install-headers /cluster/anaconda3/nerf_pl/include/python3.6m/torchsearchsorted Check the logs for full command output.
How can this kind of problem be solved?
I'm trying to install the module under conda.
# packages in environment at /anaconda3/envs/pytorch:
#
# Name Version Build Channel
_pytorch_select 0.1 cpu_0
astroid 2.3.3 py37_0
blas 1.0 mkl
ca-certificates 2020.1.1 0
certifi 2020.4.5.1 py37_0
cffi 1.14.0 py37hb5b8e2f_0
cycler 0.10.0 py37_0
freetype 2.9.1 hb4e5f40_0
intel-openmp 2019.4 233
isort 4.3.21 py37_0
kiwisolver 1.1.0 py37h0a44026_0
lazy-object-proxy 1.4.3 py37h1de35cc_0
libcxx 4.0.1 hcfea43d_1
libcxxabi 4.0.1 hcfea43d_1
libedit 3.1.20181209 hb402a30_0
libffi 3.2.1 h0a44026_6
libgfortran 3.0.1 h93005f0_2
libpng 1.6.37 ha441bb4_0
matplotlib 3.1.3 py37_0
matplotlib-base 3.1.3 py37h9aa3819_0
mccabe 0.6.1 py37_1
mkl 2019.4 233
mkl-service 2.3.0 py37hfbe908c_0
mkl_fft 1.0.15 py37h5e564d8_0
mkl_random 1.1.0 py37ha771720_0
ncurses 6.2 h0a44026_1
ninja 1.9.0 py37h04f5b5a_0
numpy 1.18.1 py37h7241aed_0
numpy-base 1.18.1 py37h6575580_1
openssl 1.1.1g h1de35cc_0
pip 20.0.2 py37_1
pycparser 2.20 py_0
pylint 2.5.0 py37_0
pyparsing 2.4.6 py_0
python 3.7.7 hc70fcce_0_cpython
python-dateutil 2.8.1 py_0
pytorch 1.4.0 cpu_py37hf9bb1df_0
readline 8.0 h1de35cc_0
setuptools 46.1.3 py37_0
six 1.14.0 py37_0
sqlite 3.31.1 h5c1f38d_1
tk 8.6.8 ha441bb4_0
torchdiffeq 0.0.1 pypi_0 pypi
tornado 6.0.4 py37h1de35cc_1
wheel 0.34.2 py37_0
wrapt 1.12.1 py37h1de35cc_1
xz 5.2.5 h1de35cc_0
zlib 1.2.11 h1de35cc_3
pip install .
$ cd torchsearchsorted/
$ ls
LICENSE README.md examples setup.py src test
$ pip install .
Processing /Users/andreapanizza/BoxSync/PC/GitHub_repositories/torchsearchsorted
ERROR: Command errored out with exit status 1:
command: /anaconda3/envs/pytorch/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/6m/3vm9vws14tzb0nyh0yldv56r0000gn/T/pip-req-build-r_59wapl/setup.py'"'"'; __file__='"'"'/private/var/folders/6m/3vm9vws14tzb0nyh0yldv56r0000gn/T/pip-req-build-r_59wapl/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/6m/3vm9vws14tzb0nyh0yldv56r0000gn/T/pip-req-build-r_59wapl/pip-egg-info
cwd: /private/var/folders/6m/3vm9vws14tzb0nyh0yldv56r0000gn/T/pip-req-build-r_59wapl/
Complete output (10 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/6m/3vm9vws14tzb0nyh0yldv56r0000gn/T/pip-req-build-r_59wapl/setup.py", line 2, in <module>
from torch.utils.cpp_extension import BuildExtension, CUDA_HOME
File "/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: dlopen(/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_C.cpython-37m-darwin.so, 9): Symbol not found: _mkl_blas_caxpy
Referenced from: /anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/../../../../libmkl_intel_lp64.dylib
Expected in: flat namespace
in /anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/../../../../libmkl_intel_lp64.dylib
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
I don't understand the error message. My system is:
Darwin 17.7.0 Darwin Kernel Version 17.7.0: Tue Feb 18 22:51:29 PST 2020; root:xnu-4570.71.73~1/RELEASE_X86_64 x86_64
Hi, thanks for the great tool! I've been using this package in my work, and I've noticed that the tensors often need to be contiguous for the results to be consistent (benchmarked against tf.searchsorted() and multiple independent calls to torchsearchsorted.searchsorted()).
It would be great if you could add a word of caution to the README and adopt this style in the examples.
TL;DR: use a.contiguous() and v.contiguous() before making a call to searchsorted().
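To illustrate the underlying issue (sketched here with NumPy rather than PyTorch, since the idea is the same): a strided slice is a non-contiguous view of its parent buffer, and making it contiguous copies it into flat memory — which is what a.contiguous() does for a torch tensor before the kernel reads it.

```python
import numpy as np

# A strided slice is a non-contiguous view of the parent buffer.
a = np.arange(20.0).reshape(4, 5)[:, ::2]
print(a.flags['C_CONTIGUOUS'])    # False

# np.ascontiguousarray copies it into contiguous memory,
# analogous to calling .contiguous() on a torch tensor.
ac = np.ascontiguousarray(a)
print(ac.flags['C_CONTIGUOUS'])   # True
```

A kernel that assumes a flat row-major layout (as a custom CUDA kernel typically does) will read garbage from the strided view but correct values from the contiguous copy.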
Hi Antoine,
When I tried "pip install ." in the root directory, it spat out a very long error message, which seems a bit similar to Issue 19 (but the solution there does not apply to mine, as I was using the up-to-date master branch). Please see the error message here:
err.txt
Conditions:
System: Ubuntu 16.04
nvcc: 10.2
gcc/g++: 5.4.0 (as required by cuda installation guide)
Python packages:
_libgcc_mutex 0.1 main
absl-py 0.9.0 py37_0
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
c-ares 1.15.0 h7b6447c_1001
ca-certificates 2020.1.1 0
certifi 2020.4.5.1 py37_0
configargparse 1.1 py_0
cudatoolkit 10.2.89 hfd86e86_1
cycler 0.10.0 py37_0
dbus 1.13.14 hb2f20db_0
expat 2.2.6 he6710b0_0
ffmpeg 4.2.2 h20bf706_0
fontconfig 2.13.0 h9420a91_0
freetype 2.9.1 h8a8886c_1
glib 2.63.1 h3eb4bd4_1
gmp 6.1.2 h6c8ec71_1
gnutls 3.6.5 h71b1129_1002
grpcio 1.27.2 py37hf8bcb03_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb31296c_0
icu 58.2 he6710b0_3
imageio 2.8.0 py_0
imageio-ffmpeg 0.4.2 py_0
intel-openmp 2020.1 217
jpeg 9b h024ee3a_2
kiwisolver 1.2.0 py37hfd86e86_0
lame 3.100 h7b6447c_0
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20181209 hc058e9b_0
libffi 3.3 he6710b0_1
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libopus 1.3.1 h7b6447c_0
libpng 1.6.37 hbc83047_0
libprotobuf 3.11.4 hd408876_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_0
libuuid 1.0.3 h1bed415_2
libvpx 1.7.0 h439df22_0
libxcb 1.13 h1bed415_1
libxml2 2.9.9 hea5a465_1
markdown 3.1.1 py37_0
matplotlib 3.1.3 py37_0
matplotlib-base 3.1.3 py37hef1b27d_0
mkl 2020.1 217
mkl-service 2.3.0 py37he904b0f_0
mkl_fft 1.0.15 py37ha843d7b_0
mkl_random 1.1.0 py37hd6b4f25_0
ncurses 6.2 he6710b0_1
nettle 3.4.1 hbb512f6_0
ninja 1.9.0 py37hfd86e86_0
numpy 1.18.1 py37h4f9e942_0
numpy-base 1.18.1 py37hde5b4d6_1
olefile 0.46 py37_0
opencv-python 4.2.0.34 pypi_0
openh264 2.1.0 hd408876_0
openssl 1.1.1g h7b6447c_0
pcre 8.43 he6710b0_0
pillow 7.1.2 py37hb39fc2d_0
pip 20.0.2 py37_3
protobuf 3.11.4 py37he6710b0_0
pyparsing 2.4.7 py_0
pyqt 5.9.2 py37h05f1152_2
python 3.7.7 hcff3b4d_5
python-dateutil 2.8.1 py_0
pytorch 1.5.0 py3.7_cuda10.2.89_cudnn7.6.5_0
qt 5.9.7 h5867ecd_1
readline 8.0 h7b6447c_0
setuptools 46.4.0 py37_0
sip 4.19.8 py37hf484d3e_0
six 1.14.0 py37_0
sqlite 3.31.1 h62c20be_1
tensorboard 1.14.0 py37hf484d3e_0
tk 8.6.8 hbc83047_0
torchvision 0.6.0 py37_cu102
tornado 6.0.4 py37h7b6447c_1
tqdm 4.46.0 py_0
werkzeug 1.0.1 py_0
wheel 0.34.2 py37_0
x264 1!157.20191217 h7b6447c_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
Do you have any idea what is wrong? Any help is deeply appreciated!
Thanks,
Koven
The NumPy version of searchsorted takes an additional side parameter to specify the semantics of the returned index:

side | returned index i satisfies
---|---
left | a[i-1] < v <= a[i]
right | a[i-1] <= v < a[i]

The current implementation of searchsorted for PyTorch follows the left behavior; would it be possible to implement the right version as well, to allow ordering from the right?
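For reference, the NumPy behavior the table above describes — with repeated values, left returns the first valid insertion point and right the last:

```python
import numpy as np

a = np.array([1, 2, 2, 3])
# 'left': first index i with a[i-1] < 2 <= a[i]
print(np.searchsorted(a, 2, side='left'))   # 1
# 'right': first index i with a[i-1] <= 2 < a[i]
print(np.searchsorted(a, 2, side='right'))  # 3
```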
Hi,
Are there plans to convert this repo into a PyPI
package? The amount of work needed there could be trivial (as you already seem to have a setup.py
file and stuff).
That could be incredibly useful: users could then simply add torchsearchsorted to requirements.txt.
Hi,
I was considering using your code to implement the official PyTorch torch.searchsorted, but I can only do that if your repository has a compatible license.
Could you please consider adding a license to your codebase? FYI, PyTorch has a BSD 3-clause license.
I get the following error when installing:
python setup.py install
running install
running bdist_egg
running egg_info
creating src/torchsearchsorted.egg-info
writing src/torchsearchsorted.egg-info/PKG-INFO
writing dependency_links to src/torchsearchsorted.egg-info/dependency_links.txt
writing requirements to src/torchsearchsorted.egg-info/requires.txt
writing top-level names to src/torchsearchsorted.egg-info/top_level.txt
writing manifest file 'src/torchsearchsorted.egg-info/SOURCES.txt'
reading manifest file 'src/torchsearchsorted.egg-info/SOURCES.txt'
writing manifest file 'src/torchsearchsorted.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torchsearchsorted
copying src/torchsearchsorted/__init__.py -> build/lib.linux-x86_64-3.6/torchsearchsorted
copying src/torchsearchsorted/utils.py -> build/lib.linux-x86_64-3.6/torchsearchsorted
copying src/torchsearchsorted/searchsorted.py -> build/lib.linux-x86_64-3.6/torchsearchsorted
running build_ext
building 'torchsearchsorted.cpu' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/src
creating build/temp.linux-x86_64-3.6/src/cpu
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include -I/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/t
orch/lib/include/torch/csrc/api/include -I/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/TH -I/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/THC -I/mnt/lustre/panxingang/anaconda3/envs
/pytorch1/include/python3.6m -c src/cpu/searchsorted_cpu_wrapper.cpp -o build/temp.linux-x86_64-3.6/src/cpu/searchsorted_cpu_wrapper.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:9:0,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/all.h:4,
from /mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/torch/extension.h:4,
from src/cpu/searchsorted_cpu_wrapper.h:4,
from src/cpu/searchsorted_cpu_wrapper.cpp:1:
src/cpu/searchsorted_cpu_wrapper.cpp: In lambda function:
src/cpu/searchsorted_cpu_wrapper.cpp:102:45: error: expected primary-expression before ‘>’ token
scalar_t* a_data = a.data_ptr<scalar_t>();
^
/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/ATen/Dispatch.h:10:12: note: in definition of macro ‘AT_PRIVATE_CASE_TYPE’
return __VA_ARGS__(); \
^
src/cpu/searchsorted_cpu_wrapper.cpp:100:3: note: in expansion of macro ‘AT_DISPATCH_ALL_TYPES’
AT_DISPATCH_ALL_TYPES(a.type(), "searchsorted cpu", [&] {
^
src/cpu/searchsorted_cpu_wrapper.cpp:102:47: error: expected primary-expression before ‘)’ token
scalar_t* a_data = a.data_ptr<scalar_t>();
^
/mnt/lustre/panxingang/anaconda3/envs/pytorch1/lib/python3.6/site-packages/torch/lib/include/ATen/Dispatch.h:10:12: note: in definition of macro ‘AT_PRIVATE_CASE_TYPE’
return __VA_ARGS__(); \
^
src/cpu/searchsorted_cpu_wrapper.cpp:100:3: note: in expansion of macro ‘AT_DISPATCH_ALL_TYPES’
AT_DISPATCH_ALL_TYPES(a.type(), "searchsorted cpu", [&] {
error: command 'gcc' failed with exit status 1
I am using PyTorch 1.0.0 installed via Anaconda.
My system configuration:
CentOS v7
TITAN X.
Nvidia driver (384.90)
CUDA 9.0
Pytorch 1.0.0 installed via Anaconda.
Python 3.6.0 :: Anaconda
Do you have any suggestions to address this issue?
Building wheels failed in colab tpu
!python --version
!nvcc --version
!sudo apt-get install g++-8 gcc-8
!sudo ln -s /usr/bin/gcc-8 /usr/local/cuda-10.1/bin/gcc
!sudo ln -s /usr/bin/g++-8 /usr/local/cuda-10.1/bin/g++
!pip install torch
%cd /content/
!git clone https://github.com/aliutkus/torchsearchsorted.git
%cd /content/torchsearchsorted
!pip install .
LOG
Python 3.6.9
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Reading package lists... Done
Building dependency tree
Reading state information... Done
g++-8 is already the newest version (8.4.0-1ubuntu1~18.04).
gcc-8 is already the newest version (8.4.0-1ubuntu1~18.04).
0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded.
ln: failed to create symbolic link '/usr/local/cuda-10.1/bin/gcc': File exists
ln: failed to create symbolic link '/usr/local/cuda-10.1/bin/g++': File exists
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.6.0a0+f5bc91f)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.18.4)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch) (0.18.2)
/content/
fatal: destination path 'torchsearchsorted' already exists and is not an empty directory.
/content/torchsearchsorted
Processing /content/torchsearchsorted
Building wheels for collected packages: torchsearchsorted
Building wheel for torchsearchsorted (setup.py) ... error
ERROR: Failed building wheel for torchsearchsorted
Running setup.py clean for torchsearchsorted
Failed to build torchsearchsorted
Installing collected packages: torchsearchsorted
I have Ubuntu 18.04, python 3.6.9, and I've tried installing this in a virtual environment with each combination of pytorch 1.3.1 and 1.4.0 and cuda 10.1 and 10.2.
I put symlinks to gcc and g++ 8 in my cuda/bin directory.
When I run pip install . in the root directory I get:
"6 errors detected in the compilation of "/tmp/tmpxft_0000137f_00000000-6_searchsorted_cuda_kernel.cpp1.ii".
error: command '/usr/bin/nvcc' failed with exit status 1"
(the actual number of errors here was different between cuda 10.1 and cuda 10.2).
Some of the errors during compilation:
" /home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list
argument types are: (torch::enumtype::_compute_enum_name, torch::nn::KLDivLossOptions::reduction_t)
detected during:
instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::KLDivLossOptions::reduction_t]"
(176): here
instantiation of "at::Reduction::Reduction torch::enumtype::reduction_get_enum(V) [with V=torch::nn::KLDivLossOptions::reduction_t]"
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/loss.h(49): here
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list
argument types are: (torch::enumtype::_compute_enum_name, torch::nn::MultiLabelSoftMarginLossOptions::reduction_t)
detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::MultiLabelSoftMarginLossOptions::reduction_t]"
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/loss.h(336): here
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list
argument types are: (torch::enumtype::_compute_enum_name, torch::nn::functional::PadFuncOptions::mode_t)
detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::functional::PadFuncOptions::mode_t]"
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/padding.h(40): here
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(164): error: no instance of function template "c10::visit" matches the argument list
argument types are: (torch::enumtype::_compute_enum_name, torch::nn::functional::InterpolateFuncOptions::mode_t)
detected during instantiation of "std::string torch::enumtype::get_enum_name(V) [with V=torch::nn::functional::InterpolateFuncOptions::mode_t]"
/home/andrew/.virtualenvs/nerf/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/upsampling.h(103): here"
Any ideas?
Hi!
I accidentally passed a tensor with inf and nan values into torchsearchsorted. Instead of throwing, the function still produces an output tensor, but that tensor is a disaster: any operation on it (for example, printing it) freezes the entire process, and not even Ctrl-C can break it. I had to kill the process quickly to avoid further damage to the machine.
It would be great if you could add a simple check before the computation and terminate with an error if any nan or inf values are found.
Thanks!
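A guard along these lines would catch the bad inputs up front (sketched here with NumPy; checked_searchsorted is a hypothetical wrapper for illustration, not part of the package):

```python
import numpy as np

def checked_searchsorted(a, v):
    """Hypothetical wrapper: reject non-finite inputs before searching."""
    if not (np.isfinite(a).all() and np.isfinite(v).all()):
        raise ValueError("searchsorted inputs must be finite (no nan/inf)")
    return np.searchsorted(a, v)

print(checked_searchsorted(np.array([0.0, 1.0, 2.0]), np.array([0.5])))  # [1]
```

The same pattern in torch would check torch.isfinite on both tensors before launching the kernel; the check is O(n) but trivially cheap compared to a frozen process.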
Im currently trying to get to run, but i got the following error:
Traceback (most recent call last):
File "train_nerf.py", line 405, in <module>
main()
File "train_nerf.py", line 240, in main
encode_direction_fn=encode_direction_fn,
File "/home/nerfteam/nerfmeshes/src/nerf/nerf/train_utils.py", line 180, in run_one_iter_of_nerf
for batch in batches
File "/home/nerfteam/nerfmeshes/src/nerf/nerf/train_utils.py", line 180, in <listcomp>
for batch in batches
File "/home/nerfteam/nerfmeshes/src/nerf/nerf/train_utils.py", line 101, in predict_and_render_radiance
det=(getattr(options.nerf, mode).perturb == 0.0),
File "/home/nerfteam/nerfmeshes/src/nerf/nerf/nerf_helpers.py", line 288, in sample_pdf_2
inds = torchsearchsorted.searchsorted(cdf, u, side="right")
File "/home/nerfteam/nerfmeshes/.venv/lib/python3.7/site-packages/torchsearchsorted/searchsorted.py", line 41, in searchsorted
raise Exception('torchsearchsorted on CUDA device is asked, but it seems '
Exception: torchsearchsorted on CUDA device is asked, but it seems that it is not available. Please install it
I use poetry, and I tried both adding torchsearchsorted as a dependency and installing it through pip as advised in the installation part of the README. In which situations can this exception be thrown? I would be happy if you could point me in the right direction.
According to the definition of searchsorted, the returned tensor contains indexes, which are usually of dtype torch.long in PyTorch.
However, the current implementation returns float indexes when out is not passed as a parameter, and throws an error if a LongTensor is given as out.
I think I identified where this happens:
- when out is not passed, a result tensor is allocated in the python interface here using the dtype of v
- res.type() is used as the scalar_t type in the CUDA code
Is it possible to have torch.long indexes, both as return values and as the accepted type for the out parameter?
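For comparison, NumPy's searchsorted already behaves the way this issue requests: its return value is an integer index array (np.intp), never a float array derived from the dtype of v:

```python
import numpy as np

idx = np.searchsorted(np.array([0.0, 1.0, 2.0]), np.array([0.5, 1.5]))
print(idx)                                   # [1 2]
# Indices are an integral dtype even though the inputs are float.
assert np.issubdtype(idx.dtype, np.integer)
```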
(nerf_pl) hsc@hsc-B150-D3A:~/condaproject/nerf_pl-master/torchsearchsorted$ pip install .
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Processing /home/hsc/condaproject/nerf_pl-master/torchsearchsorted
Building wheels for collected packages: torchsearchsorted
Building wheel for torchsearchsorted (setup.py) ... \
Everything works fine on CPU. But when I tried to run the test.py on GPU, exception is raised:
" torchsearchsorted on CUDA device is asked, but it seems that it is not available. Please install it"
Please see below:
Looking for 50000x1000 values in 50000x300 entries
NUMPY: searchsorted in 3538.361ms
CPU: searchsorted in 2558.010ms
difference between CPU and NUMPY: 0.000
Traceback (most recent call last):
File "test.py", line 58, in
test_GPU = searchsorted(a, v, test_GPU, side)
File "/home/yin/anaconda3/lib/python3.7/site-packages/torchsearchsorted/searchsorted.py", line 41, in searchsorted
raise Exception('torchsearchsorted on CUDA device is asked, but it seems '
Exception: torchsearchsorted on CUDA device is asked, but it seems that it is not available. Please install it
"
Hi, thanks for the beautiful implementation here.
I'm trying to install this module, but I got the following error:
ERROR: Command errored out with exit status 1:
command: /root/Anacondas/anaconda3/envs/nerf/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/p300/Code/gnerf-pytorch/torchsearchsorted/setup.py'"'"'; __file__='"'"'/p300/Code/gnerf-pytorch/torchsearchsorted/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-37xwwr5z cwd: /p300/Code/gnerf-pytorch/torchsearchsorted/
...
File "/root/Anacondas/anaconda3/envs/nerf/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1413, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
I only copy-pasted the start and the end of the whole error because it is too long.
The whole error logs are attached below:
err.log
BTW, I'm running with PyTorch 1.5 on RTX 2080 Ti GPUs.
I also tried PyTorch 1.3 and 1.4, but had no luck.
running nvcc --version
gives me:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Any help?