
multiperson's Introduction

Coherent Reconstruction of Multiple Humans from a Single Image

Code repository for the paper:
Coherent Reconstruction of Multiple Humans from a Single Image
Wen Jiang*, Nikos Kolotouros*, Georgios Pavlakos, Xiaowei Zhou, Kostas Daniilidis
CVPR 2020 [paper] [project page]

[teaser figure]

Contents

Our repository includes training/testing/demo code for our paper. Additionally, some parts of the code can be useful in a standalone manner. More specifically:

Neural Mesh Renderer: Fast implementation of the original NMR.

SDF: CUDA implementation of the SDF computation and our SDF-based collision loss (a standalone usage sketch follows this list).

SMPLify 3D fitting: Extension of SMPLify that offers the functionality of fitting the SMPL model to 3D keypoints.
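For example, the SDF module can be used on its own outside of this codebase. The snippet below is only a minimal sketch: the class name SDF, its import path, and the exact call signature are assumptions inferred from sdf/sdf.py, so please check that file before relying on it.

# Hypothetical standalone use of the SDF package (requires a CUDA GPU).
# The class name and call signature are assumptions; see sdf/sdf.py for
# the authoritative interface.
import torch
from sdf import SDF  # assumed to be exposed by the installed sdf package

sdf_layer = SDF()
faces = torch.randint(0, 100, (1, 200, 3), dtype=torch.int32).cuda()  # (batch, n_faces, 3)
vertices = torch.rand(1, 100, 3).cuda()                               # (batch, n_verts, 3)
phi = sdf_layer(faces, vertices)  # signed distance field sampled on a voxel grid
print(phi.shape)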

Installation instructions

This codebase was adapted from an early version of mmdetection and mmcv. We highly recommend that users read the READMEs of mmcv and mmdetection before using this code.

To install mmcv and mmdetection:

conda env create -f environment.yml
cd neural_renderer/
python3 setup.py install
cd ../mmcv
python3 setup.py install
cd ../mmdetection
./compile.sh
python setup.py develop
cd ../sdf
python3 setup.py install
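To verify that the extensions built correctly, a quick import check such as the one below can help; the package names simply follow the directories installed above.

# Sanity check that the compiled packages import without errors.
import mmcv
import mmdet
import neural_renderer
import sdf

print("mmcv:", mmcv.__version__)
print("mmdet:", mmdet.__version__)
print("neural_renderer and sdf imported successfully")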

Fetch data

Download our model data and place it under mmdetection/data. This includes the model checkpoint and the joint regressors. You also need to download the mean SMPL parameters from here. Besides these files, you also need to download the SMPL model. You will need the neutral model for training, evaluation, and running the demo code. Please go to the websites of the corresponding projects and register to get access to the downloads section. In case you need to convert the models to be compatible with python3, please follow the instructions here.
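Before moving on, it can save time to verify that the files are in place. The sketch below is only illustrative: the two paths come from the demo command and the SMPL loading code, so extend the list with the joint regressor and mean parameter files you actually downloaded.

# Illustrative check that the downloaded data is where the code expects it.
# Extend the list with the other files you placed under mmdetection/data.
import os

expected = [
    "mmdetection/data/checkpoint.pt",
    "mmdetection/data/smpl/SMPL_NEUTRAL.pkl",
]
for path in expected:
    status = "OK" if os.path.exists(path) else "MISSING"
    print(f"{status:8s} {path}")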

After finishing with the installation and downloading the necessary data, you can continue with running the demo/evaluation/training code.

Run demo code

We provide code to evaluate our pretrained model on a folder of images by running:

cd mmdetection
python3 tools/demo.py --config=configs/smpl/tune.py --image_folder=demo_images/ --output_folder=results/ --ckpt data/checkpoint.pt
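If you want to run the demo on several image folders, a small wrapper such as the one below can be convenient. It only reuses the flags from the command above, should be run from the mmdetection directory, and the folder names are placeholders for your own data.

# Run the demo on several image folders in sequence (execute from mmdetection/).
# Only the flags from the demo command above are used; folder names are placeholders.
import subprocess

image_folders = ["demo_images/", "my_images_1/", "my_images_2/"]
for folder in image_folders:
    subprocess.run(
        [
            "python3", "tools/demo.py",
            "--config=configs/smpl/tune.py",
            f"--image_folder={folder}",
            f"--output_folder=results/{folder.rstrip('/')}/",
            "--ckpt", "data/checkpoint.pt",
        ],
        check=True,
    )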

Prepare datasets

Please refer to DATASETS.md for the preparation of the dataset files.

Run evaluation code

Besides the demo code, we also provide code to evaluate our models on the datasets we use for quantitative evaluation. Before continuing, please make sure that you have followed the test-set preparation steps.

You can use either our pretrained checkpoint or a model you trained yourself to evaluate on Panoptic, MuPoTS-3D, Human3.6M, and PoseTrack.

Example usage:

cd mmdetection
python3 tools/full_eval.py configs/smpl/tune.py full_h36m --ckpt ./work_dirs/tune/latest.pth

Running the above command will compute the MPJPE and Reconstruction Error on the Human3.6M dataset (Protocol I). The full_h36m option can be replaced with other datasets or sequences, depending on the type of evaluation you want to perform:

  • haggling: haggling sequence of Panoptic
  • mafia: mafia sequence of Panoptic
  • ultimatum: ultimatum sequence of Panoptic
  • pizza: pizza sequence of Panoptic
  • mupots: MuPoTS-3D dataset
  • posetrack: PoseTrack dataset

Regarding the evaluation:

  • For Panoptic, the command will compute the MPJPE for each sequence.
  • For MuPoTS-3D, the command will save the results to work_dirs/tune/mupots.mat, which can be used as input to the official MuPoTS-3D test script.
  • For H36M, the command will compute the P1 and P2 errors on the test set.
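To run the full quantitative evaluation in one go, you can simply loop over the options listed above; the sketch below is just a convenience wrapper around the command shown earlier and should be run from the mmdetection directory.

# Convenience wrapper: run full_eval.py for every evaluation target listed above.
# Execute from mmdetection/ and adjust the checkpoint path if needed.
import subprocess

targets = ["haggling", "mafia", "ultimatum", "pizza", "mupots", "posetrack", "full_h36m"]
for target in targets:
    subprocess.run(
        ["python3", "tools/full_eval.py", "configs/smpl/tune.py", target,
         "--ckpt", "./work_dirs/tune/latest.pth"],
        check=True,
    )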

Run training code

Please make sure you have prepared all datasets before running our training script. Training proceeds in three phases: pretraining -> baseline -> fine-tuning. We provide three corresponding configuration files under mmdetection/configs/smpl/. To train our model from scratch:

cd mmdetection
# Phase 1: pretraining
python3 tools/train.py configs/smpl/pretrain.py --create_dummy
while true
do
    python3 tools/train.py configs/smpl/pretrain.py
done
# Move to the next phase after training for 240k iterations

# Phase 2: baseline
python3 tools/train.py configs/smpl/baseline.py --load_pretrain ./work_dirs/pretrain/latest.pth
while true
do
    python3 tools/train.py configs/smpl/baseline.py
done
# Move to the next phase after training for 180k iterations

# Phase 3: Fine-tuning
python3 tools/train.py configs/smpl/tune.py --load_pretrain ./work_dirs/baseline/latest.pth
while true
do
    python3 tools/train.py configs/smpl/tune.py
done
# Fine-tuning is complete after 100k iterations

All checkpoints, evaluation results, and logs are saved to ./mmdetection/work_dirs/{pretrain,baseline,tune} respectively. The training program saves a checkpoint and exits every 50 minutes, which is why the loops above restart it; you can change the time_limit in the configuration files to something more convenient.
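If you prefer a single script over the shell loops above, a restart wrapper along the following lines can be used. This is only a sketch: it assumes the checkpoints follow the usual mmcv format, where latest.pth stores a meta dict with an 'iter' entry, so inspect one of your own checkpoints to confirm before relying on it.

# Sketch of a restart wrapper for one training phase (execute from mmdetection/).
# Assumes mmcv-style checkpoints whose 'meta' dict records the iteration count;
# check a saved latest.pth to confirm this before relying on the stopping rule.
import os
import subprocess
import torch

def run_phase(config, work_dir, target_iters, extra_args=()):
    # The first launch may carry extra flags such as --create_dummy or --load_pretrain.
    subprocess.run(["python3", "tools/train.py", config, *extra_args], check=False)
    while True:
        ckpt = os.path.join(work_dir, "latest.pth")
        if os.path.exists(ckpt):
            meta = torch.load(ckpt, map_location="cpu").get("meta", {})
            if meta.get("iter", 0) >= target_iters:
                break
        subprocess.run(["python3", "tools/train.py", config], check=False)

# Phase 1 example; the iteration target is taken from the comments above.
run_phase("configs/smpl/pretrain.py", "work_dirs/pretrain", 240_000,
          extra_args=("--create_dummy",))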

Citing

If you find this code useful for your research, or use data generated by our method, please consider citing the following paper:

@Inproceedings{jiang2020mpshape,
  Title          = {Coherent Reconstruction of Multiple Humans from a Single Image},
  Author         = {Jiang, Wen and Kolotouros, Nikos and Pavlakos, Georgios and Zhou, Xiaowei and Daniilidis, Kostas},
  Booktitle      = {CVPR},
  Year           = {2020}
}

Acknowledgements

This code uses mmcv and mmdetection as its backbone. We gratefully acknowledge the impact these libraries had on our work. If you use our code, please consider citing their original papers as well.

multiperson's People

Contributors

geopavlakos, jiangwenpl, nkolot


multiperson's Issues

Speed for fitting SMPL-X to 3D key points

Hi,

Thanks a ton for the great work!

I am trying out the script to fit SMPL-X to 3D keypoints and am noticing a speed of about 40 secs per frame; this is rather slow even with parallelization. Any suggestions?

neutral model

Hi,

I ran the demo but got an assertion indicating that data/smpl/SMPL_NEUTRAL.pkl does not exist. Then I noticed the fetch data section indicates that the 'neutral model' (http://smplify.is.tue.mpg.de/) is needed for training, evaluation, and running the demo code. That website contains multiple files. Could you please confirm if that model is the one located in lsp_results.targ.gz (obtained from http://smplify.is.tue.mpg.de/) > results > lsp > all_results_gender-neutral.pkl? If so, I would have to rename all_results_gender-neutral.pkl to SMPL_NEUTRAL.pkl. After renaming the pkl, the code returned the error below, which makes me think I got the wrong pkl:

File "/multiperson/mmdetection/mmdet/models/utils/smpl/smpl.py", line 51, in __init__
    super(SMPL, self).__init__(*args, create_global_orient=False, create_body_pose=False, create_betas=False, create_transl=False, **kwargs)
  File "/python3.6/site-packages/smplx/body_models.py", line 205, in __init__
    self.faces = data_struct.f
AttributeError: 'Struct' object has no attribute 'f'

Cheers,
Gabriel

Computation of MPJPE

Hi,
Thanks for uploading your code!
I am wondering:

  1. do you use joint visibility to compute MPJPE for CMU Panoptic dataset?
  2. do you use PA for the results reported in the paper?

Cheers!

How can I install multiperson for PyTorch 1.6?

I am trying to install multiperson repo for PyTorch 1.6 and I get the following error. How can I make it work for PyTorch 1.6?

(base) mona@mona:~/research$ cd phosa/
(base) mona@mona:~/research/phosa$ mkdir -p external
(base) mona@mona:~/research/phosa$ git clone https://github.com/JiangWenPL/multiperson.git external/multiperson
Cloning into 'external/multiperson'...
remote: Enumerating objects: 752, done.
remote: Counting objects: 100% (752/752), done.
remote: Compressing objects: 100% (566/566), done.
remote: Total 752 (delta 189), reused 723 (delta 173), pack-reused 0
Receiving objects: 100% (752/752), 48.29 MiB | 46.26 MiB/s, done.
Resolving deltas: 100% (189/189), done.
(base) mona@mona:~/research/phosa$ pip install external/multiperson/neural_renderer
Processing ./external/multiperson/neural_renderer
Building wheels for collected packages: neural-renderer-pytorch
  Building wheel for neural-renderer-pytorch (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mona/anaconda3/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-nl9m5bw0
       cwd: /tmp/pip-req-build-ma51z6r7/
  Complete output (210 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.7
  creating build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/load_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/perspective.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/vertices_to_faces.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/visibility.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/get_points_from_angles.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/look.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/projection.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/rasterize.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/save_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.7/neural_renderer
  copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.7/neural_renderer
  creating build/lib.linux-x86_64-3.7/neural_renderer/cuda
  copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer/cuda
  running build_ext
  building 'neural_renderer.cuda.load_textures' extension
  creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7
  creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer
  creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda
  Emitting ninja build file /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/2] /usr/bin/nvcc -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda_kernel.cu -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
  [2/2] c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o
  c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                   from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
  /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
     84 | #pragma omp parallel for if ((end - begin) >= grain_size)
        |
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp: In function ‘at::Tensor load_textures(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int)’:
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
        |                                       ^
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
     17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
        |                        ^~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
     28 |     CHECK_INPUT(image);
        |     ^~~~~~~~~~~
  In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                   from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
  /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
    268 |   DeprecatedTypeProperties & type() const {
        |                              ^~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:23: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’?
     15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
        |                       ^~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
     17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
        |                        ^~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
     28 |     CHECK_INPUT(image);
        |     ^~~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
        |                                       ^
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
     17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
        |                        ^~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:29:5: note: in expansion of macro ‘CHECK_INPUT’
     29 |     CHECK_INPUT(faces);
        |     ^~~~~~~~~~~
  In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                   from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
  /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
    268 |   DeprecatedTypeProperties & type() const {
        |                              ^~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
        |                                       ^
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
     17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
        |                        ^~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:30:5: note: in expansion of macro ‘CHECK_INPUT’
     30 |     CHECK_INPUT(is_update);
        |     ^~~~~~~~~~~
  In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                   from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
  /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
    268 |   DeprecatedTypeProperties & type() const {
        |                              ^~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
     15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
        |                                       ^
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
     17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
        |                        ^~~~~~~~~~
  /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:31:5: note: in expansion of macro ‘CHECK_INPUT’
     31 |     CHECK_INPUT(textures);
        |     ^~~~~~~~~~~
  In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                   from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                   from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
  /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
    268 |   DeprecatedTypeProperties & type() const {
        |                              ^~~~
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
      env=env)
    File "/home/mona/anaconda3/lib/python3.7/subprocess.py", line 512, in run
      output=stdout, stderr=stderr)
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
  
  During handling of the above exception, another exception occurred:
  
  Traceback (most recent call last):
    File "<string>", line 1, in <module>
    File "/tmp/pip-req-build-ma51z6r7/setup.py", line 40, in <module>
      cmdclass = {'build_ext': BuildExtension}
    File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup
      return distutils.core.setup(**attrs)
    File "/home/mona/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/mona/anaconda3/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 290, in run
      self.run_command('build')
    File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/mona/anaconda3/lib/python3.7/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
      _build_ext.run(self)
    File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
      _build_ext.build_ext.run(self)
    File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run
      self.build_extensions()
    File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
      build_ext.build_extensions(self)
    File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
      _build_ext.build_ext.build_extensions(self)
    File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
      self._build_extensions_serial()
    File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
      self.build_extension(ext)
    File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
      _build_ext.build_extension(self, ext)
    File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
      depends=ext.depends)
    File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
      with_cuda=with_cuda)
    File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
      error_prefix='Error compiling objects for extension')
    File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
      raise RuntimeError(message)
  RuntimeError: Error compiling objects for extension
  ----------------------------------------
  ERROR: Failed building wheel for neural-renderer-pytorch
  Running setup.py clean for neural-renderer-pytorch
Failed to build neural-renderer-pytorch
Installing collected packages: neural-renderer-pytorch
  Attempting uninstall: neural-renderer-pytorch
    Found existing installation: neural-renderer-pytorch 1.1.3
    Uninstalling neural-renderer-pytorch-1.1.3:
      Successfully uninstalled neural-renderer-pytorch-1.1.3
    Running setup.py install for neural-renderer-pytorch ... error
    ERROR: Command errored out with exit status 1:
     command: /home/mona/anaconda3/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ul8nk1jn/install-record.txt --single-version-externally-managed --compile --install-headers /home/mona/anaconda3/include/python3.7m/neural-renderer-pytorch
         cwd: /tmp/pip-req-build-ma51z6r7/
    Complete output (212 lines):
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.7
    creating build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/load_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/perspective.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/vertices_to_faces.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/visibility.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/get_points_from_angles.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/look.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/projection.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/rasterize.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/save_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.7/neural_renderer
    copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.7/neural_renderer
    creating build/lib.linux-x86_64-3.7/neural_renderer/cuda
    copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer/cuda
    running build_ext
    building 'neural_renderer.cuda.load_textures' extension
    creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7
    creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer
    creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda
    Emitting ninja build file /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/build.ninja...
    Compiling objects...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    [1/2] c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    FAILED: /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o
    c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                     from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
    /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
       84 | #pragma omp parallel for if ((end - begin) >= grain_size)
          |
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp: In function ‘at::Tensor load_textures(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int)’:
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
       15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
          |                                       ^
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
       17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
          |                        ^~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
       28 |     CHECK_INPUT(image);
          |     ^~~~~~~~~~~
    In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                     from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
    /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
      268 |   DeprecatedTypeProperties & type() const {
          |                              ^~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:23: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’?
       15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
          |                       ^~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
       17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
          |                        ^~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
       28 |     CHECK_INPUT(image);
          |     ^~~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
       15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
          |                                       ^
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
       17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
          |                        ^~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:29:5: note: in expansion of macro ‘CHECK_INPUT’
       29 |     CHECK_INPUT(faces);
          |     ^~~~~~~~~~~
    In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                     from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
    /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
      268 |   DeprecatedTypeProperties & type() const {
          |                              ^~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
       15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
          |                                       ^
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
       17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
          |                        ^~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:30:5: note: in expansion of macro ‘CHECK_INPUT’
       30 |     CHECK_INPUT(is_update);
          |     ^~~~~~~~~~~
    In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                     from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
    /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
      268 |   DeprecatedTypeProperties & type() const {
          |                              ^~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
       15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
          |                                       ^
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
       17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
          |                        ^~~~~~~~~~
    /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:31:5: note: in expansion of macro ‘CHECK_INPUT’
       31 |     CHECK_INPUT(textures);
          |     ^~~~~~~~~~~
    In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                     from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
                     from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1:
    /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here
      268 |   DeprecatedTypeProperties & type() const {
          |                              ^~~~
    [2/2] /usr/bin/nvcc -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda_kernel.cu -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14
    ninja: build stopped: subcommand failed.
    Traceback (most recent call last):
      File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
        env=env)
      File "/home/mona/anaconda3/lib/python3.7/subprocess.py", line 512, in run
        output=stdout, stderr=stderr)
    subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-req-build-ma51z6r7/setup.py", line 40, in <module>
        cmdclass = {'build_ext': BuildExtension}
      File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup
        return distutils.core.setup(**attrs)
      File "/home/mona/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
        self.run_command(cmd)
      File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run
        return orig.install.run(self)
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/install.py", line 545, in run
        self.run_command('build')
      File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/build.py", line 135, in run
        self.run_command(cmd_name)
      File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
        cmd_obj.run()
      File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
        _build_ext.run(self)
      File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
        _build_ext.build_ext.run(self)
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run
        self.build_extensions()
      File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
        build_ext.build_extensions(self)
      File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
        _build_ext.build_ext.build_extensions(self)
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
        self._build_extensions_serial()
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
        self.build_extension(ext)
      File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
        _build_ext.build_extension(self, ext)
      File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
        depends=ext.depends)
      File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
        with_cuda=with_cuda)
      File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
        error_prefix='Error compiling objects for extension')
      File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
        raise RuntimeError(message)
    RuntimeError: Error compiling objects for extension
    ----------------------------------------
  Rolling back uninstall of neural-renderer-pytorch
  Moving to /home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer/
   from /home/mona/anaconda3/lib/python3.7/site-packages/~eural_renderer
  Moving to /home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer_pytorch-1.1.3.dist-info/
   from /home/mona/anaconda3/lib/python3.7/site-packages/~eural_renderer_pytorch-1.1.3.dist-info
ERROR: Command errored out with exit status 1: /home/mona/anaconda3/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ul8nk1jn/install-record.txt --single-version-externally-managed --compile --install-headers /home/mona/anaconda3/include/python3.7m/neural-renderer-pytorch Check the logs for full command output.

I have:

$ python
Python 3.7.6 (default, Jan  8 2020, 19:59:22) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.6.0'
$ lsb_release -a
LSB Version:	core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal

$ nvidia-smi
Sun Dec  6 16:36:36 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2070    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   49C    P8    21W /  N/A |   1546MiB /  7982MiB |      8%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

Loss Problem

Is it reasonable for the losses to oscillate?
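
Some oscillation between iterations is expected: the losses are computed on small stochastic batches and combine several weighted terms, so the raw curve is noisy by construction. A smoothed curve is usually more informative when judging whether training is actually diverging; a minimal, repo-independent sketch:

# Minimal sketch (not repo code): smooth a noisy loss series with an exponential
# moving average before deciding whether it merely oscillates or actually diverges.
def ema(values, alpha=0.05):
    smoothed, running = [], None
    for v in values:
        running = v if running is None else alpha * v + (1 - alpha) * running
        smoothed.append(running)
    return smoothed

raw_losses = [2.1, 1.7, 2.3, 1.5, 1.9, 1.2, 1.6, 1.1]  # e.g. values parsed from the training log
print(ema(raw_losses))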

Problem about the sdf

Thanks for your great work!
I want to use the sdf package in my own project, and the installation finished successfully. But when I try to run my code, I get the following error. How can I solve this problem?

Traceback (most recent call last):
  File "test.py", line 20, in <module>
    fai = get_sdf(test_faces, test_vertics)
  File "/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/sdf.py", line 25, in forward
    phi = SDFFunction.apply(phi, faces, vertices)
  File "/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/sdf.py", line 15, in forward
    return _C.sdf(phi, faces, vertices)
AttributeError: module 'sdf.csrc' has no attribute 'sdf'

The installation process:
`running install
running bdist_egg
running egg_info
creating sdf_pytorch.egg-info
writing sdf_pytorch.egg-info/PKG-INFO
writing dependency_links to sdf_pytorch.egg-info/dependency_links.txt
writing top-level names to sdf_pytorch.egg-info/top_level.txt
writing manifest file 'sdf_pytorch.egg-info/SOURCES.txt'
package init file 'sdf/csrc/__init__.py' not found (or not a regular file)
reading manifest file 'sdf_pytorch.egg-info/SOURCES.txt'
writing manifest file 'sdf_pytorch.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/sdf
copying sdf/sdf.py -> build/lib.linux-x86_64-3.7/sdf
copying sdf/sdf_loss.py -> build/lib.linux-x86_64-3.7/sdf
copying sdf/__init__.py -> build/lib.linux-x86_64-3.7/sdf
running build_ext
building 'sdf.csrc' extension
creating /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7
creating /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf
creating /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc
Emitting ninja build file /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /usr/local/cuda/bin/nvcc -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/TH -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/mnt/data3/HOME/zjx/miniconda3/include/python3.7m -c -c /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda_kernel.cu -o /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc/sdf_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
[2/2] c++ -MMD -MF /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc/sdf_cuda.o.d -pthread -B /mnt/data3/HOME/zjx/miniconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/TH -I/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/mnt/data3/HOME/zjx/miniconda3/include/python3.7m -c -c /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp -o /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc/sdf_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=csrc -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/DeviceType.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Device.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Allocator.h:6,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:7,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp: In function ‘at::Tensor sdf(at::Tensor, at::Tensor, at::Tensor)’:
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/macros/Macros.h:197:64: note: in definition of macro ‘C10_UNLIKELY’
197 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast(expr), 0))
| ^~~~
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/util/Exception.h:441:7: note: in expansion of macro ‘C10_UNLIKELY_OR_CONST’
441 | if (C10_UNLIKELY_OR_CONST(!(cond))) {
| ^~~~~~~~~~~~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:23: note: in expansion of macro ‘TORCH_CHECK’
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:5:24: note: in expansion of macro ‘CHECK_CUDA’
5 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
| ^~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:19:2: note: in expansion of macro ‘CHECK_INPUT’
19 | CHECK_INPUT(phi);
| ^~~~~~~~~~~
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:9,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:194:30: note: declared here
194 | DeprecatedTypeProperties & type() const {
| ^~~~
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/DeviceType.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Device.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Allocator.h:6,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:7,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/macros/Macros.h:197:64: note: in definition of macro ‘C10_UNLIKELY’
197 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast(expr), 0))
| ^~~~
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/util/Exception.h:441:7: note: in expansion of macro ‘C10_UNLIKELY_OR_CONST’
441 | if (C10_UNLIKELY_OR_CONST(!(cond))) {
| ^~~~~~~~~~~~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:23: note: in expansion of macro ‘TORCH_CHECK’
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:5:24: note: in expansion of macro ‘CHECK_CUDA’
5 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
| ^~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:20:2: note: in expansion of macro ‘CHECK_INPUT’
20 | CHECK_INPUT(faces);
| ^~~~~~~~~~~
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:9,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:194:30: note: declared here
194 | DeprecatedTypeProperties & type() const {
| ^~~~
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/DeviceType.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Device.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/core/Allocator.h:6,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:7,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/macros/Macros.h:197:64: note: in definition of macro ‘C10_UNLIKELY’
197 | #define C10_UNLIKELY(expr) (__builtin_expect(static_cast(expr), 0))
| ^~~~
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/c10/util/Exception.h:441:7: note: in expansion of macro ‘C10_UNLIKELY_OR_CONST’
441 | if (C10_UNLIKELY_OR_CONST(!(cond))) {
| ^~~~~~~~~~~~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:3:23: note: in expansion of macro ‘TORCH_CHECK’
3 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^~~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:5:24: note: in expansion of macro ‘CHECK_CUDA’
5 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
| ^~~~~~~~~~
/mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:21:2: note: in expansion of macro ‘CHECK_INPUT’
21 | CHECK_INPUT(vertices);
| ^~~~~~~~~~~
In file included from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:9,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/sdf/csrc/sdf_cuda.cpp:1:
/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:194:30: note: declared here
194 | DeprecatedTypeProperties & type() const {
| ^~~~
g++ -pthread -shared -B /mnt/data3/HOME/zjx/miniconda3/compiler_compat -L/mnt/data3/HOME/zjx/miniconda3/lib -Wl,-rpath=/mnt/data3/HOME/zjx/miniconda3/lib -Wl,--no-as-needed -Wl,--sysroot=/ /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc/sdf_cuda.o /mnt/data3/HOME/zjx/code/PMnet_pytorch/outside-code/sdf/build/temp.linux-x86_64-3.7/sdf/csrc/sdf_cuda_kernel.o -L/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/torch/lib -L/usr/local/cuda/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o build/lib.linux-x86_64-3.7/sdf/csrc.cpython-37m-x86_64-linux-gnu.so
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/sdf
copying build/lib.linux-x86_64-3.7/sdf/csrc.cpython-37m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg/sdf
copying build/lib.linux-x86_64-3.7/sdf/sdf.py -> build/bdist.linux-x86_64/egg/sdf
copying build/lib.linux-x86_64-3.7/sdf/sdf_loss.py -> build/bdist.linux-x86_64/egg/sdf
copying build/lib.linux-x86_64-3.7/sdf/__init__.py -> build/bdist.linux-x86_64/egg/sdf
byte-compiling build/bdist.linux-x86_64/egg/sdf/sdf.py to sdf.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/sdf/sdf_loss.py to sdf_loss.cpython-37.pyc
byte-compiling build/bdist.linux-x86_64/egg/sdf/__init__.py to __init__.cpython-37.pyc
creating stub loader for sdf/csrc.cpython-37m-x86_64-linux-gnu.so
byte-compiling build/bdist.linux-x86_64/egg/sdf/csrc.py to csrc.cpython-37.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying sdf_pytorch.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying sdf_pytorch.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying sdf_pytorch.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying sdf_pytorch.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
sdf.__pycache__.csrc.cpython-37: module references __file__
creating dist
creating 'dist/sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg
removing '/mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg' (and everything under it)
creating /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg
Extracting sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg to /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages
sdf-pytorch 0.0.1 is already the active version in easy-install.pth

Installed /mnt/data3/HOME/zjx/miniconda3/lib/python3.7/site-packages/sdf_pytorch-0.0.1-py3.7-linux-x86_64.egg
Processing dependencies for sdf-pytorch==0.0.1
Finished processing dependencies for sdf-pytorch==0.0.1`
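
The AttributeError above usually means the module that ends up being imported is not the freshly compiled extension. A hedged diagnostic (not part of the repo), assuming the extension is built as sdf.csrc as in the log above: check which files Python actually loads and whether the compiled .so exposes the sdf symbol. If sdf.__file__ points into the source checkout rather than site-packages, the local source folder (which contains sdf/csrc/ only as plain sources, not the compiled .so) is shadowing the installed egg, and running the script from a different directory or reinstalling usually resolves it.

import sdf
import sdf.csrc as _C

# If this prints a path inside the source tree instead of site-packages,
# the local folder is shadowing the installed package.
print(sdf.__file__)
# The compiled extension should list 'sdf' among its exported symbols.
print(_C.__file__)
print([name for name in dir(_C) if not name.startswith('_')])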

Question about datasets

Hi, thanks for sharing the great work and code!

Could you please provide the detailed structure of the ann_file and proposal_file, or an example file, to help us better understand the code? Also, when exactly will you release the data preprocessing code?

Thank you, and I look forward to your reply!

Data for Adversarial Training

Hi,

I was wondering which data you use for the adversarial pre-training of the SMPL head. Is it the same as in the Kanazawa et al. paper, or do you use different data?

Thank you

How can I use the SMPL-X model?

Hi, thanks for wonderful project!

Currently I have run the project with the SMPL (SMPL_NEUTRAL.pkl) model, and I plan to proceed with the SMPL-X (SMPLX_NEUTRAL) model. An error occurs when using the SMPL-X model. Is there a way to use it?


unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.2.bn2.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer1.1.bn1.num_batches_tracked, bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer3.2.bn2.num_batches_tracked

2021-08-12 18:09:38,964 - INFO - load checkpoint from data/checkpoint.pt
Traceback (most recent call last):
  File "/home/neosungsin/anaconda3/envs/multi/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/checkpoint.py", line 87, in load_state_dict
    own_state[name].copy_(param)
RuntimeError: The size of tensor a (20908) must match the size of tensor b (13776) at non-singleton dimension 0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tools/demo.py", line 190, in <module>
    main()
  File "tools/demo.py", line 149, in main
    runner.resume(cfg.resume_from)
  File "/home/neosungsin/anaconda3/envs/multi/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/runner.py", line 330, in resume
    map_location=lambda storage, loc: storage.cuda(device_id))
  File "/home/neosungsin/anaconda3/envs/multi/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/runner.py", line 239, in load_checkpoint
    self.logger)
  File "/home/neosungsin/anaconda3/envs/multi/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/checkpoint.py", line 180, in load_checkpoint
    load_state_dict(model.module, state_dict, strict, logger)
  File "/home/neosungsin/anaconda3/envs/multi/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/checkpoint.py", line 93, in load_state_dict
    name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named smpl_head.loss.sdf_loss.faces, whose dimensions in the model are torch.Size([20908, 3]) and whose dimensions in the checkpoint are torch.Size([13776, 3]).

Also, when the SMPL model is used, the warnings above still appear, but an output result is generated.


unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer3.2.bn2.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer4.0.bn1.num_batches_tracked, bn1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.5.bn1.num_batches_tracked

2021-08-12 17:54:32,318 - INFO - load checkpoint from data/checkpoint.pt
2021-08-12 17:54:32,574 - WARNING - missing keys in source state_dict: smpl_head.loss.smpl.shapedirs, smpl_head.loss.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.loss.smpl.parents, smpl_head.smpl.lbs_weights, smpl_head.smpl.faces_tensor, smpl_head.loss.smpl.faces_tensor, smpl_head.smpl.shapedirs, smpl_head.smpl.parents, smpl_head.smpl.J_regressor_extra, smpl_head.smpl.J_regressor, smpl_head.smpl.v_template, smpl_head.loss.smpl.posedirs, smpl_head.smpl.posedirs, smpl_head.loss.smpl.J_regressor_extra, smpl_head.loss.smpl.v_template, smpl_head.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.loss.smpl.J_regressor, smpl_head.loss.smpl.lbs_weights

2021-08-12 17:54:32,593 - INFO - resumed epoch 27, iter 90901
/home/multi/mmdetection/mmdet/core/bbox/transforms.py:56: UserWarning: This overload of addcmul is deprecated:
	addcmul(Tensor input, Number value, Tensor tensor1, Tensor tensor2, *, Tensor out)
Consider using one of the following signatures instead:
	addcmul(Tensor input, Tensor tensor1, Tensor tensor2, *, Number value, Tensor out) (Triggered internally at  /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
  gx = torch.addcmul(px, 1, pw, dx)  # gx = px + pw * dx
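
The size mismatch above comes from the faces buffer: SMPL meshes have 13776 faces while SMPL-X meshes have 20908, and the released checkpoint stores the SMPL buffer. A hedged workaround sketch (generic PyTorch, not the repo's loader) is to drop checkpoint tensors whose shapes do not match the current model before loading, so the SMPL-X model keeps its own buffers. Note that the checkpoint was trained with SMPL, so even after loading, the SMPL-X predictions will not be meaningful without retraining.

import torch

def load_matching(model, ckpt_path):
    # Keep only checkpoint tensors whose name and shape match the current model,
    # then load the rest non-strictly; mismatched buffers such as the faces
    # tensor are simply skipped.
    ckpt = torch.load(ckpt_path, map_location='cpu')
    state = ckpt.get('state_dict', ckpt)
    own = model.state_dict()
    filtered = {k: v for k, v in state.items()
                if k in own and own[k].shape == v.shape}
    return model.load_state_dict(filtered, strict=False)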

Flipping x-axis in renderer and Perspective model

I was wondering, why do you need to flip the x-axis in .../utils/smpl/renderer.py?

        # Need to flip x-axis
        rot = trimesh.transformations.rotation_matrix(
            np.radians(180), [1, 0, 0])

Also, what are the gains provided by using the full perspective camera model vs. weak perspective for 2D supervision?
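
For reference, the two camera models differ only in whether every joint is divided by its own depth or by a single per-person depth. A toy numpy illustration (not the repo's camera code), assuming root-relative joints X and a camera translation t:

import numpy as np

def perspective(X, f, t):
    # full perspective: each point is divided by its own depth
    Xc = X + t
    return f * Xc[:, :2] / Xc[:, 2:3]

def weak_perspective(X, f, t):
    # weak perspective: one scale for the whole person, taken at its reference depth
    s = f / t[2]
    return s * (X[:, :2] + t[:2])

X = np.array([[0.2, -0.1, 0.3], [-0.3, 0.4, -0.2]])  # two joints, root-relative (metres)
t = np.array([0.0, 0.0, 5.0])                        # person roughly 5 m from the camera
print(perspective(X, 1000.0, t))
print(weak_perspective(X, 1000.0, t))

Under full perspective, people at different depths project with different scales and foreshortening, so the 2D reprojection carries information about where each person sits in the shared camera space, which a per-person weak-perspective camera cannot provide.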

Error running demo.py: TypeError: 'NoneType' object is not subscriptable

Hello,

I am trying to run the demo file on the teaser image. I followed the instructions of #4 because I had the same error as #1. I am a bit confused about the downloads. Particularly:

  • Downloading the SMPL model from my Google Drive
  • Downloading the neutral model .pkl from smplify_code_v2.zip

I cannot seem to find smpl.zip anywhere, so I used the .pkl file from smplify_code_v2 under data/smpl/. The code seems to run now; however, even though I get prediction results, the output of prepare_dump is None. This is my output:

WARNING: You are using a SMPL model, with only 10 shape coefficients.
WARNING: You are using a SMPL model, with only 10 shape coefficients.
unexpected key in source state_dict: fc.weight, fc.bias
missing keys in source state_dict: layer2.0.bn1.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.5.bn2.num_batches_tracked, bn1.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer1.1.bn2.num_batches_tracked
2021-08-12 16:47:52,049 - INFO - load checkpoint from data/checkpoint.pt
2021-08-12 16:47:52,503 - WARNING - missing keys in source state_dict: smpl_head.loss.smpl.faces_tensor, smpl_head.loss.smpl.shapedirs, smpl_head.smpl.shapedirs, smpl_head.loss.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.smpl.posedirs, smpl_head.loss.smpl.J_regressor, smpl_head.smpl.v_template, smpl_head.smpl.faces_tensor, smpl_head.loss.smpl.v_template, smpl_head.smpl.J_regressor, smpl_head.loss.smpl.lbs_weights, smpl_head.smpl.lbs_weights, smpl_head.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.smpl.J_regressor_extra, smpl_head.smpl.parents, smpl_head.loss.smpl.J_regressor_extra, smpl_head.loss.smpl.parents, smpl_head.loss.smpl.posedirs
2021-08-12 16:47:52,535 - INFO - resumed epoch 27, iter 90901
WARNING: You are using a SMPL model, with only 10 shape coefficients.
Traceback (most recent call last):
File "/home/multiperson/mmdetection/tools/demo.py", line 190, in
main()
File "/home/multiperson/mmdetection/tools/demo.py", line 186, in main
cv2.imwrite(f'{file_name.replace(folder_name, output_folder)}.output.jpg', img_viz[:, :, ::-1])
TypeError: 'NoneType' object is not subscriptable
startswith first arg must be bytes or a tuple of bytes, not str

I commented out the try-except part (demo.py lines 89-94) to get a more detailed error message. The problem seems to occur in renderer.py line 88:

Traceback (most recent call last):
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/latebind.py", line 41, in call
return self._finalCall( *args, **named )
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nora/multiperson/mmdetection/tools/demo.py", line 185, in main
img_viz = prepare_dump(pred_results, img, render, bbox_results, FOCAL_LENGTH)
File "/home/nora/multiperson/mmdetection/tools/demo.py", line 90, in prepare_dump
fv_rendered = render([img_th.clone()], [pred_verts], translation=[pred_trans])[0]
File "/home/nora/multiperson/mmdetection/mmdet/models/utils/smpl/renderer.py", line 88, in call
color, rend_depth = self.renderer.render(scene, flags=pyrender.RenderFlags.RGBA)
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/pyrender/offscreen.py", line 99, in render
return self._renderer.render(scene, flags)
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/pyrender/renderer.py", line 121, in render
self._update_context(scene, flags)
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/pyrender/renderer.py", line 709, in _update_context
p._add_to_context()
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/pyrender/primitive.py", line 324, in _add_to_context
self._vaid = glGenVertexArrays(1)
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/latebind.py", line 45, in call
return self._finalCall( *args, **named )
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/wrapper.py", line 657, in wrapperCall
result = wrappedOperation( *cArguments )
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 401, in call
if self.load():
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 390, in load
error_checker = self.error_checker,
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 148, in constructFunction
if (not is_core) and not self.checkExtension( extension ):
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 270, in checkExtension
result = extensions.ExtensionQuerier.hasExtension( name )
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/extensions.py", line 98, in hasExtension
result = registered( specifier )
File "/home/nora/anaconda2/envs/multiperson/lib/python3.7/site-packages/OpenGL/extensions.py", line 105, in call
if not specifier.startswith( self.prefix ):
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
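
Both failures in this trace (glGenVertexArrays resolving to None and the startswith bytes/str TypeError) are typical of PyOpenGL running without a usable OpenGL context, for example on a headless machine or over SSH. A hedged workaround sketch: select pyrender's EGL (or osmesa) platform before pyrender is imported; upgrading PyOpenGL has also been reported to help with the bytes/str extension-query error.

import os
# Must be set before pyrender / OpenGL are imported anywhere in the process.
os.environ['PYOPENGL_PLATFORM'] = 'egl'   # or 'osmesa' if EGL is not available
import pyrender
print(pyrender.__version__)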

request for the teapot.obj used in example1.py

Hi, could you please either share teapot.obj or a link to download it, so that example1.py can be reproduced?

mona@goku:~/research/code/multiperson/neural_renderer$ python ./examples/example1.py
Traceback (most recent call last):
  File "./examples/example1.py", line 55, in <module>
    main()
  File "./examples/example1.py", line 31, in main
    vertices, faces = nr.load_obj(args.filename_input)
  File "/home/mona/venv/nmr/lib/python3.8/site-packages/neural_renderer/load_obj.py", line 117, in load_obj
    with open(filename_obj) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/mona/research/code/multiperson/neural_renderer/examples/data/teapot.obj'
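
The asset normally ships with the upstream neural_renderer examples; while it is missing, a tiny hand-written .obj is enough to check that the renderer's mesh loading works at all. A hedged sketch using a stand-in tetrahedron, not the teapot:

import neural_renderer as nr

obj_text = """v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""
with open('/tmp/tetra.obj', 'w') as f:
    f.write(obj_text)

# load_obj in this fork typically returns CUDA tensors, so a GPU is required.
vertices, faces = nr.load_obj('/tmp/tetra.obj')
print(vertices.shape, faces.shape)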

Files are missing

Hi, thank you for sharing this great work. It seems that the "tools" folder is missing in this repository. Could you please reupload these files?

J_regressor_extra vs J_regressor_h36m

What is the difference between the two joint regressors? From what I can tell, J_regressor_extra is used to extract SMPL parameters during the SMPLify-X fitting for dataset generation, while J_regressor_h36m is used solely in the loss calculation.

Since J_regressor_extra is used to fit SMPL keypoints to ground-truth keypoints, would it not also work for the loss calculation?
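
Both regressors are fixed linear maps from the 6890 SMPL vertices to a joint set; they differ only in which joints (and therefore which joint convention) they produce, so whether one can stand in for the other depends on the convention of the ground-truth keypoints the loss compares against. A toy sketch with a hypothetical joint count K:

import torch

B, K = 2, 17                                   # hypothetical batch size and joint count
V = torch.randn(B, 6890, 3)                    # SMPL mesh vertices
J = torch.rand(K, 6890)
J = J / J.sum(dim=1, keepdim=True)             # each joint is a weighted combination of vertices
joints = torch.einsum('kv,bvc->bkc', J, V)     # (B, K, 3) regressed joints
print(joints.shape)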

Error pre-processing h36m data

I get the following error when pre-processing the data with:
python h36m.py /datasets/Human36M/ ../../../mmdetection/data/h36m/rcnn --split=train


Traceback (most recent call last):
  File "h36m.py", line 135, in <module>
    h36m_extract(args.dataset_path, args.out_path, split=args.split)
  File "h36m.py", line 131, in h36m_extract
    pickle.dump(data, f)
NameError: name 'data' is not defined
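
A NameError at the final pickle.dump means data was never assigned in that scope, which typically happens when the extraction loop finds nothing to process (for example, because the Human3.6M path or folder layout differs from what the script expects). A hedged sketch of that failure mode, not the repo's actual code:

import pickle

def h36m_extract_sketch(sequences, out_path):
    data = []                                   # defined before the loop, so it always exists
    for seq in sequences:
        data.append({'name': seq})              # placeholder for the real per-sequence record
    if not data:
        raise RuntimeError('no sequences found -- check the dataset path and folder layout')
    with open(out_path, 'wb') as f:
        pickle.dump(data, f)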

Invalid download link

All the download links for the pretrained model seem to be dead. Could you update them so that we can run tests?

MuPots Evaluation

Hi,
I have a question regarding the MuPoTS annotations. I know you did not create the dataset. However, I see that annotations for some people in the images are missing. For example, TS1/img_0000001.jpg contains 3 people, but 2D and 3D keypoint annotations are provided for only 2 of them.

Have you come across this problem while evaluating your method?

Thanks

panoptic.py error

Hello, I use panoptic.py to generate the test dataset, but I get the error below. I don't know which CMU keypoint set the code uses.

Connected to pydev debugger (build 201.8743.11)
0%| | 0/1200 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/icvhpc1/.pycharm_helpers/pydev/pydevd.py", line 1438, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/icvhpc1/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/icvhpc1/bluce/code/mhmr-v1.1/experiments/release_version/multiperson/misc/preprocess_datasets/full/panoptic.py", line 166, in
process_panoptic(args.panoptic_path, args.dataset_name, args.sequence_idx)
File "/home/icvhpc1/bluce/code/mhmr-v1.1/experiments/release_version/multiperson/misc/preprocess_datasets/full/panoptic.py", line 120, in process_panoptic
kpts3d_J24[J24_to_J15] = skel.T
ValueError: shape mismatch: value array of shape (19,4) could not be broadcast to indexing result of shape (15,4)
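
The shapes in the error indicate that skel.T is a 19 x 4 array (19 joints with x, y, z, confidence), most likely from Panoptic's 19-joint coco19 annotations, while the assignment expects a 15-joint subset; the mismatch usually means the downloaded annotation variant is not the one the script was written for. A hedged, generic sketch of reconciling the shapes; the index list below is a placeholder, not the mapping the script actually intends:

import numpy as np

skel = np.zeros((4, 19))            # (x, y, z, confidence) for 19 Panoptic joints
J19_to_J15 = list(range(15))        # placeholder mapping -- replace with the intended subset
skel15 = skel.T[J19_to_J15]         # (15, 4), matching the 15-slot target above
print(skel15.shape)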

Problem in SDF LOSS

Hi, I am trying to use your SDF loss implementation in my own project, and I am facing the issue described below.

#CODE SNIPPET
import numpy as np
import torch
import sdf

# SMPL faces taken from my model parameters
sdf_loss = sdf.sdf_loss.SDFLoss(np.array(human_parameters['faces'].detach().cpu()))
# give every person the same translation
translations = torch.ones((num_people, 3), device='cuda:0')
vertices = params['pred_vertices']
loss = sdf_loss(vertices, translations)

There are several people in the image, and I have generated the vertices and faces for each of them. I want to calculate the SDF loss when all of them have identical translations (i.e., the same tx, ty and tz), but this loss comes out as zero. I don't understand why: if all the people have the same translation, there should be overlap, which implies collisions, so why is the SDF loss value zero?

Thank you!
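
One thing worth ruling out before suspecting the loss itself: if params['pred_vertices'] already places every person at their own location in space, then adding the same translation to everyone shifts the whole scene rigidly, preserves the relative positions, and therefore creates no new interpenetration, so a zero loss can be correct. A hedged sanity check that mirrors the call pattern above (a tiny tetrahedron stands in for the SMPL mesh; the forward signature is assumed from the snippet):

import numpy as np
import torch
from sdf.sdf_loss import SDFLoss

verts_one = torch.tensor([[0., 0., 0.],
                          [1., 0., 0.],
                          [0., 1., 0.],
                          [0., 0., 1.]], device='cuda')
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

loss_fn = SDFLoss(faces)
for offset in [2.0, 0.5, 0.0]:
    vertices = torch.stack([verts_one, verts_one], dim=0)                  # two identical "people"
    translations = torch.tensor([[0., 0., 0.], [offset, 0., 0.]], device='cuda')
    print(offset, loss_fn(vertices, translations))

If the printed loss becomes positive as the offset shrinks to zero, the implementation is behaving as expected, and the earlier zero simply reflected meshes that never actually interpenetrated.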

Will the version of SMPL-X affect the program?

When I ran the demo with the default version of smplx, I encountered an error: 'no model named ModelOutput'. So I rolled the version back to 0.1.15, and demo.py then ran smoothly. But I found a slight difference between the output for the demo picture and the first figure in the paper, and I don't know why it is different.

Missing key in loading model

Hi, thanks for sharing your code. I ran the demo

python3 tools/demo.py --config=configs/smpl/tune.py --image_folder=demo_images/ --output_folder=results/ --ckpt data/checkpoint.pt

However, I get this info:

unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.1.bn3.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer1.1.bn2.num_batches_tracked, bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.4.bn3.num_batches_tracked

2022-08-19 12:51:02,959 - INFO - load checkpoint from data/checkpoint.pt
2022-08-19 12:51:03,424 - WARNING - missing keys in source state_dict: smpl_head.loss.smpl.J_regressor_extra, smpl_head.smpl.v_template, smpl_head.loss.smpl.faces_tensor, smpl_head.loss.smpl.posedirs, smpl_head.smpl.J_regressor_extra, smpl_head.smpl.parents, smpl_head.loss.smpl.lbs_weights, smpl_head.loss.smpl.parents, smpl_head.smpl.shapedirs, smpl_head.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.smpl.faces_tensor, smpl_head.smpl.posedirs, smpl_head.smpl.lbs_weights, smpl_head.smpl.J_regressor, smpl_head.loss.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.loss.smpl.v_template, smpl_head.loss.smpl.shapedirs, smpl_head.loss.smpl.J_regressor

2022-08-19 12:51:03,450 - INFO - resumed epoch 27, iter 90901

I assumed that with so many missing keys the demo would fail; however, I got a reasonable result.
[screenshot of the demo output, 2022-08-19 1:00 PM]
I tried the suggestion from #1, but I still get the same messages.

Error running demo.py: a parameter group that doesn't match the size of optimizer's group

Hi, kudos for the great work! Thank you for sharing the code.

When trying to run demo.py with the command

python3 tools/demo.py --config=configs/smpl/tune.py --image_folder=demo_images/ --output_folder=results/ --ckpt data/checkpoint.pt

I get the following error:

FIle "/miniconda3/envs/multiperson/lib/python3.7/site-packages/mmcv/runner/runner.py", line 313, in resume
    self.optimizer.load_state_dict(checkpoint['optimizer'])
File "/miniconda3/envs/multiperson/lib/python3.7/site-packages/torch/optim/optimizer.py", line 115, in load_state_dict
    raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

It seems to be triggered by a mismatch between the checkpoint and the model architecture. I also get the following message:


unexpected key in source state_dict: fc.weight, fc.bias
missing keys in source state_dict: layer3.0.bn2.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer4.0.bn3.num_batches_tracked, bn1.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.5.bn2.num_batches_tracked
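
The resume path restores the optimizer state as well, and the stored parameter groups no longer match the optimizer built from the current config. For running the demo, only the model weights are needed, so one hedged workaround is to restore just the state_dict and skip the optimizer entry; the runner used here also exposes load_checkpoint (visible in the SMPL-X issue's traceback above), which loads weights without touching the optimizer. A toy sketch of the weights-only load, using a stand-in module rather than the repo's detector:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                              # stand-in for the built detector
ckpt = {'state_dict': model.state_dict(),            # mmcv-style checkpoint layout
        'optimizer': {'param_groups': []}}           # stored optimizer state, deliberately ignored
torch.save(ckpt, '/tmp/toy_ckpt.pt')

loaded = torch.load('/tmp/toy_ckpt.pt', map_location='cpu')
model.load_state_dict(loaded.get('state_dict', loaded), strict=False)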

RuntimeError when running the demo: CUDA error: unknown error

Traceback (most recent call last):
File "tools/demo.py", line 190, in
main()
File "tools/demo.py", line 131, in main
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 60, in build_detector
return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/detectors/smpl_rcnn.py", line 65, in init
self.smpl_head = builder.build_head(smpl_head)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 52, in build_head
return build(cfg, HEADS)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/smpl_heads/smpl_head.py", line 74, in init
self.loss = build_loss(loss_cfg)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 56, in build_loss
return build(cfg, LOSSES)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 32, in build
return _build_module(cfg, registry, default_args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/builder.py", line 24, in _build_module
return obj_type(**args)
File "/root/picasso/xhzhang/multiperson/mmdetection/mmdet/models/losses/smpl_loss.py", line 107, in init
anti_aliasing=False)
File "/root/anaconda3/envs/multiperson/lib/python3.7/site-packages/neural_renderer_pytorch-1.1.3-py3.7-linux-x86_64.egg/neural_renderer/renderer.py", line 41, in init
self.dist_coeffs = torch.cuda.FloatTensor([[0., 0., 0., 0., 0.]])
RuntimeError: CUDA error: unknown error
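
An 'unknown error' at the first CUDA tensor creation usually points at the driver / CUDA runtime / PyTorch-build combination, or a GPU left in a bad state, rather than at this repository. A hedged diagnostic, independent of the repo, that reproduces the exact call from renderer.py line 41:

import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
x = torch.cuda.FloatTensor([[0., 0., 0., 0., 0.]])   # the same call that fails above
print(x.device)

If this minimal script fails in the same way, the fix lies in the environment (driver version, CUDA runtime, or simply resetting or rebooting the machine), not in the demo code.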

NameError: name 'ModelOutput' is not defined

It seems like there is something wrong with smplx.

Is this caused by a version difference?

e_folder=demo_images/ --output_folder=results/ --ckpt data/checkpoint.pt
WARNING: You are using a SMPL model, with only 10 shape coefficients.
WARNING: You are using a SMPL model, with only 10 shape coefficients.
unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer2.0.bn1.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer3.4.bn3.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.0.bn3.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer3.1.bn1.num_batches_tracked, bn1.num_batches_tracked

2021-02-15 16:49:01,865 - INFO - load checkpoint from data/checkpoint.pt
2021-02-15 16:49:02,106 - WARNING - missing keys in source state_dict: smpl_head.smpl.parents, smpl_head.smpl.faces_tensor, smpl_head.smpl.J_regressor_extra, smpl_head.loss.smpl.faces_tensor, smpl_head.loss.smpl.v_template, smpl_head.loss.smpl.lbs_weights, smpl_head.loss.smpl.J_regressor, smpl_head.loss.smpl.J_regressor_extra, smpl_head.loss.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.smpl.shapedirs, smpl_head.smpl.J_regressor, smpl_head.loss.smpl.parents, smpl_head.smpl.vertex_joint_selector.extra_joints_idxs, smpl_head.smpl.v_template, smpl_head.loss.smpl.shapedirs, smpl_head.smpl.posedirs, smpl_head.smpl.lbs_weights, smpl_head.loss.smpl.posedirs

2021-02-15 16:49:02,121 - INFO - resumed epoch 27, iter 90901
WARNING: You are using a SMPL model, with only 10 shape coefficients.
Traceback (most recent call last):
  File "tools/demo.py", line 190, in <module>
    main()
  File "tools/demo.py", line 180, in main
    bbox_results, pred_results = model(**data_batch, return_loss=False)
  File "/media/frank/MyPassport/anaconda3/envs/multiperson/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/frank/MyPassport/anaconda3/envs/multiperson/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/media/frank/MyPassport/anaconda3/envs/multiperson/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/detectors/base.py", line 88, in forward
    return self.forward_test(img, img_meta, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/detectors/smpl_rcnn.py", line 419, in forward_test
    return self.simple_test(imgs, img_metas, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/detectors/smpl_rcnn.py", line 374, in simple_test
    smpl_results = self.simple_test_smpl(x, img_meta, det_bboxes, img.shape, rescale=rescale)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/detectors/test_mixins.py", line 189, in simple_test_smpl
    smpl_pred = self.smpl_head(smpl_feats)
  File "/media/frank/MyPassport/anaconda3/envs/multiperson/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/smpl_heads/smpl_head.py", line 132, in forward
    global_orient=pred_rotmat[:, 0].unsqueeze(1), pose2rot=False)
  File "/media/frank/MyPassport/anaconda3/envs/multiperson/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/frank/MyPassport/Code/HPE/multiperson/mmdetection/mmdet/models/utils/smpl/smpl.py", line 64, in forward
    output = ModelOutput(vertices=smpl_output.vertices,
NameError: name 'ModelOutput' is not defined
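
As noted in the "Will the version of SMPL-X affect the program?" issue above, rolling smplx back to 0.1.15 makes the demo run; newer smplx releases removed the single ModelOutput container in favour of per-model output classes, which is why the name ends up undefined here. A hedged compatibility shim, assuming a newer smplx that exposes SMPLOutput in smplx.utils (check with dir(smplx.utils) first); it would have to be applied where mmdet/models/utils/smpl/smpl.py imports from smplx:

try:
    from smplx.body_models import ModelOutput              # location in some older smplx releases
except ImportError:
    try:
        from smplx.utils import ModelOutput                # alternative older location
    except ImportError:
        from smplx.utils import SMPLOutput as ModelOutput  # assumed replacement in newer releases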

ENVs:
I tried to install the packages strictly following the instructions.

Ubuntu 16.04
CUDA: 10.1


Package                 Version       Location                                                
----------------------- ------------- --------------------------------------------------------
addict                  2.2.1         
argon2-cffi             20.1.0        
asn1crypto              1.2.0         
async-generator         1.10          
attrs                   20.3.0        
backcall                0.1.0         
bleach                  3.3.0         
certifi                 2019.11.28    
cffi                    1.13.0        
chardet                 3.0.4         
chumpy                  0.69          
conda                   4.8.2         
conda-package-handling  1.6.0         
ConfigArgParse          1.3           
cryptography            2.8           
cycler                  0.10.0        
Cython                  0.29.15       
decorator               4.4.2         
defusedxml              0.6.0         
entrypoints             0.3           
filelock                3.0.12        
freetype-py             2.1.0.post1   
future                  0.18.2        
gdown                   3.12.2        
h5py                    2.10.0        
idna                    2.8           
imageio                 2.8.0         
importlib-metadata      3.4.0         
install                 1.3.4         
ipdb                    0.13.2        
ipykernel               5.4.3         
ipython                 7.13.0        
ipython-genutils        0.2.0         
ipywidgets              7.6.3         
jedi                    0.16.0        
Jinja2                  2.11.3        
jsonschema              3.2.0         
jupyter-client          6.1.11        
jupyter-core            4.7.1         
jupyterlab-pygments     0.1.2         
jupyterlab-widgets      1.0.0         
kiwisolver              1.1.0         
MarkupSafe              1.1.1         
matplotlib              3.2.0rc3      
mistune                 0.8.4         
mkl-fft                 1.0.15        
mkl-random              1.1.0         
mkl-service             2.3.0         
mmcv                    0.2.10        
mmdet                   0.6.0+unknown /media/frank/MyPassport/Code/HPE/multiperson/mmdetection
mpmath                  1.2.1         
nbclient                0.5.2         
nbconvert               6.0.7         
nbformat                5.1.2         
nest-asyncio            1.5.1         
networkx                2.4           
neural-renderer-pytorch 1.1.3         
notebook                6.2.0         
numpy                   1.18.1        
olefile                 0.46          
open3d-python           0.7.0.0       
opencv-python           4.2.0.32      
packaging               20.9          
pandas                  1.0.1         
pandocfilters           1.4.3         
parso                   0.6.2         
pexpect                 4.8.0         
pickleshare             0.7.5         
Pillow                  6.1.0         
pip                     19.3.1        
prometheus-client       0.9.0         
prompt-toolkit          3.0.4         
protobuf                3.11.3        
ptyprocess              0.6.0         
pycocotools             2.0.0         
pycosat                 0.6.3         
pycparser               2.19          
pyglet                  1.5.0         
Pygments                2.6.1         
PyMCubes                0.1.2         
PyOpenGL                3.1.0         
pyOpenSSL               19.0.0        
pyparsing               2.4.6         
pypng                   0.0.20        
pyrender                0.1.36        
pyrsistent              0.17.3        
PySocks                 1.7.1         
python-dateutil         2.8.1         
pytz                    2019.3        
PyWavelets              1.1.1         
PyYAML                  5.4.1         
pyzmq                   22.0.3        
requests                2.22.0        
ruamel-yaml             0.15.46       
scikit-image            0.16.2        
scipy                   1.4.1         
sdf-pytorch             0.0.1         
seaborn                 0.10.0        
Send2Trash              1.5.0         
setuptools              41.4.0        
Shapely                 1.7.1         
six                     1.12.0        
smpl                    0.0.66        
smplx                   0.1.21        
sympy                   1.7.1         
tensorboardX            2.0           
terminado               0.9.2         
terminaltables          3.1.0         
testpath                0.4.4         
torch                   1.1.0         
torchgeometry           0.1.2         
torchvision             0.3.0         
tornado                 6.1           
tqdm                    4.36.1        
traitlets               4.3.3         
trimesh                 3.5.25        
typing-extensions       3.7.4.3       
uncertainties           3.1.5         
urllib3                 1.24.2        
wcwidth                 0.1.9         
webencodings            0.5.1         
wheel                   0.33.6        
widgetsnbextension      3.5.1         
zipp                    3.4.0         

No module named 'mmdetection.mmdet.models.utils.camera'

Hi! Awesome work!

This is an error with the evaluation code.

Traceback (most recent call last):
  File "tools/full_eval.py", line 35, in <module>
    from mmdetection.mmdet.core.utils.eval_utils import H36MEvalHandler, EvalHandler, PanopticEvalHandler,
  File "/home/seonghyun/CRMH_venv/multiperson-master/mmdetection/mmdet/core/utils/eval_utils.py", line 7, in <module>
    from mmdetection.mmdet.models.utils.camera import PerspectiveCamera
ModuleNotFoundError: No module named 'mmdetection.mmdet.models.utils.camera'

and

Traceback (most recent call last):
  File "tools/full_eval.py", line 35, in <module>
    from mmdetection.mmdet.core.utils.eval_utils import H36MEvalHandler, EvalHandler, PanopticEvalHandler,
  File "/home/seonghyun/CRMH_venv/multiperson-master/mmdetection/mmdet/core/utils/eval_utils.py", line 5, in <module>
    from mmdetection.mmdet.models.utils.pose_utils import reconstruction_error, vectorize_distance
ImportError: cannot import name 'vectorize_distance' from 'mmdetection.mmdet.models.utils.pose_utils' (/home/seonghyun/CRMH_venv/multiperson-master/mmdetection/mmdet/models/utils/pose_utils.py)

Where can I find these modules?
Has this part of the code not been released yet?
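
If you only need to unblock full_eval.py while this symbol is missing from your checkout, below is a hypothetical stand-in for vectorize_distance. It assumes the helper simply computes a pairwise Euclidean distance matrix between flattened predicted and ground-truth poses, used to match detected people to ground-truth people; the name comes from the traceback, but the semantics here are an assumption, not the released implementation.

import numpy as np

def vectorize_distance(a, b):
    """Hypothetical stand-in (semantics assumed, not the released code).

    a: (N, ...) array of predicted joints, b: (M, ...) array of ground-truth
    joints. Returns an (N, M) matrix whose (i, j) entry is ||a[i] - b[j]||.
    """
    a = np.asarray(a, dtype=np.float64).reshape(len(a), -1)
    b = np.asarray(b, dtype=np.float64).reshape(len(b), -1)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, clipped to guard against
    # small negative values caused by floating-point round-off
    sq = (a ** 2).sum(1)[:, None] + (b ** 2).sum(1)[None, :] - 2.0 * a @ b.T
    return np.sqrt(np.clip(sq, 0.0, None))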

Packages used during implementation

Hi,
thank you for your work!
We installed your program in the past, but we have now found that the pinned CUDA and cuDNN builds are no longer available. For example, we get the following error when creating the environment:

ResolvePackageNotFound:

  • torchvision==0.3.0=py37_cu10.0.130_1
  • pytorch==1.1.0=py3.7_cuda10.0.130_cudnn7.5.1_0

Do you have any solution for this? Do you have a newer version of the environment file with more recent CUDA versions?
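
One commonly suggested workaround, if the exact CUDA 10.0 builds pinned in environment.yml have simply been removed from the conda channels, is to drop the build strings (keeping only pytorch==1.1.0 and torchvision==0.3.0) and let conda resolve an available build, or to install those two packages afterwards from pip wheels that match your local CUDA. This is only a suggestion, not an officially supported environment, and the compiled extensions (neural_renderer, sdf) may still require a matching CUDA toolkit to build.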

Dataset

How to prepare the dataset?

Evaluation

Hi, when I run python3 tools/full_eval.py configs/smpl/tune.py haggling --ckpt ./work_dirs/tune/latest.pth to evaluate on Panoptic, something goes wrong:
2022-03-05 16:17:40,458 - WARNING - unexpected key in source state_dict: fc.weight, fc.bias

missing keys in source state_dict: layer3.4.bn3.num_batches_tracked, layer3.5.bn3.num_batches_tracked, layer4.0.downsample.1.num_batches_tracked, layer1.0.downsample.1.num_batches_tracked, layer1.2.bn1.num_batches_tracked, layer2.2.bn2.num_batches_tracked, layer1.1.bn2.num_batches_tracked, layer2.3.bn1.num_batches_tracked, layer4.2.bn2.num_batches_tracked, layer3.5.bn1.num_batches_tracked, layer1.2.bn2.num_batches_tracked, layer3.5.bn2.num_batches_tracked, layer3.0.downsample.1.num_batches_tracked, layer3.2.bn2.num_batches_tracked, layer1.0.bn2.num_batches_tracked, layer2.1.bn1.num_batches_tracked, layer3.3.bn1.num_batches_tracked, layer4.2.bn3.num_batches_tracked, layer3.1.bn3.num_batches_tracked, layer3.3.bn3.num_batches_tracked, layer4.1.bn2.num_batches_tracked, layer2.3.bn3.num_batches_tracked, layer3.2.bn1.num_batches_tracked, layer4.2.bn1.num_batches_tracked, layer2.2.bn1.num_batches_tracked, layer3.0.bn2.num_batches_tracked, layer4.0.bn3.num_batches_tracked, layer2.2.bn3.num_batches_tracked, layer2.0.bn1.num_batches_tracked, layer2.3.bn2.num_batches_tracked, layer1.0.bn1.num_batches_tracked, layer2.1.bn2.num_batches_tracked, layer2.0.downsample.1.num_batches_tracked, layer3.3.bn2.num_batches_tracked, layer3.1.bn2.num_batches_tracked, layer3.0.bn1.num_batches_tracked, layer3.4.bn1.num_batches_tracked, layer4.1.bn3.num_batches_tracked, layer4.0.bn1.num_batches_tracked, layer2.0.bn3.num_batches_tracked, layer3.1.bn1.num_batches_tracked, layer1.1.bn3.num_batches_tracked, layer3.2.bn3.num_batches_tracked, layer1.0.bn3.num_batches_tracked, layer3.4.bn2.num_batches_tracked, layer2.0.bn2.num_batches_tracked, layer2.1.bn3.num_batches_tracked, layer4.0.bn2.num_batches_tracked, layer1.1.bn1.num_batches_tracked, layer4.1.bn1.num_batches_tracked, layer1.2.bn3.num_batches_tracked, layer3.0.bn3.num_batches_tracked, bn1.num_batches_tracked

Traceback (most recent call last):
  File "tools/full_eval.py", line 297, in <module>
    main()
  File "tools/full_eval.py", line 222, in main
    train_dataset = get_dataset(cfg.datasets[0].train)
  File "/home/cuihuili/coherent_reconstruction/multiperson-master/mmdetection/mmdet/datasets/utils.py", line 111, in get_dataset
    dset = obj_from_dict(data_info, datasets)
  File "/home/cuihuili/miniconda3/envs/multiperson1/lib/python3.7/site-packages/mmcv-0.2.10-py3.7-linux-x86_64.egg/mmcv/runner/utils.py", line 81, in obj_from_dict
    return obj_type(**args)
  File "/home/cuihuili/coherent_reconstruction/multiperson-master/mmdetection/mmdet/datasets/h36m.py", line 107, in __init__
    self.img_infos = self.load_annotations(ann_file)
  File "/home/cuihuili/coherent_reconstruction/multiperson-master/mmdetection/mmdet/datasets/h36m.py", line 239, in load_annotations
    with open(ann_file, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/h36m/extras/rcnn/h36m_train.pkl'
Q1: Why are there missing keys in the source state_dict?
Q2: When evaluating on Panoptic, is the H36M dataset also required?
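
For what it is worth: the missing *.num_batches_tracked entries are BatchNorm buffers that were only introduced in PyTorch 0.4.1, so loading a checkpoint saved without them merely triggers this warning and is harmless; the unexpected fc.weight and fc.bias keys are presumably the ImageNet classification head of the backbone checkpoint, which this model does not use. Regarding Q2, the traceback shows that tools/full_eval.py also constructs the training dataset from the config (train_dataset = get_dataset(cfg.datasets[0].train)), so the H36M annotation file referenced by configs/smpl/tune.py has to exist, or the config has to be edited, even when the evaluation target is Panoptic.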
