
impersonator's People

Contributors

ak9250 · dependabot[bot] · dnahurnyi · piaozhx · stevenliuwen · t04glovern

impersonator's Issues

argparse.ArgumentError: argument --load_path: conflicting option string: --load_path

Hi! Thanks for your awesome work!
When I ran demo_imitation.py, I hit this problem.
I tried commenting out 'load_path' in test_options.py and in base_options.py, but it didn't work.
Could you help me figure it out? Thank you!

Traceback (most recent call last):
File "demo_view.py", line 157, in
opt = TestOptions().parse()
File "/workspace/Impersonator/options/base_options.py", line 75, in parse
self.initialize()
File "/workspace/Impersonator/options/test_options.py", line 16, in initialize
help='pretrained model path')
File "/opt/conda/lib/python3.6/argparse.py", line 1352, in add_argument
return self._add_action(action)
File "/opt/conda/lib/python3.6/argparse.py", line 1715, in _add_action
self._optionals._add_action(action)
File "/opt/conda/lib/python3.6/argparse.py", line 1556, in _add_action
action = super(_ArgumentGroup, self)._add_action(action)
File "/opt/conda/lib/python3.6/argparse.py", line 1366, in _add_action
self._check_conflict(action)
File "/opt/conda/lib/python3.6/argparse.py", line 1505, in _check_conflict
conflict_handler(action, confl_optionals)
File "/opt/conda/lib/python3.6/argparse.py", line 1514, in _handle_conflict_error
raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --load_path: conflicting option string: --load_path
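The conflict arises when the same option is registered twice, e.g. once in base_options.py and again in test_options.py. A minimal sketch of the failure mode and one defensive fix follows; the duplicate-check guard is an assumption for illustration, not the repository's code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--load_path', type=str, default='None',
                    help='pretrained model path')

# Registering the same option a second time reproduces the crash:
#   argparse.ArgumentError: argument --load_path: conflicting option string: --load_path

# Defensive fix: only add the option if it is not registered yet.
# (_actions is technically a private attribute, but stable in CPython.)
if not any('--load_path' in a.option_strings for a in parser._actions):
    parser.add_argument('--load_path', type=str, default='None',
                        help='pretrained model path')
```

Alternatively, constructing the parser with `conflict_handler='resolve'` lets a later `add_argument` call override the earlier registration instead of raising.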

Can you run this on an 8 GB NVIDIA 1070?

Has anyone successfully used a 1070 with 8 GB? I can't run the demo code while also using the 1070 for display, but I wonder whether it would work if I dedicated the card to CUDA.

RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 7.93 GiB total capacity; 5.94 GiB already allocated; 41.25 MiB free; 252.07 MiB cached)
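For reference, the demo accepts a smaller batch (the options dumps elsewhere on this page show a --batch_size flag), and it helps to confirm how much of the 8 GB is actually free before starting. A quick check, assuming a reasonably recent PyTorch for `mem_get_info`:

```python
import torch

if torch.cuda.is_available():
    # mem_get_info needs a recent PyTorch; on older versions inspect
    # torch.cuda.memory_allocated() / torch.cuda.memory_reserved() instead.
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 2**30:.2f} GiB of {total / 2**30:.2f} GiB")
```

Driving a display from the same 1070 can easily consume the few hundred MiB that the log above shows missing, so dedicating the card to CUDA is a plausible fix.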

Texture image not found

Hi, I got this error while trying to run python demo_view.py --gpu_ids 1

ImportError: dlopen(/Users/mal/anaconda3/envs/my_env/lib/python3.7/site-packages/neural_renderer-1.1.3-py3.7-macosx-10.9-x86_64.egg/neural_renderer/cuda/load_textures.cpython-37m-darwin.so, 2): Library not loaded: @rpath/libcudart.10.1.dylib
  Referenced from: /Users/mal/anaconda3/envs/my_env/lib/python3.7/site-packages/neural_renderer-1.1.3-py3.7-macosx-10.9-x86_64.egg/neural_renderer/cuda/load_textures.cpython-37m-darwin.so
  Reason: image not found
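The .so above was linked against libcudart.10.1.dylib, i.e. the CUDA 10.1 runtime, which a Mac without an NVIDIA GPU cannot provide ("image not found" here refers to the dylib, not a texture). A small pre-flight check makes the situation explicit (a sketch, not the project's code):

```python
import torch

# On a CUDA-less Mac both of these indicate there is no usable CUDA runtime,
# so neural_renderer's compiled CUDA extensions cannot be loaded at all.
print(torch.cuda.is_available())  # False
print(torch.version.cuda)         # None on a CPU-only torch build
```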

Eval protocol?

Dear authors, thank you for uploading the code for your paper; it's very useful. However, one missing piece is the evaluation: I could not find an evaluation script in the repository. Can you describe the evaluation protocol, i.e. exactly which set of (source image, target pose) pairs is used to compute the metrics, or, if it's based on random sampling, how that sampling is performed? Thank you!

Could I transfer any video with a common pre-trained model?

Hi, I have tested the repo, and I can transfer the demo images with the pre-trained model, which was trained on Mixamo.
But it seems that the pre-trained model can only transfer images to Mixamo actions.
How could I modify the network to get a general model that accepts any input and any base action, without retraining a separate model?

demo_imitator.py runs without problems, but run_imitator.py fails; when I remove the "--has_detector" parameter, it executes fine.

Hi, thank you for your awesome work.

By the way, I tried to transfer my own images with other target images.
Basically it works, but the face doesn't look alike.

So I fine-tuned the model in the way you suggested, to make the result look like the iPER examples.

Execution with demo_imitator.py is no problem. run_imitator.py reports an error, but I can execute it without the "--has_detector" parameter.
Is this because of a problem with extracting faces? In addition, I would like to ask whether the model is fine-tuned after this run completes.
The execution error is below:
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ --src_path ./assets/src_imgs/imper_A_Pose/10006.png --tgt_path ./assets/samples/refs/iPER/024_8_3 --bg_ks 13 --ft_ks 3 --has_detector --post_tune --save_res
------------ Options -------------
T_pose: False
batch_size: 4
bg_ks: 13
bg_model: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
bg_replace: False
body_seg: False
cam_strategy: smooth
checkpoints_dir: ./outputs/checkpoints/
cond_nc: 3
data_dir: /p300/datasets/iPER
dataset_mode: iPER
debug: False
do_saturate_mask: False
face_model: assets/pretrains/sphere20a_20171020.pth
front_warp: False
ft_ks: 3
gen_name: impersonator
gpu_ids: 0
has_detector: True
hmr_model: assets/pretrains/hmr_tf2pt.pth
image_size: 256
images_folder: images_HD
ip:
is_train: False
load_epoch: 0
load_path: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
map_name: uv_seg
model: imitator
n_threads_test: 2
name: running
norm_type: instance
only_vis: False
output_dir: ./outputs/results/
part_info: assets/pretrains/smpl_part_info.json
port: 31100
post_tune: True
pri_path: ./assets/samples/A_priors/imgs
repeat_num: 6
save_res: True
serial_batches: False
smpl_model: assets/pretrains/smpl_model.pkl
smpls_folder: smpls
src_path: ./assets/src_imgs/imper_A_Pose/10006.png
swap_part: body
test_ids_file: val.txt
tex_size: 3
tgt_path: ./assets/samples/refs/iPER/024_8_3
time_step: 10
train_ids_file: train.txt
uv_mapping: assets/pretrains/mapper.txt
view_params: R=0,90,0/t=0,0,0
-------------- End ----------------
./outputs/checkpoints/running
Network impersonator was created
loaded net: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
Network deepfillv2 was created
loaded net: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth

		Personalization: meta imitation...

Traceback (most recent call last):
File "run_imitator.py", line 225, in
adaptive_personalize(test_opt, imitator, visualizer)
File "run_imitator.py", line 203, in adaptive_personalize
imitator.personalize(opt.src_path, visualizer=None)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/media/ubuntu/新加卷/Prog/impersonator-master/models/imitator.py", line 117, in personalize
bbox, body_mask = self.detector.inference(img[0])
File "/media/ubuntu/新加卷/Prog/impersonator-master/utils/detectors.py", line 70, in inference
predictions = self.forward(img_list)[0]
File "/media/ubuntu/新加卷/Prog/impersonator-master/utils/detectors.py", line 40, in forward
predictions = self.model(images)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py", line 48, in forward
features = self.backbone(images.tensors)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torchvision/models/_utils.py", line 58, in forward
x = module(x)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
RuntimeError: CUDA NVRTC error: NVRTC_ERROR_BUILTIN_OPERATION_FAILURE
The above operation failed in interpreter, with the following stack trace:

Additional training for Motion Imitation

Hi, thank you for your awesome work.

Btw, I tried to transfer my own with other target images.
Basically it works, but my head doesn't look like me: my hair style and face aren't reflected.
So I assume this happens because of the pre-trained model.
I looked at the datasets and found that most of the people have short black hair.
What do you think? And if so, how do I train on more data?

Also, I tried it with the fashion model you provided, and it works well!
I'm wondering what's going on.

Thanks in advance.

Train for custom dataset

Hi piaozhx,
Thanks for sharing the code. I cloned it and was able to run the examples. The explanations, even for installation, are so clear that the process was easy to follow and understand.
I want to train on my own custom videos. Can you describe how to proceed?
And what is the process for high-quality images?

Thanks in Advance.
Regards,
SandhyaLaxmi

There are some bugs; how should I modify the code? Thanks!

C:\Users\1152>python D:\Pytorch\impersonator-master\demo_imitator.py --gpu_ids 1
Traceback (most recent call last):
File "D:\Pytorch\impersonator-master\demo_imitator.py", line 6, in
from models.imitator import Imitator
File "D:\Pytorch\impersonator-master\models\imitator.py", line 8, in
from utils.nmr import SMPLRenderer
File "D:\Pytorch\impersonator-master\utils\nmr.py", line 6, in
import neural_renderer as nr
ModuleNotFoundError: No module named 'neural_renderer'
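A quick sanity check that the customized neural_renderer shipped with this repository was actually built and installed into the active interpreter (a minimal sketch, not the project's code):

```python
import importlib.util

import torch

print("torch:", torch.__version__, "cuda:", torch.version.cuda)

spec = importlib.util.find_spec("neural_renderer")
if spec is None:
    # Matches the ModuleNotFoundError above: the package was never installed
    # into this interpreter's site-packages.
    print("neural_renderer is not installed in this environment")
else:
    import neural_renderer as nr
    print("neural_renderer found at:", spec.origin)
```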

no results

demo_imitator.py runs successfully, and some folders are created (like dance, base, acrobat), but all of them are empty.
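The frames are generated and then stitched into videos as a final step; if that step fails silently, the folders are created but stay empty. One common culprit, an assumption here since the exact video backend isn't shown in this report, is ffmpeg missing from PATH:

```python
import shutil

# shutil.which returns the resolved executable path, or None if not found.
print("ffmpeg:", shutil.which("ffmpeg") or "NOT FOUND on PATH")
```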

ImportError: load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _Z16THPVariable_WrapN5torch8autograd8VariableE

When I run demo_imitator.py I get the error below; my Python version is 3.6.4. Thank you for your help.

(python36) lcy@lcy:~/impersonator-master$ python demo_imitator.py --gpu_ids 1
Traceback (most recent call last):
File "demo_imitator.py", line 6, in <module>
from models.imitator import Imitator
File "/home/lcy/impersonator-master/models/imitator.py", line 8, in <module>
from utils.nmr import SMPLRenderer
File "/home/lcy/impersonator-master/utils/nmr.py", line 6, in <module>
import neural_renderer as nr
File "/home/lcy/anaconda3/envs/python36/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/__init__.py", line 3, in <module>
from .load_obj import load_obj
File "/home/lcy/anaconda3/envs/python36/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/load_obj.py", line 9, in <module>
import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /home/lcy/anaconda3/envs/python36/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _Z16THPVariable_WrapN5torch8autograd8VariableE

Pre-processing for Image

Hello, is there pre-processing before inference? Your sources (the models in the examples and the images in your own datasets) seem to be processed. Thanks in advance.

Always Segmentation Fault

Hi, I have followed the installation instructions. The environment details are as follows: Ubuntu 16.04, torch 1.2.0, torchvision 0.4.0, Python 3.6, CUDA 10.0. But after installing the neural_renderer part, I ran the demo file and got a segmentation fault. I also tried installing the stock 'neural_renderer_pytorch' package instead and ran the demo again; it indicated that some methods are missing. It seems that you have implemented some methods which are not included in the original neural_renderer. Could you do me a favor?

Outputs results folder all empty

Hi,
I'm trying to run demo_imitator, and everything seems to run fine, except that all the folders under outputs/results/demos/imitators are empty. Any idea what I can do to fix this?
Much appreciated!
Here's the attached output from terminal:

------------ Options -------------
T_pose: False
batch_size: 1
bg_ks: 13
bg_model: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
bg_replace: False
body_seg: False
cam_strategy: smooth
checkpoints_dir: ./outputs/checkpoints/
cond_nc: 3
data_dir: /p300/datasets/iPER
dataset_mode: iPER
debug: False
do_saturate_mask: False
face_model: assets/pretrains/sphere20a_20171020.pth
front_warp: False
ft_ks: 3
gen_name: impersonator
gpu_ids: 0
has_detector: False
hmr_model: assets/pretrains/hmr_tf2pt.pth
image_size: 256
images_folder: images_HD
ip: 
is_train: False
load_epoch: 0
load_path: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
map_name: uv_seg
model: impersonator
n_threads_test: 2
name: running
norm_type: instance
only_vis: False
output_dir: ./outputs/results/
part_info: assets/pretrains/smpl_part_info.json
port: 31100
post_tune: False
pri_path: ./assets/samples/A_priors/imgs
repeat_num: 6
save_res: False
serial_batches: False
smpl_model: assets/pretrains/smpl_model.pkl
smpls_folder: smpls
src_path: 
swap_part: body
test_ids_file: val.txt
tex_size: 3
tgt_path: 
time_step: 10
train_ids_file: train.txt
uv_mapping: assets/pretrains/mapper.txt
view_params: R=0,90,0/t=0,0,0
-------------- End ----------------
./outputs/checkpoints/running
  0%| 0/3 [00:00<?, ?it/s]
Network impersonator was created
Loading net: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
Network deepfillv2 was created
Loading net: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth

			Personalization: meta imitation...

Personalization: meta cycle finetune... 100%| 25/25 [00:02<00:00, 9.58it/s]
load face model from assets/pretrains/sphere20a_20171020.pth
./outputs/results/demos/imitators/mixamo_preds
100%| 5/5 [00:49<00:00, 9.85s/it]

(the same personalization and prediction log repeats for the second and third demo subjects; interleaved tqdm progress bars trimmed)

100%| 3/3 [05:06<00:00, 103.17s/it]
Completed! All demo videos are save in ./outputs/results/demos/imitators

About pretrained data

Hi, I intend to make a training on my own datasets. But I am confused about some 3d data. I want to ask how the "mapper.txt" in the downloaded pretrained data is obtained. In addittion, could you give me the description about 3d data of this nice model? Thank you!

CUDA version error and generate nothing

I used "python demo_imitator.py --gpu_ids 0 --batch_size 1" to run the demo, but it warned "Error in forward_face_index_map_1: CUDA driver version is insufficient for CUDA runtime version, Error in forward_face_index_map_2: CUDA driver version is insufficient for CUDA runtime version" , finally it runed successfuly, "Completed! All demo videos are save in ./outputs/results/demos/imitators" , but There were only a few folders under the folder without the demo videos.
my torch version is 1.0.0, CUDA version is 10.0, Is it the wrong version, could you help me ?

`Error in forward_face_index_map_1: CUDA driver version is insufficient for CUDA runtime version
Error in forward_face_index_map_2: CUDA driver version is insufficient for CUDA runtime version
(the same pair of errors repeats for every frame)
100%| 3/3 [07:04<00:00, 141.48s/it]
Completed! All demo videos are save in ./outputs/results/demos/imitators`

invalid device function

When I run demo_imitator.py, it always fails with an "invalid device function" error, like this:

Error in forward_face_index_map_2: invalid device function
Error in forward_face_index_map_1: invalid device function
Error in forward_face_index_map_2: invalid device function
Error in forward_face_index_map_1: invalid device function
Error in forward_face_index_map_2: invalid device function
Error in forward_face_index_map_1: invalid device function

and the results of the other two demos are the same.
My GPUs are 2080 Ti, and the environment is Python 3.7, torch 1.3.1, CUDA 10.0. Could the versions of my packages cause the error?
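"invalid device function" usually means the CUDA kernels were compiled for a compute capability that doesn't include the running GPU. An RTX 2080 Ti is sm_75; CUDA 10.0 can target it, but the neural_renderer extension must actually be built for it. A sketch of pinning the architecture before rebuilding (TORCH_CUDA_ARCH_LIST is read by torch.utils.cpp_extension during extension builds):

```python
import os

import torch

# Confirm the card's compute capability: a 2080 Ti reports (7, 5).
print(torch.cuda.get_device_capability(0))

# Pin the target architecture, then rebuild the extension
# (e.g. rerun `python setup.py install` in neural_renderer) with this set.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5"
```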

AttributeError: Can't pickle local object 'make_dataset.<locals>.Config'

error when running demo_view.py

Personalization: meta cycle finetune...
load face model from assets/pretrains/sphere20a_20171020.pth
0%| 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
File "demo_view.py", line 179, in <module>
generate_orig_pose_novel_view_result(opt, src_path)
File "demo_view.py", line 117, in generate_orig_pose_novel_view_result
adaptive_personalize(opt, viewer, visualizer)
File "E:\SourceCodes\tensorflow\Gans\impersonator-master\run_imitator.py", line 209, in adaptive_personalize
imitator.post_personalize(opt.output_dir, loader, visualizer=None, verbose=False)
File "E:\SourceCodes\tensorflow\Gans\impersonator-master\models\viewer.py", line 395, in post_personalize
for i, sample in enumerate(data_loader):
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
w.start()
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "D:\DevelopTools\Anaconda3\envs\tensorflow_gpu\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'make_dataset.<locals>.Config'
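On Windows, DataLoader workers are started with the "spawn" method, which pickles the dataset object, and a class defined inside a function (here `make_dataset.<locals>.Config`) cannot be pickled. The least invasive workaround is to load in the main process with zero workers, sketched here on a stand-in dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(4, 3))  # stand-in for the demo's dataset

# num_workers=0 keeps loading in the main process, so nothing is pickled
# and the spawn-time AttributeError above never triggers.
loader = DataLoader(dataset, batch_size=1, num_workers=0, shuffle=False)

for sample in loader:
    pass
```

The alternative is to hoist the locally defined Config class to module scope so the pickler can import it by name.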

Training on other dataset

I am a bit confused about what we should prepare for training on a custom dataset.

Could you provide more information about what, specifically, we should prepare for custom training?

Thanks!

Can't install neural_renderer

error info:
D:/dev/app/Anaconda3/envs/pytorch/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1490): error C3203: 'templated_iterator': unspecialized class template can't be used as a template argument for template parameter '_Ty1', expected a real type
error: command 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc.exe' failed with exit status 2

How do I run train.py?

I want to train on my own dataset, but I couldn't run the train.py file. I don't know how to proceed. Can you help me, please?

About details: why do you use one discriminator (4 layers) and three labels (-1, 0, 1)?

Hi, author!
I have read your paper and your code, and I have some questions that confuse me. pix2pixHD is popular these days, so why do you use a single-scale discriminator and three labels (-1, 0, 1) instead of a 2-scale discriminator? And do three labels (-1, 0, 1) have advantages over the two-label style (e.g. pix2pixHD uses 0 and 1 for fake and real)? Thanks for your reply!
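This is not the repository's code, but the three values match the label scheme analyzed in the LSGAN paper (a = -1 for fake, b = 1 for real, c = 0 as the generator's target), under which the least-squares objective minimizes a Pearson chi-squared divergence. A minimal sketch of that reading:

```python
import torch
import torch.nn.functional as F

def d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Discriminator: push real scores toward 1 and fake scores toward -1.
    return (F.mse_loss(d_real, torch.ones_like(d_real)) +
            F.mse_loss(d_fake, -torch.ones_like(d_fake)))

def g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Generator: push fake scores toward the boundary label 0.
    return F.mse_loss(d_fake, torch.zeros_like(d_fake))
```

Whether this is the authors' exact motivation only they can confirm; pix2pixHD's 0/1 scheme corresponds to the more common a = 0, b = c = 1 variant.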

pickle.load error on Windows 10

Windows 10 64-bit, Python 3.7.3.

What I run:

python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ --src_path ./assets/src_imgs/internet/men1_256.jpg --tgt_path ./assets/samples/ref_imgs/024_8_2 --has_detector --post_tune --front_warp --save_res

Traceback (most recent call last):
File "run_imitator.py", line 225, in <module>
adaptive_personalize(test_opt, imitator, visualizer)
File "run_imitator.py", line 209, in adaptive_personalize
imitator.post_personalize(opt.output_dir, loader, visualizer=None, verbose=False)
File "J:\impersonator\models\imitator.py", line 423, in post_personalize
for i, sample in enumerate(data_loader):
File "C:\Users\goooice\Anaconda3\envs\ml\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
w.start()
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '__main__.MetaCycleDataSet'>: attribute lookup MetaCycleDataSet on __main__ failed

Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\goooice\Anaconda3\envs\ml\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
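The first traceback is the real failure; the child's EOFError is just fallout. "spawn" re-imports __main__ in each worker and then looks the class up by name, so MetaCycleDataSet must live at the top level of an importable module. A portable pattern (a sketch; the module name is hypothetical):

```python
# meta_cycle_dataset.py -- importable module, so spawn can find the class
from torch.utils.data import Dataset

class MetaCycleDataSet(Dataset):
    def __init__(self, items):
        self.items = items

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]
```

```python
# run_imitator.py -- import the class instead of defining it in __main__,
# and guard the entry point so the module is safe to re-import on Windows
from meta_cycle_dataset import MetaCycleDataSet

def main():
    ds = MetaCycleDataSet(items=[0, 1, 2])
    print(len(ds))

if __name__ == '__main__':
    main()
```

Setting num_workers=0 on the DataLoader, as in the demo_view.py issue above, also sidesteps the pickling entirely.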

Getting ImportError: No module named cuda.load_textures

Hi @StevenLiuWen @piaozhx @ak9250
I followed the exact steps mentioned in README.md
On running <demo_swap.py> it runs into "ImportError: No module named cuda.load_textures"

akash@my-open-source-machine:~/impersonator$ python demo_swap.py 
Traceback (most recent call last):
  File "demo_swap.py", line 5, in <module>
    from models.swapper import Swapper
  File "/home/ubuntu/impersonator/models/swapper.py", line 7, in <module>
    from utils.nmr import SMPLRenderer
  File "/home/ubuntu/impersonator/utils/nmr.py", line 6, in <module>
    import neural_renderer as nr
  File "build/bdist.linux-x86_64/egg/neural_renderer/__init__.py", line 3, in <module>
  File "build/bdist.linux-x86_64/egg/neural_renderer/load_obj.py", line 9, in <module>
ImportError: No module named cuda.load_textures

I have referred to daniilidis-group/neural_renderer#17, but in my case there is only one neural_renderer.

Can you please guide me on the issue.
Thanks

Training dataset with corrupted ZIP archive ("smpls.zip")

Hello, the training data provided at your OneDrive link includes a file named "smpls.zip" that appears to be corrupted. It doesn't unzip correctly on either Windows or Linux, even using tools such as 7-Zip and WinRAR; they report bad zipfile offsets at several points (from file #146 to #735). I have already re-downloaded it 4 times and kept an eye on the progress bar to make sure there were no interruptions.
Could you please check if the provided file is okay? Thanks!
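Before re-downloading yet again, the archive can be checked locally; `zipfile.testzip()` CRC-checks every member and returns the first corrupt one (a sketch, run next to the downloaded file):

```python
import zipfile

# Raises zipfile.BadZipFile if the central directory itself is damaged;
# otherwise testzip() returns the first bad member name, or None.
with zipfile.ZipFile("smpls.zip") as zf:
    bad = zf.testzip()
    print("first corrupt member:", bad if bad else "none - archive is intact")
```

If this consistently flags members from #146 onward across independent downloads, the hosted file itself is damaged rather than the transfer.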

TypeError on Windows demo_imitator.py

Hi,

I had tackled all the issues before running the actual demo on Windows, and then I got this error:

File "base_options.py", line 67, in set_zero_thread_for_Win
if 'n_threads_test' in self._opt.class:
TypeError: argument of type 'type' is not iterable

Anyone resolved this?
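The membership test is being applied to the options object's class, and iterating over a class raises exactly this TypeError. Testing the parsed options themselves works; a standalone sketch, assuming `_opt` is an `argparse.Namespace`:

```python
import argparse

opt = argparse.Namespace(n_threads_test=2)

# Broken: 'n_threads_test' in opt.__class__  ->
#   TypeError: argument of type 'type' is not iterable

# Working: test the instance, not the class.
if hasattr(opt, 'n_threads_test'):        # or: 'n_threads_test' in vars(opt)
    opt.n_threads_test = 0                # Windows: avoid DataLoader workers
print(opt)
```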

About the evaluation metric code

I saw that your paper evaluates the quality of generated images on cross-imitation using IS, and using a Fréchet distance computed on a pre-trained person re-id model, named FReID. Can you provide your evaluation code and the related models? I want to evaluate the results quantitatively. Thanks a lot!

Segmentation fault (core dumped)

When running the demo, it reports a segmentation fault (core dumped). It seems to be caused by a neural_renderer function. I have tested the example in neural_renderer and it reports a segmentation fault as well. Do you know how to fix this? What is the CUDA version of your environment? Thank you very much!

c++/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’

Environment:
Ubuntu 18.04, Anaconda, with the env created by the following command:

conda env create -n impersonator environment.yml

err log:

/usr/local/cuda/bin/nvcc -I/home/u/anaconda3/envs/impersonator/lib/python3.7/site-packages/torch/include -I/home/u/anaconda3/envs/impersonator/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/u/anaconda3/envs/impersonator/lib/python3.7/site-packages/torch/include/TH -I/home/u/anaconda3/envs/impersonator/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/u/anaconda3/envs/impersonator/include/python3.7m -c neural_renderer/cuda/load_textures_cuda_kernel.cu -o build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/home/u/anaconda3/envs/impersonator/gcc/include/c++/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/home/u/anaconda3/envs/impersonator/gcc/include/c++/tuple:613:152:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/u/anaconda3/envs/impersonator/lib/python3.7/site-packages/torch/include/ATen/core/TensorMethods.h:1566:176:   required from here
/home/u/anaconda3/envs/impersonator/gcc/include/c++/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;

ImportError: load_textures.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

When I run demo_imitator.py, I get:

Traceback (most recent call last):
File "demo_imitator.py", line 6, in <module>
from models.imitator import Imitator
File "/home/lbl/impersonator/models/imitator.py", line 8, in <module>
from utils.nmr import SMPLRenderer
File "/home/lbl/impersonator/utils/nmr.py", line 11, in <module>
import neural_renderer as nr
File "/home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/__init__.py", line 3, in <module>
from .load_obj import load_obj
File "/home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/load_obj.py", line 8, in <module>
import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /home/lbl/anaconda3/lib/python3.7/site-packages/neural_renderer/cuda/load_textures.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

No results

After running some examples, I cannot find the output in output_dir. I was running the runDetails.md examples.

info related with iPER dataset

Hi, thanks for your work.
Could you release the other files related to the iPER dataset, including the train/test lists and the three source images used in motion imitation? Thanks!
Best wishes.
