zxz267 / AvatarJLM
[ICCV 2023] Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling
Home Page: https://zxz267.github.io/AvatarJLM/
License: MIT License
Traceback (most recent call last):
  File "/home/yyh_file/AvatarJLM-main/vis.py", line 94, in <module>
    main(opt)
  File "/home/yyh_file/AvatarJLM-main/vis.py", line 75, in main
    avg_error = evaluate(opt, logger, model, test_loader, save_animation=1)
  File "/home/yyh_file/AvatarJLM-main/test.py", line 41, in evaluate
    vis.save_animation(body_pose=body_parms_gt['body'], savepath=save_video_path_gt, bm=model.bm, fps=60, resolution=(800, 800))
  File "/home/yyh_file/AvatarJLM-main/utils/utils_visualize.py", line 174, in save_animation
    mv = MeshViewer(width=imw, height=imh, use_offscreen=True)
  File "/home/yyh_file/AvatarJLM-main/body_visualizer/mesh/mesh_viewer.py", line 59, in __init__
    self.viewer = pyrender.OffscreenRenderer(*self.figsize)
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/offscreen.py", line 137, in _create
    egl_device = egl.get_device_by_index(device_id)
  File "/home/.conda/envs/AvatarJLM/lib/python3.9/site-packages/pyrender/platforms/egl.py", line 83, in get_device_by_index
    raise ValueError('Invalid device ID ({})'.format(device_id, len(devices)))
ValueError: Invalid device ID (0)
Thanks for your work, but sadly there seems to be a problem with the pose visualization code. Have you encountered this problem, and how can it be solved? Thank you for your reply!
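The `Invalid device ID (0)` error means pyrender's EGL backend could not find any EGL-capable GPU device in this environment. A common workaround (a hedged sketch, not the authors' fix) is to switch pyrender's offscreen backend via the `PYOPENGL_PLATFORM` environment variable before pyrender is imported:

```python
import os

# pyrender chooses its offscreen backend from PYOPENGL_PLATFORM at import
# time. Falling back to the software OSMesa backend (requires the osmesa
# system libraries to be installed) avoids EGL entirely; alternatively,
# keep "egl" and set EGL_DEVICE_ID to a GPU that actually exposes an EGL
# device. Set this BEFORE `import pyrender`.
os.environ["PYOPENGL_PLATFORM"] = "osmesa"
```

Running the visualization on a machine with a display (or under `xvfb-run`) is another common way around headless EGL issues.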
import torch
from torch.nn.parallel import DataParallel, DistributedDataParallel

def model_to_device(self, network):
    """Move the model to the target device. It also wraps models with
    DistributedDataParallel or DataParallel.

    Args:
        network (nn.Module)
    """
    network = network.to(self.device)
    if self.opt['dist']:
        find_unused_parameters = self.opt['find_unused_parameters']
        network = DistributedDataParallel(
            network,
            device_ids=[torch.cuda.current_device()],
            find_unused_parameters=find_unused_parameters)
    else:
        network = DataParallel(network)
    return network
Running this produces the following error:
Traceback (most recent call last):
  File "/home/HDD2/yanshiqi/testAJ/train.py", line 113, in <module>
    main(opt)
  File "/home/HDD2/yanshiqi/testAJ/train.py", line 78, in main
    model.optimize_parameters(current_step)
  File "/home/HDD2/yanshiqi/testAJ/models/model_jlm.py", line 179, in optimize_parameters
    self.netG_forward()
  File "/home/HDD2/yanshiqi/testAJ/models/model_jlm.py", line 172, in netG_forward
    self.predictions = self.netG(self.input_signal)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 186, in forward
    return self.gather(outputs, self.output_device)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 203, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 105, in gather
    res = gather_map(outputs)
  File "/home/HDD2/yanshiqi/.conda/envs/avatarjlm/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 96, in gather_map
return type(out)((k, gather_map([d[k] for d in outputs]))
TypeError: first argument must be callable or None
When I commented out the line `network = DataParallel(network)`, only GPU 0 was used.
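The `TypeError: first argument must be callable or None` from `gather_map` is exactly what happens when the model's `forward()` returns a `collections.defaultdict`: `DataParallel.gather` rebuilds mapping outputs with `type(out)(pairs)`, and `defaultdict`'s first positional argument must be the default factory, not key/value pairs. That the model returns a `defaultdict` is an assumption about this codebase, but the failure can be reproduced on CPU:

```python
from collections import defaultdict

# DataParallel.gather rebuilds mapping outputs via type(out)(pairs).
# For a defaultdict, the first positional argument must be a callable
# (the default_factory), so passing key/value pairs raises TypeError.
out = defaultdict(list)
out["pose"].append(1.0)

try:
    type(out)((k, v) for k, v in out.items())
except TypeError as e:
    msg = str(e)  # "first argument must be callable or None"

# Sketch of a fix: convert to a plain dict before returning from forward(),
# so gather can reconstruct the output with type(out)(pairs).
safe_out = dict(out)
rebuilt = type(safe_out)((k, v) for k, v in safe_out.items())
```

If this is the cause, returning `dict(predictions)` from the network's `forward()` should let `DataParallel` gather the outputs across GPUs.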
There are 4 GPUs on my server, but only GPU 0 has been used, and this error occurred:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 462.00 MiB. GPU 0 has a total capacty of 23.69 GiB of which 117.62 MiB is free. Process 461938 has 11.61 GiB memory in use. Process 467797 has 498.00 MiB memory in use. Including non-PyTorch memory, this process has 11.44 GiB memory in use. Of the allocated memory 10.65 GiB is allocated by PyTorch, and 498.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
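Since GPU 0 is already shared with other processes, one hedged workaround is to point the run at the idle GPUs via `CUDA_VISIBLE_DEVICES` before torch initializes CUDA. The device IDs below are an assumption; check `nvidia-smi` first to see which of the four GPUs are actually free:

```python
import os

# Must be set before the first CUDA call (ideally at the very top of
# train.py, before `import torch` touches the GPU). With this set,
# DataParallel enumerates only the listed GPUs, and logical device 0
# inside the process maps to physical GPU 1.
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"
```

The same can be done from the shell, e.g. `CUDA_VISIBLE_DEVICES=1,2,3 python train.py`.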
During inference, the information from the remaining 19 joints should not be used by the model, and the model file also says the inputs for those other joints are set to zero. Why, then, does zeroing out the remaining joints in data_amass.py lead to much worse results?
This problem occurs; how can I solve it? Thanks!