
humanrf's People

Contributors

isikmustafa, jrs-synth, martinruenz


humanrf's Issues

Novel pose synthesis

Thanks for the amazing work with the ActorsHQ dataset.

I would like to evaluate my method on novel pose synthesis and compare it against the leaderboard scores, but I couldn't find information about the evaluation protocol and data splits. Could you please share this information?

Thanks again!

dataset

Hi, thanks for releasing the code and data for this amazing work!

I would like to ask about the alpha mattes that correspond to the RGB images. How can I create an alpha matte like the ones in your dataset, and what tool are you using?

Data Missing

Hi Mustafa, many thanks for your awesome work! It's really amazing!

May I ask whether something is missing from the dataset? When I downloaded the ActorsHQ dataset, I encountered the error below:
[screenshot: download error, 2023-08-11 10:30:28]

Could you please tell me what the problem is?

Cmake fail for tool box

Hi, I am trying to use the alembic_extractor. I followed the installation guidelines but got the following error message. I was wondering if you have any hints on how to resolve this. Thanks!

renderer/CMakeFiles/mesh_renderer.dir/build.make:75: recipe for target 'renderer/CMakeFiles/mesh_renderer.dir/main.cpp.o' failed
make[2]: *** [renderer/CMakeFiles/mesh_renderer.dir/main.cpp.o] Error 1
CMakeFiles/Makefile2:143: recipe for target 'renderer/CMakeFiles/mesh_renderer.dir/all' failed
make[1]: *** [renderer/CMakeFiles/mesh_renderer.dir/all] Error 2
Makefile:135: recipe for target 'all' failed
make: *** [all] Error 2

Validation Error

[screenshot of the validation error]

PC environment: WSL2, CUDA 11.8, HumanRF installation, torch 2.0.1+cu118.

I used import_dfa.py and reordered the DFA panda data.
I tried to run HumanRF with the DFA image dataset but hit the error shown below.
How can I solve it?

[screenshots of the error]

Processing for mesh

Hi, thanks for this high-quality dataset. I have some questions about mesh.abc.

I also use RealityCapture to reconstruct a mesh, since I want to use the UV map and texture map. But the mesh you provide seems better than mine; specifically, it is smoother. Do you use the original images or the masked images for reconstruction? Can you provide the UV map of the reconstruction?

Temporal stability vs Rendering quality

Hi,
Thanks for sharing such great work. I tried it with my own custom data (one person in slow motion, captured over 360 degrees by 36 cameras).
I found that the rendering quality is related to the division of segments.

  • When I use dynamic partitioning, the segment array is (50, 25, 25). The person is blurred in the rendered view.
  • When I use a fixed partition (segment size = 6), the rendering quality improves, but the temporal stability drops significantly.

The frequency of video flickering seems to be positively related to the division of segments.
Is it possible to alleviate this problem through parameter settings? How can I achieve the "temporal stability" effect shown on the project homepage?

ModuleNotFoundError: No module named 'actorshq.dataset.volumetric_dataset', and its solution

I followed the instructions and successfully installed all the required packages (thank you for the very nicely written README, by the way). I also received the links for the ActorsHQ dataset.

However, when I run the download command from the parent humanrf directory, I encounter this error:

ModuleNotFoundError: No module named 'actorshq.dataset.volumetric_dataset'

I added the below code snippet to the start of download_manager.py to solve this error.

import sys
sys.path.append('./')  # make the repo root importable so the actorshq package resolves
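
An equivalent fix that avoids editing the file (standard Python behavior, not a documented HumanRF option) is to put the repo root on PYTHONPATH when invoking the script; here <args> stands for the usual download arguments:

PYTHONPATH=. python actorshq/dataset/download_manager.py <args>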

FileNotFoundError: [Errno 2] No such file or directory: 'vmaf'

Thanks for the amazing work!

Training proceeds and completes as intended on my end, but an error is thrown when the code calls "vmaf" for evaluation after training is done. Could you let me know which "vmaf" installation I need for the evaluation to complete?

== Evaluating with 134 frames ==
PSNR: 26.438246794533562
LPIPS: 0.06897718506628897
SSIM: 0.9463056193377604
Traceback (most recent call last):
  File "./humanrf/run.py", line 197, in <module>
    evaluate(
  File "/home/user/humanrf_train/actorshq/evaluation/evaluate.py", line 170, in evaluate
    subprocess.run(
  File "/home/user/.conda/envs/humanrf/lib/python3.8/subprocess.py", line 493, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/home/user/.conda/envs/humanrf/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/user/.conda/envs/humanrf/lib/python3.8/subprocess.py", line 1720, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'vmaf'
terminate called without an active exception
Aborted
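
For reference, a plausible fix, assuming the evaluation script shells out to Netflix's standalone vmaf CLI: build it from https://github.com/Netflix/vmaf with the standard meson workflow and make sure the resulting vmaf binary is on PATH. A sketch:

git clone https://github.com/Netflix/vmaf.git
cd vmaf/libvmaf
meson setup build --buildtype release
ninja -vC build
sudo ninja -vC build install   # installs the `vmaf` executable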

The use of xyzt_vectors

self.vectors = torch.nn.Parameter(
    torch.randn((4, vectors_finest_resolution, feature_size), dtype=torch.float) * 0.1
)
However, I don't understand why xyzt_vectors is initialized as a random tensor yet still receives gradients.
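
(This is standard PyTorch behavior rather than anything HumanRF-specific: wrapping a tensor in torch.nn.Parameter registers it as a trainable leaf, so the random values are only an initialization that the optimizer then refines through the gradients. A minimal sketch:)

import torch

# nn.Parameter marks the tensor as trainable; randn only provides the initialization.
p = torch.nn.Parameter(torch.randn(4, 8, 16) * 0.1)
loss = (p ** 2).sum()
loss.backward()
print(p.requires_grad, p.grad.shape)  # True torch.Size([4, 8, 16])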

And the next question: in tensor_composition.cu, why is the time dimension also multiplied by finest_resolution?

for (int i = 0; i < 4; ++i)
{
    // The following routine corresponds to align_corners=True in PyTorch's grid_sample.
    // TensoRF does the same.
    // CUDA's texture fetch in linear mode behaves the same way: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#linear-filtering
    // xyzt_coordinates are assumed to be in [0, 1].
    auto coord = xyzt_coordinates[sample_index][i] * finest_resolution - 0.5f;
    auto coord_floor = floorf(coord);
    auto coord_frac = coord - coord_floor;

    int coord0 = fmaxf(coord_floor, 0.0f);
    int coord1 = fminf(coord_floor + 1.0f, finest_resolution - 1);
    auto val0 = xyzt_vectors[i][coord0][feature_index];
    auto val1 = xyzt_vectors[i][coord1][feature_index];
    sampled_vectors[i] = val0 + coord_frac * (val1 - val0);

    auto dval = features[i] * d_output;
    atomicAdd(&d_xyzt_vectors[i][coord0][feature_index], dval * (1 - coord_frac));
    atomicAdd(&d_xyzt_vectors[i][coord1][feature_index], dval * coord_frac);
}
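
A minimal Python rendering of the same per-axis lookup (assuming coordinates in [0, 1] and align_corners=True semantics) makes the answer visible: every axis, including time, indexes a vector of length finest_resolution, so t must be scaled by the same factor as x, y, and z:

import math
import torch

def sample_vector(vec: torch.Tensor, u: float) -> torch.Tensor:
    # vec: [finest_resolution, feature_size]; u is one coordinate (x, y, z, or t) in [0, 1].
    n = vec.shape[0]                      # finest_resolution
    coord = u * n - 0.5                   # align_corners=True mapping
    frac = coord - math.floor(coord)
    lo = max(int(math.floor(coord)), 0)
    hi = min(int(math.floor(coord)) + 1, n - 1)
    return vec[lo] + frac * (vec[hi] - vec[lo])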

How to Parse SMPL-X Parameters in the ActorsHQ Dataset?

Dear Authors,

I hope you are doing well. I appreciate your work on the HumanRF project, and I am currently experimenting with the ActorsHQ dataset. However, I am having difficulty parsing the SMPL-X parameters and am obtaining an implausibly posed mesh. Please see the figure below.
[figure: incorrectly posed SMPL-X mesh]

Parameters: The provided parameters have a 'poses' array of shape [1, 87]. I parse it into 'body_pose' (indices [0:63]), 'right_hand_pose' (indices [63:75]), and 'left_hand_pose' (indices [75:87]).

Implementation: I first parse the 'poses' array. For the other parameters, I use the following mapping: {'shapes' → 'betas', 'expression' → 'expression', 'Rh' → 'global_orient', 'Th' → 'translation'}. For the missing values ('jaw_pose', 'leye_pose', 'reye_pose'), I fill in zero vectors. Then I forward the parsed results to the SMPLX model of 'SMPLX_MALE.npz' using the smplx Python module.
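
For concreteness, here is a minimal sketch of that parsing under the stated split, using the smplx package with 12 PCA components per hand (the PCA interpretation of the hand entries, the model path, and the file/key names such as smpl_params.npz are assumptions, not confirmed by the dataset docs):

import numpy as np
import torch
import smplx

params = np.load("smpl_params.npz")  # hypothetical container for the saved parameters
poses = torch.tensor(params["poses"], dtype=torch.float32)  # shape [1, 87]

# 63 body + 12 + 12 hand components = 87; treating hands as PCA coefficients is a guess.
model = smplx.create("models", model_type="smplx", gender="male",
                     use_pca=True, num_pca_comps=12)
output = model(
    betas=torch.tensor(params["shapes"], dtype=torch.float32),
    expression=torch.tensor(params["expression"], dtype=torch.float32),
    global_orient=torch.tensor(params["Rh"], dtype=torch.float32),
    transl=torch.tensor(params["Th"], dtype=torch.float32),
    body_pose=poses[:, 0:63],
    right_hand_pose=poses[:, 63:75],
    left_hand_pose=poses[:, 75:87],
)
vertices = output.vertices  # [1, 10475, 3]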

I would appreciate your guidance on the following questions:

  1. Are there any issues or pitfalls in my implementation? If so, can you suggest solutions or improvements?
  2. Are the provided SMPL-X parameters correct? Could you provide some explanation of the data format of the saved parameters, or a script to parse them?
  3. Have you estimated the missing values, and could you provide them? I noticed the expression changes between frames.

Historical 'Looker' Movie

This project makes the technology of the 1981 cult classic movie Looker real.
See the Internet Movie Database for details.
The basic plot: models are scanned, then killed, while their scans take over their acting roles in commercials, so the real-life models don't have to be paid.

Looker was the first film to create 3D shading with a computer, producing the first CGI human character: the model Cindy (Susan Dey). The movie achieved this feat before Disney's more famous Tron hit the screens. The website Filmsite said of Cindy: "Her digitization was visualized by a computer-generated simulation of her body being scanned--notably the first use of shaded 3D CGI in a feature film. Polygonal models obtained by digitizing a human body were used to render the effects."

Do we have the understanding of neuroanatomy today to make a Light Ocular-Oriented Kinetic Emotive Responses (L.O.O.K.E.R.) device? We certainly have the technology.

Obviously no 'issue' so feel free to close this. Just thought it was cool to note.

error: subprocess-exited-with-error, and its solution

I was able to install requirements.txt and tiny-cuda-nn by following the instructions, but I encountered error: subprocess-exited-with-error when installing the ActorsHQ package.

I used sudo apt-get install libglm-dev to solve this error.

CUDA Error during validation

Hi Mustafa and the team,

I seem to have managed to install everything correctly; following your example code, I get the sample dataset and try to train a model using the example config. It seems to train well, but at every validation step I hit a CUDA error in a tinycudann function. Any idea what it could be?

[I am able to resume training from the saved checkpoint, which is why the log shows 10k iterations, but it happens at the first validation step after 2.5k iterations, every time.]

Many thanks for the repo and support
Guy


(sdfstudio) [user@pc humanrf]$ CUDA_LAUNCH_BLOCKING=1 humanrf/run.py     --config example_humanrf     --workspace /tmp/example_workspace     --dataset.path data/actorshq/
/tmp/example_workspace
Running adaptive temporal partitioning with threshold 1.25: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 29.59it/s]
Images are loaded in 1.334407091140747 seconds by a pool of 64 processes.
Images are loaded in 0.045470476150512695 seconds by a pool of 1 processes.
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/lpips/weights/v0.1/alex.pth
[INFO] # parameters: 29.840 million
[INFO] Loading the checkpoint from /tmp/example_workspace/checkpoints/step_00007500.pth ...
[INFO] The full state is loaded at step 7500
  0%|                                                                                                                                                                                          | 0/50001 [00:00<?, ? total training iterations/s][W BinaryOps.cpp:594] Warning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (function operator())
loss=0.00010 (exp avg loss=0.00009), lr=0.008706:  20%| 9999/50001 [04:08<1:04:34, 10.32 total training iterations/s]
──── Validation at Step 10000 ────
CURRENT: psnr=29.6805 lpips=0.0639 ssim=0.9414  --- AVERAGE: psnr=29.6805 lpips=0.0639 ssim=0.9414 :   5%| 1/20 [00:01<00:21,  1.12s/it]
Traceback (most recent call last):
  File "humanrf/run.py", line 118, in <module>
    trainer.train(training_data_loader, validation_data_loader, max_steps=config.training.max_steps)
  File "/home/gafni/projects/humanrf/humanrf/trainer.py", line 196, in train
    self.validate(validation_data_loader)
  File "/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/gafni/projects/humanrf/humanrf/trainer.py", line 301, in validate
    partial_render_output = render(
  File "/home/gafni/projects/humanrf/humanrf/volume_rendering.py", line 120, in render
    queried_props = scene_representation(query_input)
  File "/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gafni/projects/humanrf/humanrf/scene_representation/humanrf.py", line 206, in forward
    output.radiance = self.color_net(torch.cat(color_net_input, dim=-1))
  File "/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gafni/.conda/envs/sdfstudio/lib/python3.8/site-packages/tinycudann/modules.py", line 189, in forward
    self.params.to(_torch_precision(self.native_tcnn_module.param_precision())).contiguous(),
RuntimeError: CUDA error: invalid configuration argument
CURRENT: psnr=29.6805 lpips=0.0639 ssim=0.9414  --- AVERAGE: psnr=29.6805 lpips=0.0639 ssim=0.9414 :   5%|█████▏                                                                                                  | 1/20 [00:01<00:24,  1.28s/it]
loss=0.00010 (exp avg loss=0.00009), lr=0.008706:  20%|████████████████████████▊                                                                                                   | 10000/50001 [04:10<16:40, 39.99 total training iterations/s]
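
A hedged note on the failure mode: "invalid configuration argument" usually means a CUDA kernel was launched with an invalid grid size, which can happen when an unusually large (or empty) batch of samples reaches the tiny-cuda-nn MLP during full-image validation. One workaround is to evaluate any tcnn-backed module in fixed-size chunks; chunked_forward and the chunk size below are made-up names for illustration, not HumanRF options:

import torch

def chunked_forward(net, x: torch.Tensor, chunk: int = 2**18) -> torch.Tensor:
    # Evaluate `net` on at most `chunk` rows at a time to bound kernel launch sizes.
    outs = [net(x[i : i + chunk]) for i in range(0, x.shape[0], chunk)]
    return torch.cat(outs, dim=0)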

Leaderboard

How can we participate in the leaderboard?

Format of transformation matrices in model_viewer.js

First of all, the results of this paper are so exciting, great work!

I really loved the visualization of the scenes (cameras/avatar) on the project page.
Therefore, I wanted to try out the three.js script you used for the visualization on my own camera poses (no avatar).

I did get everything to run without the avatar and with the hardcoded camera matrices.
Unfortunately, when I insert my own camera/transformation matrices, I get some weird results.
I see that cam_transforms holds flattened 4x4 transformation matrices, which is what I have as well.
What do the transformation matrices in cam_transforms represent?

My transformation matrices follow the nerfstudio transforms.json data conventions.
As a side note, I know for certain that the transformation matrices are valid since I'm getting good results with them.
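
One common culprit (a guess on my part, since the repo's convention isn't stated here): nerfstudio's transforms.json stores OpenGL-style camera-to-world matrices (+X right, +Y up, +Z backward), while many renderers expect OpenCV-style matrices (+Y down, +Z forward). Converting is a per-camera flip of the local Y and Z axes; a minimal sketch:

import numpy as np

def opengl_to_opencv(c2w: np.ndarray) -> np.ndarray:
    # Flip the camera's local Y and Z axes (right-multiply the 4x4 matrix).
    return c2w @ np.diag([1.0, -1.0, -1.0, 1.0])

If the viewer expects world-to-camera matrices instead, invert the result before flattening.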

In case you're curious, these are the "weird results":
[screenshot: three.js scene with misplaced cameras]

How to extract meshes after training

Hi, thanks for releasing the code and data of your great work :)

I want to extract meshes after training on my own data. I found that an .abc-to-.obj converter is provided in actorshq/toolbox, but I have no idea how to get the .abc file after training. Alternatively, are there other methods (e.g., marching cubes) to obtain meshes in this project?

Thank you in advance!
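
In the meantime, here is a generic sketch of mesh extraction via marching cubes. It assumes a callable query_density(xyz, t) that evaluates the trained field's density on [N, 3] points at a fixed frame t (a hypothetical helper, not an official HumanRF API) and uses scikit-image for the surface extraction:

import torch
from skimage import measure

res = 256
axis = torch.linspace(-1.0, 1.0, res)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1).reshape(-1, 3)
with torch.no_grad():
    density = query_density(grid.cuda(), t=0.0)  # hypothetical query function
density = density.reshape(res, res, res).cpu().numpy()
verts, faces, normals, _ = measure.marching_cubes(density, level=10.0)  # level is a guess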

aabbs.csv not found when downloading

I found "actorshq/dataset/download_manager.py" returns the following error.
It seems the provided dataset doesn't include aabbs.csv (I couldn't see the actual file in my file system), so something might happen on the server side.
I was wondering if someone could check it.

./actorshq/dataset/download_manager.py actorshq_access_4x.yaml ./data/actorhq --actor Actor01 --sequence Sequence1 --scale 4 --frame_start 1 --frame_stop 2
Reading links ....
Downloading scene.json
Downloading RGB and mask files ...: 100%|
Downloading calibration.csv
Downloading occupancy_grids.tar.gz
Downloading light_annotations.csv
Downloading aabbs.csv
Traceback (most recent call last):
  File "/workspaces/humanrf/./actorshq/dataset/download_manager.py", line 240, in <module>
    main()
  File "/workspaces/humanrf/./actorshq/dataset/download_manager.py", line 225, in main
    download_dataset(
  File "/workspaces/humanrf/./actorshq/dataset/download_manager.py", line 197, in download_dataset
    download_lazy(
  File "/workspaces/humanrf/./actorshq/dataset/download_manager.py", line 31, in download_lazy
    with open(target_file, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'data/actorhq/Actor01/Sequence1/data/actorhq/Actor01/Sequence1/aabbs.csv'
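
Note that the path in the traceback repeats the output directory twice, which suggests a client-side path-join bug rather than a server problem; a hypothetical illustration of how such a doubled path arises (variable names made up):

from pathlib import Path

output_dir = Path("data/actorhq/Actor01/Sequence1")
# Joining with a target that is already relative to the working directory duplicates the prefix:
target = output_dir / "data/actorhq/Actor01/Sequence1/aabbs.csv"
print(target)  # data/actorhq/Actor01/Sequence1/data/actorhq/Actor01/Sequence1/aabbs.csv
# open(target, 'wb') then fails because the doubled parent directory doesn't exist.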

Python Version

Greetings,

Could you specify the Python version you are using?
