
aleth-nerf's Introduction

[AAAI 2024] Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption

Ziteng Cui1,2, Lin Gu3,1, Xiao Sun2*, Xianzheng Ma4, Yu Qiao2, Tatsuya Harada1,3.

1.The University of Tokyo, 2.Shanghai AI Lab, 3.RIKEN AIP, 4.University of Oxford


2024.1.25: Updated the experimental results, dataset and arXiv version. We now provide all the comparison results, so feel free to use them for comparisons in your own research. You can download the experimental results of Aleth-NeRF and the comparison methods: Low-Light-Results from (google drive) or (baiduyun (passwd: 729w)), and Over-Exposure Results from (google drive) or (baiduyun (passwd: 6q4k)).

2023.12.9: Paper accepted by AAAI 2024! The old version of our paper is no longer valid; please refer to the new version, thanks~



" Can you see your days blighted by darkness ? "
-- Pink Floyd (Lost For Words)


🦆: Abstract

The standard Neural Radiance Fields (NeRF) paradigm employs a viewer-centered methodology, entangling the aspects of illumination and material reflectance into emission solely from 3D points. This simplified rendering approach presents challenges in accurately modeling images captured under adverse lighting conditions, such as low light or over-exposure. Motivated by the ancient Greek emission theory that posits visual perception as a result of rays emanating from the eyes, we slightly refine the conventional NeRF framework to train NeRF under challenging light conditions and generate normal-light novel views in an unsupervised manner. We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects. In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process. The Concealing Field thus compels NeRF to learn reasonable density and colour estimations for objects even in dimly lit situations. Similarly, the Concealing Field can mitigate over-exposed emissions during the rendering stage. Furthermore, we present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.

We assume objects are naturally visible; however, the Concealing Field attenuates the light along the viewing direction, so the observer sees a low-light scene. (c) Removing the Concealing Field, we can render normal-light images of low-light scenes. (d) Adding the Concealing Field, we can render normal-light images of over-exposed scenes.
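To make the idea concrete, below is a minimal PyTorch sketch of volume rendering with a per-sample concealing value. The tensor names (rgb, sigma, omega, deltas) and the exact way the concealing values enter the accumulation are illustrative assumptions, not the repository's implementation (see model.py for the actual code):

import torch

def render_with_concealing(rgb, sigma, omega, deltas, eps=1e-10):
    # rgb:    (rays, samples, 3) colours predicted at normal-light level
    # sigma:  (rays, samples)    volume densities
    # omega:  (rays, samples)    concealing field values in (0, 1]; 1 = clear air
    # deltas: (rays, samples)    distances between adjacent samples
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]),
                   1.0 - alpha[..., :-1] + eps], dim=-1), dim=-1)   # transmittance to each sample
    weights = alpha * trans                                         # standard NeRF weights
    normal_rgb = (weights[..., None] * rgb).sum(dim=-2)             # normal-light rendering
    conceal = torch.cumprod(omega, dim=-1)                          # light attenuated by the "air"
    dark_rgb = (weights[..., None] * conceal[..., None] * rgb).sum(dim=-2)  # concealed (low-light) rendering
    return normal_rgb, dark_rgb

In this sketch, training on low-light captures supervises the concealed rendering against the dark images, while at test time the concealing term is dropped to obtain normal-light views (and, conversely, it is added for over-exposed scenes).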


🐔: Environment setup:

1. Clone the repository:
$ git clone https://github.com/cuiziteng/Aleth-NeRF.git

$ cd Aleth-NeRF


2. Create and activate the conda environment (you can adjust to your own torch>=1.8.0 and CUDA versions):
$ conda create -n aleth_nerf -c anaconda python=3.8
$ conda activate aleth_nerf
$ conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
$ pip3 install -r requirements.txt
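Optionally, a quick sanity check (not part of the repository) to confirm that the installed PyTorch build can see your GPUs before launching the training commands below:

import torch

print(torch.__version__)          # e.g. 1.11.0
print(torch.cuda.is_available())  # should be True if the CUDA toolkit matches your driver
print(torch.cuda.device_count())  # number of GPUs available for the commands below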

🦜: Usage:

(1). Proposed dataset

We collect the first paired low-light & normal-light & over-exposure multi-view image dataset. Download the LOM dataset from: [google drive] or [baiduyun (passwd: ve1t)].

The LOM dataset contains 5 scenes (buu | chair | sofa | bike | shrub). Each scene includes 25~65 paired multi-view normal-light, low-light and over-exposure images, plus low-light images enhanced by different 2D low-light enhancement methods.

Unzip the downloaded file and place LOM under the ./data folder; the LOM dataset is then organized as follows (a pose-file loading sketch appears after the tree):

data
└─── LOM_full
    └─── buu
        │─── colmap_sparse
        │─── colmap_text
        │─── high (normal-light images)
        │─── low  (low-light images)
        │─── over_exp  (over-exposure images)
        │─── Low_light_enhance (low-light images enhanced by 2D enhancement methods)
        │    │─── enh_RetiNexNet (enhanced by [RetiNexNet, BMVC 2018])
        │    │─── enh_zerodce (enhanced by [Zero-DCE, CVPR 2020])
        │    │─── enh_SCI (enhanced by [SCI, CVPR 2022])
        │    │─── enh_IAT (enhanced by [IAT, BMVC 2022])
        │    │─── enh_MBLLEN (enhanced by video enhancement method [MBLLEN, BMVC 2018])
        │    │─── enh_LLVE (enhanced by video enhancement method [LLVE, CVPR 2021])
        │─── Exposure_correction (over-exposure images corrected by 2D exposure correction methods)
        │    │─── HE (corrected by Histogram Equalization)
        │    │─── IAT (corrected by [IAT, BMVC 2022])
        │    │─── MSEC (corrected by [MSEC, CVPR 2021])
        │─── colmap.db
        │─── transforms_test.json (test scenes)
        │─── transforms_train.json (train scenes)
        │─── transforms_val.json (validation scenes)
    │─── chair
        │─── ...
    │─── sofa
        │─── ...
    │─── bike
        │─── ...
    │─── shrub
        │─── ...
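For reference, the per-scene transforms_*.json files follow the common Blender/NeRF-synthetic layout (a field-of-view entry plus one 4x4 camera-to-world matrix per frame). A minimal reading sketch under that assumption; the exact key names in LOM may differ slightly:

import json
import numpy as np

with open("data/LOM_full/buu/transforms_train.json") as f:
    meta = json.load(f)

print(meta.get("camera_angle_x"))               # horizontal field of view in radians (Blender-style key)
for frame in meta["frames"]:
    pose = np.array(frame["transform_matrix"])  # 4x4 camera-to-world matrix
    print(frame["file_path"], pose.shape)       # image path relative to the scene folder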

(2). Training Aleth-NeRF

By default, we use 4 GPUs to train Aleth-NeRF on the LOM dataset (around 2~2.5 hours per scene); feel free to set a different GPU number or GPU ids depending on your own device. We take training on the "buu" scene as an example:

For low-light conditions, we set con = 12 and eta = 0.45 by default (Table 2's results):

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --logbase ./logs --con 12 --eta 0.45

You can also adjust the hyper-parameters "con" (contrast degree) and "eta" (enhancement degree) to achieve different enhancement results, e.g.:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --logbase ./logs --con 10/12/15 --eta 0.4/0.45/0.5

For over-exposure conditions, we set con = 1 and eta = 0.45 by default (Table 3's results):

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/aleth_nerf_exp/aleth_nerf_buu.gin --logbase ./logs_exp --con 1 --eta 0.45

You can also directly use the following command to run all 5 scenes together:

bash run/run_LOM_aleth.sh

(3). Evaluation with pre-trained weights

You could also download our pre-trained weights for direct model evaluation: Low-Light-Results from (google drive) or (baiduyun (passwd: 729w)), and Over-Exposure Results from (google drive) or (baiduyun (passwd: 6q4k)). Then unzip the file under the ./logs folder and test each scene as follows:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --con 12 --eta 0.45 --logbase ./logs --ginb run.run_train=False

If you want to render videos with novel views, directly add "--ginb run.run_render=True":

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --con 12 --eta 0.45 --logbase ./logs --ginb run.run_train=False --ginb run.run_render=True

You can also directly use the following command to render all 5 scenes together:

bash run/run_LOM_aleth_test.sh

(4). LOM dataset Benchmark

All the comparison methods' weights and experimental results can be downloaded: Low-Light-Results from (google drive) or (baiduyun (passwd: 729w)), and Over-Exposure Results from (google drive) or (baiduyun (passwd: 6q4k)). The comparison results are organized as follows:

Low-Light-Results :

logs
└───
    └─── Aleth-NeRF (Aleth-NeRF results with various ablations)
    └─── NeRF (NeRF results and "NeRF + 2D enhance methods" results)
    └─── RetiNexNet (RetiNexNet + NeRF)
    └─── SCI (SCI + NeRF)
    └─── zerodce (zerodce + NeRF)
    └─── IAT (IAT + NeRF)
    └─── MBLLEN (MBLLEN + NeRF)
    └─── LLVE (LLVE + NeRF)

## LOM dataset low-light benchmark ##

Over-Exposure-Results :

logs
└───
    └─── Aleth-NeRF (Aleth-NeRF results)
    └─── NeRF (NeRF results and "NeRF + 2D exposure correction" results)
    └─── HE (Histogram Equalization + NeRF)
    └─── IAT (IAT + NeRF)
    └─── MSEC (MSEC + NeRF)

## LOM dataset over-exposure benchmark ##

Then you can evaluate with our pre-trained weights.

For basic NeRF methods:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/nerf/nerf_buu.gin --ginb run.run_train=False

For NeRF trained on enhanced images:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 run.py --ginc configs/LOM/compare_methods/RetiNexNet/nerf_buu.gin --ginb run.run_train=False
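If you want to recompute an image-quality number yourself from any of the result folders above, a minimal PSNR sketch is given below. The two file paths are placeholders for a rendered test view and its normal-light ground truth; the metrics logged by run.py remain the reference numbers.

import numpy as np
from PIL import Image

def psnr(pred, gt):
    # both images as float arrays in [0, 1]
    mse = np.mean((pred - gt) ** 2)
    return -10.0 * np.log10(mse + 1e-12)

pred = np.asarray(Image.open("rendered_view.png"), dtype=np.float64) / 255.0      # placeholder path
gt = np.asarray(Image.open("ground_truth_view.png"), dtype=np.float64) / 255.0    # placeholder path
print(f"PSNR: {psnr(pred, gt):.2f} dB")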

🐤: Others:

If you want to edit the code or find out the details of Aleth-NeRF, refer directly to model.py and helper.py.

For the angle control used to render a video on the LOM dataset, please refer to here.
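As general background (not the repository's implementation), novel-view videos are usually rendered from a smooth camera path, e.g. a spiral around the scene centre with every camera looking at the origin. A minimal NumPy sketch of such a path, with illustrative radius/height values:

import numpy as np

def look_at_pose(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    # 4x4 camera-to-world matrix looking from `eye` towards `target` (OpenGL-style, -z forward)
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, true_up, -forward, eye
    return pose

def spiral_path(n_frames=120, radius=3.0, height=0.5):
    # one camera pose per video frame, circling the origin with a gentle vertical wobble
    poses = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = np.array([radius * np.cos(t), radius * np.sin(t), height * np.sin(2.0 * t)])
        poses.append(look_at_pose(eye))
    return np.stack(poses)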


🦉: Reference and Related Works:

Acknowledgement:

Code is based on NeRF-Factory; many thanks for their excellent codebase! If you use our dataset, or our code & paper help you, please consider citing our work:

@inproceedings{cui_aleth_nerf,
  title={Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption},
  author={Cui, Ziteng and Gu, Lin and Sun, Xiao and Ma, Xianzheng and Qiao, Yu and Harada, Tatsuya},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024}
}

@misc{cui2023alethnerf,
      title={Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields}, 
      author={Ziteng Cui and Lin Gu and Xiao Sun and Xianzheng Ma and Yu Qiao and Tatsuya Harada},
      year={2023},
      eprint={2303.05807},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

aleth-nerf's People

Contributors

cuiziteng, jeongyw12382, seungjooshin


aleth-nerf's Issues

RestoreDet

Hello, could you please share the source code for the RestoreDet paper?

render 3D

Hello, very nice work! I have a question. After testing the model we get 3 image outputs. Can you tell me how to render the 3D motion picture? Thanks!

I think there is a bug!

Training Stage with Concealing Field

if mode != 'test':
    if i_level == 0:
        accum_prod_dark = torch.cat([torch.ones_like(alpha[..., :1] * coarse_dark[0]),
                                     torch.cumprod(1.0 - alpha[..., :-1] + eps, dim=-1) * coarse_dark[1:]], dim=-1)
    else:
        accum_prod_dark = torch.cat([torch.ones_like(alpha[..., :1]) * fine_dark[0],
                                     torch.cumprod(1.0 - alpha[..., :-1] + eps, dim=-1) * fine_dark[1:]], dim=-1)

Here, **torch.cumprod(1.0 - alpha[..., :-1] + eps, dim=-1) * fine_dark[1:]**: the dimensions are (4096, 64) and (128,), so they cannot be multiplied, right? Looking forward to your reply.

How to build a custom dataset

For a dataset that I captured myself, how can I obtain the separate JSON files such as transform_test.json? I first ran COLMAP to get the sparse data; how do I then generate the other JSON files in your dataset?

Training Time

How many GPUs do you use for training, and how many hours does it take to converge the model?

Gradient computation for omega

I noticed that the transmittance of a point is related to the alpha and omega of all the points before it. But when computing the gradient of the loss function with respect to omega, the gradient of omega at the last point of a ray should not be available, right? Because the final color of the ray is calculated without the omega variable of the last point, right?

I noticed that the initial dimension of omega in your code is the number of sample points plus 1. I can't figure out how to solve the problem that the loss function can't calculate the gradient of the omega variable with respect to the last point of the ray.

Spiral Path for the video

Hi, Nice work

I wanted to find the code which calculates the poses for rendering the video. I went through the code but was unable to locate it. Would it be possible to please refer me to the section where you are calculating the poses for the path required to render the video?

Question about geometry.

Hi there,
I was wondering: if it can rebuild a normal radiance (light) from a low-light condition, is the geometry also rebuilt well?

About single image low-light enhancement

First of all, thank you very much for your contribution. You stated in the article that this method also works well for low-light enhancement of a single image, thanks to your prior settings. I would like to ask whether single-image enhancement code is provided.

Using the NeRF in this paper to do single-image low-light enhancement is a bit tedious, because the camera pose and other parameters have to be fixed. Have you ever tried using your prior for low-light enhancement of a single image?

RuntimeError: number of dims don't match in permute

Thank you very much for your work.
When I run
CUDA_VISIBLE_DEVICES=0 python3 run.py --ginc configs/LOM/aleth_nerf/aleth_nerf_buu.gin --eta 0.1
the error "RuntimeError: number of dims don't match in permute" appears.
Printed output:
the scene name is: buu
the log dir is: ./logs/aleth_nerf_blender_buu_220901eta0.1
Global seed set to 220901
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory /home/zenglongjian/Aelth-NeRF/logs/aleth_nerf_blender_buu_220901eta0.1 exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

| Name | Type | Params

0 | model | Aleth_NeRF | 1.3 M

1.3 M Trainable params
0 Non-trainable params
1.3 M Total params
5.296 Total estimated model params size (MB)
Epoch 0: 100%|█| 12523/12523 [59:02<00:00, 3.54it/s, loss=0.000127, v_num=0, train/psnr1=43.70, train/psnrDownloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /home/zenglongjian/.cache/torch/hub/checkpoints/vgg16-397923af.pth
100%|██████████████████████████████████████████████████████████████████| 528M/528M [01:02<00:00, 8.81MB/s]
Downloading: "https://github.com/richzhang/PerceptualSimilarity/raw/master/lpips/weights/v0.1/vgg.pth" to /home/zenglongjian/.cache/torch/hub/checkpoints/vgg.pth█████████████████▉| 527M/528M [01:02<00:00, 7.53MB/s]
100%|██████████████████████████████████████████████████████████████| 7.12k/7.12k [00:00<00:00, 3.37MB/s]
Epoch 9: 100%|█| 12523/12523 [58:35<00:00, 3.56it/s, loss=9.05e-05, v_num=0, train/psnr1=44.00, train/psnrTrainer.fit stopped: max_steps=125000 reached.
Epoch 9: 100%|█| 12523/12523 [58:35<00:00, 3.56it/s, loss=9.05e-05, v_num=0, train/psnr1=44.00, train/psnr
the checkpoint path is: ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Restoring states from the checkpoint path at ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Lightning automatically upgraded your loaded checkpoint from v1.7.6 to v1.9.5. To apply the upgrade to yourles permanently, run python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/aleth_nerf_blendbuu_220901eta0.1/last.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at ./logs/aleth_nerf_blender_buu_220901eta0.1/last.ckpt
Testing DataLoader 0: 100%|██████████████████████████████████████████████████| 69/69 [00:15<00:00, 4.44itTraceback (most recent call last):
File "run.py", line 244, in
run(
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/config.py", line 1605, inn_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/utils.py", line 41, in aunt_exception_message_and_reraise
raise proxy.with_traceback(exception.traceback) from None
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/gin/config.py", line 1582, inn_wrapper
return fn(*new_args, **new_kwargs)
File "run.py", line 191, in run
trainer.test(model, data_module, ckpt_path=ckpt_path)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 794, in test
return call._call_and_handle_interrupt(
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/caly", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 842, in _test_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1112, in _run
results = self._run_stage()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1188, in _run_stage
return self._run_evaluate()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1228, in _run_evaluate
eval_loop_results = self._evaluation_loop.run()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop., line 206, in run
output = self.on_run_end()
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/dataler/evaluation_loop.py", line 180, in on_run_end
self._evaluation_epoch_end(self._outputs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/loops/dataler/evaluation_loop.py", line 288, in _evaluation_epoch_end
self.trainer._call_lightning_module_hook(hook_name, output_or_outputs)
File "/home/zenglongjian/.conda/envs/aleth_nerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trar.py", line 1356, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/zenglongjian/Aelth-NeRF/src/model/aleth_nerf/model.py", line 424, in test_epoch_end
darknesss = self.alter_gather_cat_conceil(outputs, "darkness", all_image_sizes)
File "/home/zenglongjian/Aelth-NeRF/src/model/interface.py", line 52, in alter_gather_cat_conceil
all = all.permute((1, 0, 2)).flatten(0, 1)
RuntimeError: number of dims don't match in permute
In call to configurable 'run' (<function run at 0x7efb2cf63550>)
Testing DataLoader 0: 100%|██████████| 69/69 [00:16<00:00, 4.31it/s]

How to make the gif in your website

I just want to know how to make the gif on your website? Is it made with images rendered from all training viewpoints? Or is it from custom viewpoints? Is there any detailed code for this?

Depth Image

Thank you for the outstanding work. I've observed that the depth images in the log directory are in grayscale, whereas those in the paper are colorful. Could you please provide the code or offer some suggestions on how to generate these colorful depth images?

About Eq 9 in the paper

Great work, but I have a question. Is the implementation of volume rendering in the code consistent with Eq. 9 in the paper? From my understanding, the implementation in the code directly multiplies the local occlusion field darkness from the network output with trans. I interpret this to mean that in the implementation of trans in the code, the trans of a point is only influenced by the occlusion field at the current position, without considering the occlusion field before this point. As I'm relatively new to NERF, my understanding of the paper may not be thorough enough. Could you please address this issue?
In your code:

weights_dark = alpha * accum_prod_dark
comp_rgb_dark = (weights_dark[..., None] * rgb * darkness).sum(dim=-2)
acc_dark = weights_dark.sum(dim=-1)

In the paper:
(screenshot of Eq. 9 from the paper)

About color deviation between test results and GT

Thank you for your great work.

I am trying to replicate your project and I found that there is a color shift between the rendered images and the results presented in your paper. The results in the paper look great. I am wondering whether you obtained those results using the hyperparameters specified in the configs directory.

thanks.

model confusion

Hi, thank you for your great work. But I'm confused about these models, could you explain them please?
Thanks.
(attached screenshot)

About the dataset

Hello, were the sparse point cloud, camera poses and other information in your dataset generated by running COLMAP on the normal-light images or on the low-light images?

about environmental question

Hi, thank you for your great work. However, the following problem occurred when I configured the environment according to the steps. Could you please tell me how to solve it?

data type

How can we use nerfstudio's data in this project?
