wangfeng18 / 3d-gaussian-splatting
Implementation for 3d gaussian splatting
License: MIT License
It would be interesting to work with existing point-cloud data instead of SfM/COLMAP data. For example, the E57 file format has rich metadata as well as panorama images (which we can undistort) and would, in my opinion, allow plugging it into the COLMAP-based pipeline. An example E57 file can be seen at https://static.matterport.com/misc/UoqjwziqrZs-Aon_Lobby.e57 (1.15 GB).
Any ideas on where to start?
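One possible starting point, as a hedged sketch: an E57 scan header stores a camera-to-world (scanner-to-world) pose, while COLMAP's `images.txt` expects world-to-camera, so the pose needs to be inverted before it can be fed into the pipeline. The function name below is illustrative, not part of any existing API:

```python
import numpy as np

def e57_c2w_to_colmap_w2c(c2w):
    """Invert a 4x4 camera-to-world pose (as found in an E57 scan header)
    into the world-to-camera convention COLMAP expects."""
    R, t = c2w[:3, :3], c2w[:3, 3]
    w2c = np.eye(4)
    w2c[:3, :3] = R.T          # inverse of a rotation is its transpose
    w2c[:3, 3] = -R.T @ t      # inverse translation
    return w2c

# Demo: a scanner placed at (1, 2, 3) with identity orientation.
scan_pose = np.eye(4)
scan_pose[:3, 3] = [1.0, 2.0, 3.0]
w2c = e57_c2w_to_colmap_w2c(scan_pose)
```

The panorama images would additionally need to be undistorted into pinhole views before COLMAP-style rendering applies.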
| Rendering Speed (official) | Rendering Speed (ours) |
| --- | --- |
| 160 FPS (avg. Mip-NeRF 360) | 60 FPS |
Great work here!
This repo shows great performance on some outdoor 360-degree scene datasets.
I wonder if you have tried the NeRF LLFF dataset from here.
The PSNR is always less than 10, so the performance is not good.
Are there any "tricks" for training these kinds of datasets, e.g. a parameter value such as grad_thresh?
All of these parameters were left at their defaults.
For the record, I used the command: python train.py --exp llff --data /nerfllff/fern/
Looking forward to your reply!
Best
Hi, may I know which Gaussian package is used in splatter.py?
Hi, I am using my own dataset, which has only two images. When training reached about 700 iterations, I encountered the following error. Do you know how to solve this? Thanks!
Error info:
Traceback (most recent call last):
File "train.py", line 407, in <module>
trainer.train()
File "train.py", line 206, in train
output = self.train_step(i_iter, bar)
File "train.py", line 118, in train_step
loss.backward()
File "/home/ubuntu/miniconda3/envs/3dgs_custom/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/home/ubuntu/miniconda3/envs/3dgs_custom/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
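A minimal repro of this error, under the assumption (not confirmed from the repo) that with only two images every Gaussian ends up culled or detached, so the rendered image no longer carries a grad_fn:

```python
import torch

# If the loss is computed from tensors that require no grad and have no
# graph attached, backward() raises exactly this RuntimeError.
pred = torch.rand(3)              # plain tensor: no requires_grad, no graph
target = torch.rand(3)
loss = (pred - target).abs().mean()
# loss.backward() -> RuntimeError: element 0 of tensors does not require
# grad and does not have a grad_fn
```

A sanity check before calling backward (e.g. `assert loss.requires_grad`) can localize where the graph gets lost.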
Thank you for your great work!
I also have an implementation using Taichi Lang: https://github.com/wanmeihuali/taichi_3d_gaussian_splatting
However, I think both of our PSNR metrics are a bit worse than what the paper claims, so I'm wondering what their "trick" is...
I have the following questions:
Hi,
After some training time I'm getting
loss: 0.386204/0.673039/6.7382/[882569/237845]:lr: 0.0228|0.0228|0.0023|0.0023|0.0023 grad: [0.000000|0.000000|0.000000|0.000000|0.000000]: 10%|██▍ | 700/7001 [00:31<04:47, 21.90it/s]
Traceback (most recent call last):
File "train.py", line 403, in <module>
trainer.train()
File "train.py", line 206, in train
output = self.train_step(i_iter, bar)
File "train.py", line 162, in train_step
self.accum_max_grad/(self.grad_counter+1e-3).unsqueeze(dim=-1),
AttributeError: 'float' object has no attribute 'unsqueeze'
Any idea what the problem could be?
Thanks!
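A guess at the cause, shown as a minimal sketch (assuming `grad_counter` gets reset to a Python float somewhere instead of staying a tensor):

```python
import torch

# Bug pattern: a scalar float has no .unsqueeze, so
# (grad_counter + 1e-3).unsqueeze(dim=-1) raises AttributeError.
bad_counter = 0.0
# (bad_counter + 1e-3).unsqueeze(dim=-1)  -> AttributeError: 'float' object...

# Keeping the counter a tensor restores the intended broadcasting:
good_counter = torch.zeros(5)
denom = (good_counter + 1e-3).unsqueeze(dim=-1)   # shape (5, 1)
```

Checking where `self.grad_counter` is initialized or zeroed (a tensor vs. `0.0`) would be the first thing to verify.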
Great work! I'm trying to attach a network to estimate camera parameters. Since most of the rendering is coded in CUDA, would I then need to write the network and its backward pass entirely in CUDA as well? Thanks.
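A hedged sketch of one possible answer: if the CUDA rasterizer exposes gradients with respect to the view matrix (something to verify for this repo), the pose-estimation part can stay in plain PyTorch, with only the rasterizer in CUDA. The class below is illustrative, not part of the repo's API:

```python
import torch

class LearnablePose(torch.nn.Module):
    """Camera pose as learnable parameters; autograd handles the backward."""
    def __init__(self):
        super().__init__()
        self.quat = torch.nn.Parameter(torch.tensor([1.0, 0.0, 0.0, 0.0]))
        self.trans = torch.nn.Parameter(torch.zeros(3))

    def rotation(self):
        # Normalized quaternion -> rotation matrix, fully differentiable.
        w, x, y, z = self.quat / self.quat.norm()
        return torch.stack([
            torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z), 2*(x*z + w*y)]),
            torch.stack([2*(x*y + w*z), 1 - 2*(x*x + z*z), 2*(y*z - w*x)]),
            torch.stack([2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)]),
        ])
```

The resulting view matrix would then be passed into the (assumed differentiable) CUDA render call, and gradients flow back into `quat` and `trans` automatically.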
╭─────────────── viser ───────────────╮
│           ╷                         │
│ HTTP      │ http://127.0.0.1:6789   │
│ Websocket │ ws://127.0.0.1:6789     │
│           ╵                         │
╰─────────────────────────────────────╯
This is the viser log output, but I want to display the viewer on my Mac.
Thank you for this amazing work. I was testing it, and on my first run I get the following errors. Can you help me with this?
File "/3d-gaussian-splatting/train.py", line 374, in <module>
gaussian_splatter = Splatter(
File "/3d-gaussian-splatting/splatter.py", line 427, in __init__
self.set_camera(0)
File "/3d-gaussian-splatting/splatter.py", line 490, in set_camera
self.current_w2c_quat = self.w2c_quats[idx]
IndexError: list index out of range
My command to start training is:
python train.py --data data/sample_data/ --exp exp_1
My sample data is processed with colmap as structured as following:
- sample_data/
  - images/
  - sparse/0/*.bin
  - transforms_train.json
I made a COLMAP reconstruction using Nerfstudio and then trained in your framework. Everything was great until I wanted to look at the result in the viewer: the three rotation axes became confused (gimbal lock).
What is the right approach to make the viewer work normally with Nerfstudio-prepared data?
I think I tried again using just the COLMAP GUI and got the same result.
Is your viewer OpenGL-handed or OpenCV-handed?
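For reference, a minimal sketch of the usual convention fix (assuming the mismatch is indeed OpenGL vs. OpenCV camera handedness): Nerfstudio stores camera-to-world poses in the OpenGL convention (camera looks down -Z, +Y up), while COLMAP/OpenCV uses +Z forward, +Y down. Converting flips the Y and Z camera axes:

```python
import numpy as np

def opengl_to_opencv(c2w):
    """Flip the Y and Z camera axes of a 4x4 camera-to-world matrix to go
    from the OpenGL convention to the OpenCV/COLMAP convention."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w @ flip

c2w_cv = opengl_to_opencv(np.eye(4))
```

If the viewer expects one convention and receives the other, rotations will look scrambled, which could explain the gimbal-lock-like behavior.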
This is an amazing and independent implementation. However, I cannot get the .pth file working with any PLY-based viewer. Even when mapping the parameters to their corresponding PLY fields, renderers like antimatter15/splat won't show anything.
Is it possible to export directly to a compatible format?
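A hedged sketch of such an exporter, using the field naming of the reference 3DGS PLY layout (`f_dc_*`, `opacity`, `scale_*`, `rot_*`) that viewers like antimatter15/splat parse. Two assumptions to verify against this repo's checkpoint: the reference format stores scales in log space and opacity as a pre-sigmoid logit, so raw tensors may need transforming first, and a full file also carries `nx/ny/nz` and `f_rest_*` SH fields, omitted here for brevity:

```python
import os
import tempfile
import numpy as np

def export_splat_ply(path, xyz, f_dc, opacity, scale, rot):
    """Write Gaussians as a binary little-endian PLY (minimal field subset)."""
    n = xyz.shape[0]
    fields = (["x", "y", "z"]
              + [f"f_dc_{i}" for i in range(3)]      # DC color coefficients
              + ["opacity"]                          # assumed: logit, not [0,1]
              + [f"scale_{i}" for i in range(3)]     # assumed: log-scale
              + [f"rot_{i}" for i in range(4)])      # quaternion (wxyz)
    header = ["ply", "format binary_little_endian 1.0", f"element vertex {n}"]
    header += [f"property float {f}" for f in fields]
    header += ["end_header"]
    data = np.concatenate([xyz, f_dc, opacity, scale, rot], axis=1).astype("<f4")
    with open(path, "wb") as f:
        f.write(("\n".join(header) + "\n").encode("ascii"))
        f.write(data.tobytes())

# Demo with two dummy Gaussians.
out_path = os.path.join(tempfile.gettempdir(), "demo_gaussians.ply")
export_splat_ply(out_path,
                 np.zeros((2, 3)), np.zeros((2, 3)), np.zeros((2, 1)),
                 np.zeros((2, 3)), np.tile([1.0, 0.0, 0.0, 0.0], (2, 1)))
```

Mapping the repo's .pth tensor names onto these fields is the part that has to be checked against splatter.py.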
Thanks for your great work!
With the latest version of the code, I notice GPU memory increasing with training iterations until it eventually overflows. I'm wondering how to fix it. Thanks.
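One common cause of steadily growing GPU memory in PyTorch training loops (an assumption, not a diagnosis of this repo): accumulating a loss tensor across iterations, which keeps every iteration's autograd graph alive. A minimal sketch of the pattern and the fix:

```python
import torch

total = 0.0
for _ in range(3):
    loss = torch.rand(1, requires_grad=True).sum()
    # `total += loss` would retain each iteration's graph and leak memory;
    # .item() converts to a plain Python float and lets the graph be freed.
    total += loss.item()
```

Other usual suspects worth checking: logging tensors without `.detach()`, and densification buffers that grow but are never pruned.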
Hello and thank you for this elegant implementation! The code is very readable and I look forward to seeing more commits.
I tried to run the code on some mipnerf360 datasets other than garden but I get a shape mismatch error in the photometric loss calculation. I have verified the bug on the bonsai and kitchen scenes.
$ python train.py --data bonsai
100%|█████████████████████████████████████████████████████████████████████████████| 292/292 [00:03<00:00, 96.17it/s]
100%|█████████████████████████████████████████████████████████████████████████████| 206613/206613 [00:43<00:00, 4710.91it/s]
805.6752699398112
805.6752699398112
0%| | 0/7001 [00:00<?, ?it/s]
{' culling tiles': 0.0019258240461349488,
' gather culled tiles': 0.0009233599901199341,
' rendering': 0.01425823974609375,
' sorting': 0.0008818240165710449,
' set camera': 0.00043235200457274913,
' set image': 0.00015588800236582755,
' write out': 1.4720000326633453e-05,
' frustum cuda': 0.0016573439836502074,
'crop': 0.00014073599874973297,
'forward': 0.021217279434204102,
'frustum culling': 0.0017174719572067261,
'render function': 0.018458879470825196,
'set camera': 0.0007449600100517273}
0%| | 0/7001 [00:00<?, ?it/s]
Traceback (most recent call last):
File "3d-gaussian-splatting/train.py", line 385, in <module>
trainer.train()
File "3d-gaussian-splatting/train.py", line 190, in train
output = self.train_step(i_iter, bar)
File "3d-gaussian-splatting/train.py", line 92, in train_step
l1_loss = ((rendered_img - self.gaussian_splatter.ground_truth).abs()).mean()
RuntimeError: The size of tensor a (519) must match the size of tensor b (520) at non-singleton dimension 0
It seems like the error is related to cropping the padded render in splatter.py. I will try to debug it and submit a fix soon.
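A hedged sketch of the likely mechanism, assuming the renderer pads the image up to a tile-size multiple and must crop back to the ground-truth resolution (TILE is an assumed value here, and `pad_then_crop` is illustrative, not the repo's function):

```python
import numpy as np

TILE = 16  # assumed tile size used by the tile-based rasterizer

def pad_then_crop(h, w):
    """Pad a render up to tile multiples, then crop back to (h, w)."""
    ph = (h + TILE - 1) // TILE * TILE   # ceil to tile multiple
    pw = (w + TILE - 1) // TILE * TILE
    img = np.zeros((ph, pw, 3))          # stand-in for the padded render
    return img[:h, :w]                   # crop back to ground-truth shape
```

If the crop uses the wrong dimension (or an off-by-one rounding), a 519-row ground truth against a 520-row render produces exactly the reported size mismatch.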
Thanks again for the great project.