autonomousvision / shape_as_points
[NeurIPS'21] Shape As Points: A Differentiable Poisson Solver
Home Page: https://pengsongyou.github.io/sap
License: MIT License
Hello
For some reasons, I would like to preprocess the ShapeNet dataset myself.
While running process_shapenet.py, I found that the script stalls at the point_rasterize function.
How long does it take to generate psr_gr?
Do we need GPUs for this step?
Is it only available with the existing dataset?
I got an error when trying to set the parameter o3d_show to True in the yaml file.
aga/res_32 --train:lr_pcl 0.002000 --data:object_id -1
Changing model:grid_res ---- 256 to 32
Changing model:psr_sigma ---- 2 to 2
Changing train:input_mesh ---- to None
Changing train:total_epochs ---- 4000 to 1000
Changing train:out_dir ---- /dsk1/shape_as_points/after/aga to /dsk1/shape_as_points/after/aga/res_32
Changing train:lr_pcl ---- 2e-2 to 0.002000
Changing data:object_id ---- 0 to -1
/dsk1/shape_as_points/after/aga/res_32
Segmentation fault (core dumped)
I couldn't find the right solution to these questions; how can this be fixed?
I tried, but got a bad result.
Hi Songyou,
Thanks again for sharing the code for your excellent work!
Could you please provide the pretrained models of the optimization-based version that are reported in your paper (for the three datasets: dfaust, thingi, deep_geometric_prior_data)?
Looking forward to your reply. :)
Best,
Runsong
Hi Songyou,
Thanks for sharing your code for the excellent work!
I am curious about the optimization-based part and I have two questions. Can you help me? Thanks a lot in advance. :)
Looking forward to your reply!
Best,
Runsong
Good afternoon. I have a few questions: I am trying to build a point cloud processing pipeline for my master's work at the university, and I will be very glad if you can answer them.
When I try to run the program without any modification, an error is triggered at optim.py:11 "from src.optimization import Trainer":
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Thanks for sharing this great work! It seems that the interpolation method used in the point_rasterize and grid_interp functions is based on the absolute distance of the point to the adjacent grid points. Have you ever tried other interpolation methods, such as trilinear interpolation? Do different interpolation methods affect the results?
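For what it's worth, the product-of-absolute-distance weighting is exactly trilinear interpolation run in reverse (a scatter instead of a gather). A toy NumPy sketch of this scatter, with illustrative names rather than the repo's actual point_rasterize code:

```python
import numpy as np

def trilinear_rasterize(points, values, res):
    """Scatter per-point values onto a res^3 grid with trilinear weights.

    points: (N, 3) coordinates in [0, 1); values: (N,).
    Each point contributes to the 8 surrounding grid nodes with weight
    prod(1 - |d|) over the three axes -- i.e. trilinear interpolation
    in reverse, which matches the absolute-distance weighting.
    """
    grid = np.zeros((res, res, res))
    coords = points * (res - 1)           # continuous grid coordinates
    base = np.floor(coords).astype(int)   # lower corner of enclosing cell
    frac = coords - base                  # fractional offset in [0, 1)
    for corner in np.ndindex(2, 2, 2):    # visit the 8 cell corners
        offset = np.array(corner)
        # per-axis weight: frac where offset==1, (1 - frac) where offset==0
        w = np.prod(np.where(offset == 1, frac, 1.0 - frac), axis=1)
        idx = np.clip(base + offset, 0, res - 1)
        np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w * values)
    return grid
```

Because the 8 weights of each point sum to 1, the grid total equals the sum of the point values regardless of the kernel, so in this sense the two schemes should behave identically.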
Hi Songyou,
Thanks again for sharing the code for your excellent work!
I found that the script "eval_meshes.py" is prepared for the learning-based method, but I want to evaluate the mesh results produced by the optimization-based part. Could you please give some suggestions for this?
Looking forward to your reply. :)
Best,
Runsong
Thank you for great work
I'm curious what train_overfit.lst and test_overfit.lst mean in the preprocessed dataset that can be downloaded under "Dataset for Learning-based Reconstruction".
I cannot find any clue in scripts/preprocess_shapenet.py.
Thank you
Thanks for sharing this great work!
I am a little confused about the rasterize function. It seems that the weight is the product of the absolute distances, and the value on the grid is the weighted sum over all relevant points. Does the grid value need to be normalized? I ask because when I rasterize the point parameters onto the grid, the grid values increase as the number of sampled points increases. But when I normalize the grid values, marching cubes throws an error. Do you have any advice, or is there some required relationship between the number of sampled points and the grid size?
If we set the boundary value to 0.5 at surface points and keep scaling,
will the grid values converge to 0 (inside) or 1 (outside)?
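Regarding the normalization question: one way to keep the grid values independent of the point count is to also scatter the weights themselves into a second grid and divide, turning the weighted sum into a weighted average. A toy 1D sketch with nearest-cell weights and made-up names, not the repo's implementation:

```python
import numpy as np

def rasterize_normalized(points, values, res, eps=1e-8):
    """Rasterize 1D points in [0, 1) onto a grid of `res` cells, returning
    the weighted average instead of the raw weighted sum, so the result
    no longer grows with the number of sampled points."""
    num = np.zeros(res)                    # accumulated values
    den = np.zeros(res)                    # accumulated weights (here: counts)
    idx = np.clip((points * res).astype(int), 0, res - 1)
    np.add.at(num, idx, values)
    np.add.at(den, idx, 1.0)
    return num / np.maximum(den, eps)      # empty cells stay at 0
```

With trilinear weights the same idea applies: scatter the per-corner weights into `den` instead of 1.0. Note that empty cells defaulting to 0 can confuse marching cubes if 0 is not a sensible "outside" value for your field.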
Hi Peng,
for learning-based 3D reconstruction from point clouds, how long does it take to train (400,000 epochs)?
Hello, I want to download the demo data, but I failed to establish an SSL connection.
Could you please also upload a copy to Google Drive?
Thanks!
wget https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/data/demo.zip
--2021-11-10 10:33:21-- https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/data/demo.zip
Proxy tunneling failed: Bad Gateway
Unable to establish SSL connection.
Trying to install on Ubuntu 22.04, the conda install fails at the "installing pip dependencies" stage:
Installing pip dependencies: / Ran pip subprocess with arguments:
['/home/creynolds/anaconda3/envs/sap/bin/python', '-m', 'pip', 'install', '-U', '-r', '/home/creynolds/shape_as_points/condaenv.r7d4ldjp.requirements.txt']
Pip subprocess output:
...
Building wheels for collected packages: plyfile, ipdb
Building wheel for plyfile (setup.py): started
Building wheel for plyfile (setup.py): finished with status 'done'
Created wheel for plyfile: filename=plyfile-0.7-py3-none-any.whl size=8236 sha256=83fe451234965bd0827ee5f3585ddc48cf2a71f4e82d56e9bde1b1ba2ce0123e
Stored in directory: /home/creynolds/.cache/pip/wheels/ed/f2/18/3a13a4be6937b1a201cdb5a99879b034fb60ae18002d28abb5
Building wheel for ipdb (setup.py): started
Building wheel for ipdb (setup.py): finished with status 'done'
Created wheel for ipdb: filename=ipdb-0.13.7-py3-none-any.whl size=11461 sha256=24cc2accca4705b8e9272c840c1358c128dce9989616a0da413ed989279fea2a
Stored in directory: /home/creynolds/.cache/pip/wheels/b3/e6/33/23ed5c0ce0654cd1426587c22918c3854c5e2583473e61538b
Successfully built plyfile ipdb
Installing collected packages: wcwidth, pytz, python-mnist, pure-eval, ptyprocess, pickleshare, fastjsonschema, executing, dash-table, dash-html-components, dash-core-components, backcall, av, addict, widgetsnbextension, traitlets, tornado, threadpoolctl, tenacity, pyzmq, pyyaml, pyrsistent, pygments, psutil, prompt-toolkit, pkgutil-resolve-name, pexpect, parso, numpy, networkx, nest-asyncio, MarkupSafe, jupyterlab-widgets, joblib, itsdangerous, importlib-resources, entrypoints, decorator, debugpy, configargparse, asttokens, Werkzeug, tifffile, stack-data, PyWavelets, pyquaternion, pykdtree, plyfile, plotly, pandas, opencv-python, matplotlib-inline, jupyter_core, jsonschema, Jinja2, jedi, imageio, comm, scikit-learn, scikit-image, nbformat, jupyter-client, ipython, Flask, ipykernel, ipdb, dash, ipywidgets, open3d
Attempting uninstall: tornado
Found existing installation: tornado 6.1
Uninstalling tornado-6.1:
Successfully uninstalled tornado-6.1
Attempting uninstall: pyyaml
Found existing installation: PyYAML 5.3.1
Pip subprocess error:
ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
failed
CondaEnvException: Pip failed
Furthermore, attempting to install PyTorch3D as per the instructions gives:
ERROR: Could not find a version that satisfies the requirement pytorch3d (from versions: none)
ERROR: No matching distribution found for pytorch3d
Could you please advise on how to proceed?
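For anyone hitting the same two errors, here are workarounds I believe apply, though I have not verified them on this exact setup: the PyYAML message is pip's well-known inability to uninstall distutils-installed packages, and the pytorch3d message usually means no prebuilt wheel exists for the current Python/PyTorch/CUDA combination.

```shell
# PyYAML: tell pip to skip the uninstall step instead of attempting it
pip install --ignore-installed PyYAML

# pytorch3d: prebuilt wheels only exist for specific Python/CUDA/PyTorch
# combinations; building from source is the general fallback
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```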
Hello, great paper.
I have successfully managed to run inference on this model via the Optimization based Model and it has produced some decent results but not quite what the paper shows.
What are the steps I need to run inference via the learning based model? I tried replacing the configs in the demo with my own pointcloud but it gave me this error:
<torch.utils.data.dataloader.DataLoader object at 0x7fdc75c91430>
Loading model...
https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/models/ours_outlier_7x.pt
=> Loading checkpoint from url...
Generating...
0%| | 0/2 [00:00<?, ?it/s]WARNING - 2023-01-16 21:08:42,915 - core - Error occured when loading field gt_points of model 0
0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "generate.py", line 232, in <module>
main()
File "generate.py", line 104, in main
for it, data in enumerate(tqdm(test_loader)):
File "/opt/conda/envs/sap/lib/python3.8/site-packages/tqdm/std.py", line 1167, in __iter__
for obj in iterable:
File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 86, in default_collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
[mesh-f71edd87-064e-4cfc-ab1b-9.zip](https://github.com/autonomousvision/shape_as_points/files/10429561/mesh-f71edd87-064e-4cfc-ab1b-9.zip)
Thanks.
This is the npz I want to test; it was generated by Point-E, and the Point-E paper says applying SAP could be a next step.
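The traceback means the dataset returned None for that sample (hence the "Error occured when loading field gt_points" warning), and torch's default_collate cannot stack None. A generic workaround, sketched here rather than taken from this repo, is a collate function that filters out failed samples; note it hides the missing gt_points field rather than fixing it:

```python
def collate_remove_none(batch, collate=None):
    """Drop samples the dataset failed to load (returned as None) before
    handing the batch to the collate function. Pass this to DataLoader
    via collate_fn=collate_remove_none."""
    batch = [sample for sample in batch if sample is not None]
    if not batch:
        raise ValueError("every sample in this batch failed to load")
    if collate is None:  # default to torch's collate when available
        from torch.utils.data.dataloader import default_collate as collate
    return collate(batch)
```

Usage would look like `DataLoader(dataset, batch_size=2, collate_fn=collate_remove_none)`. If the None is nested inside a sample dict rather than replacing the whole sample, this will not help, and the real fix is to supply (or disable loading of) the gt_points field for custom point clouds.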
I use SAP to reconstruct a mesh from depth-map input, but the result is somewhat weird. It seems that SAP tries to reconstruct a closed shape. For point clouds extracted from a depth map, which are oriented toward only one side, SAP additionally generates some spurious faces. Is there any way to remove these faces?
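One generic post-processing option (not part of this repo) is to cull faces that have no input points nearby: the Poisson solve produces a watertight surface, so the faces that close off a single-view scan sit far from any observed depth point. A minimal sketch with SciPy, assuming the mesh and the input cloud share the same coordinate frame:

```python
import numpy as np
from scipy.spatial import cKDTree

def prune_far_faces(vertices, faces, input_points, max_dist):
    """Drop faces whose centroid lies farther than max_dist from every
    input point, keeping only geometry supported by the observations."""
    centroids = vertices[faces].mean(axis=1)           # (F, 3) face centers
    dists, _ = cKDTree(input_points).query(centroids)  # nearest observed point
    return faces[dists <= max_dist]
```

`max_dist` has to be tuned to the sampling density; too small a value also removes valid but sparsely observed regions, and the pruned mesh is no longer watertight by construction.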
Hi Songyou,
Thanks again for sharing the code for your excellent work!
It helps a lot!
I am trying to implement optimization-based multi-view images reconstruction.
I used your PixelNeRFDTUDataset to load DTU. However, I found that a function called uv_creation is missing, and I am kind of stuck here.
Could you please kindly provide your code for that function?
Thanks a lot!
Hello,
I'm trying to train your learning-based model, and according to your noise_large (or noise_small) config yaml files, the model was trained for 400,000 epochs.
However, even using two fairly powerful GPUs (Quadro RTX 8000), training runs at about 8 epochs per hour. Is this right?
Also, could you tell me the difference between the two files, ours.yaml and default.yaml?
In section Installation
"
Now, you can install PyTorch3D 0.6.0 from the official instruction as follows
pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu102_pyt190/download.html
"
but I can only get pytorch3d 0.0.1 to 0.3.0 from the above address. I want to check: should we install pytorch3d 0.3.0 with the above command, or manually install pytorch3d 0.6.0 following the official instructions?
P.S. I tried pytorch3d 0.3.0 but it ends up with
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory
Thanks!
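In case it helps: the version triple in the wheel index URL (py38_cu102_pyt190 = Python 3.8, CUDA 10.2, PyTorch 1.9.0) must match your environment exactly, otherwise pip falls back to whatever old source releases it can find, which would explain both the 0.3.0 result and the libcudart.so.10.1 error. A quick check plus a source-build fallback for 0.6.0 (the git-tag install follows PyTorch3D's official install notes; I have not tested it against this repo):

```shell
# Confirm the environment matches the wheel index you are pointing at
python -c "import sys, torch; print(sys.version_info[:2], torch.__version__, torch.version.cuda)"

# Fallback: build pytorch3d 0.6.0 from source (requires a CUDA toolkit
# matching your PyTorch build)
pip install "git+https://github.com/facebookresearch/pytorch3d.git@v0.6.0"
```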
Hi,
Thanks for the awesome work! I'm trying to generate meshes from my own point clouds (generated with DiT3D, also trained on ShapeNet). I already did the preprocessing so that the format is similar to the one used in this repo: (i) rotating the pose to right-side heading, (ii) sampling 3K points, (iii) normalizing into [-0.5, 0.5]. See Figure 1 below for some mesh results (right side) from the point cloud inputs (left side). As Figure 1 shows, the results are quite unexpected. I already tried the provided pre-trained models: (a) small noise, (b) large noise, and (c) the outlier models, but the results are still quite unexpected. Here are some questions:
Many thanks! :)
Figure 1 (these samples are right-side headings. Here I rotate only for best visualization purposes)
Hey,
Thank you for publishing the code, and congratulations on the NeurIPS oral!
Could you please provide some comparison of the optimization-based vs. learning-based results, or at least some insight into which works better under what circumstances?
(I looked through the paper and SM and couldn't find this, but if it exists and I've missed it, could you please point me in the right direction?)
Thanks,
Eliahu
ERROR: Could not find a version that satisfies the requirement igl (from versions: none)
ERROR: No matching distribution found for igl
igl cannot be installed by pip. How can I install igl?
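For reference, the PyPI distribution that provides the `igl` Python module is published under a different name, which is why `pip install igl` finds no versions; either of the following should work:

```shell
# pip: the libigl Python bindings are published on PyPI as "libigl"
pip install libigl

# conda: the conda-forge channel carries a package named "igl"
conda install -c conda-forge igl
```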