shape_as_points's People

Contributors

pengsongyou

shape_as_points's Issues

shapenet psr gt

Hello

For various reasons, I would like to preprocess the ShapeNet dataset myself.

While running process_shapenet.py, I found that processing stalls at the point_rasterize function.

How long does it take to generate the PSR ground truth?

Do we need GPUs for this step?

Segmentation fault (core dumped)

I get this error when setting the parameter o3d_show to True in the yaml file.

aga/res_32 --train:lr_pcl 0.002000 --data:object_id -1
Changing model:grid_res ---- 256 to 32
Changing model:psr_sigma ---- 2 to 2
Changing train:input_mesh ---- to None
Changing train:total_epochs ---- 4000 to 1000
Changing train:out_dir ---- /dsk1/shape_as_points/after/aga to /dsk1/shape_as_points/after/aga/res_32
Changing train:lr_pcl ---- 2e-2 to 0.002000
Changing data:object_id ---- 0 to -1
/dsk1/shape_as_points/after/aga/res_32
Segmentation fault (core dumped)

I couldn't find a solution to this; how can it be fixed?

Pretrained model for the optimization-based version

Hi Songyou,

Thanks again for sharing the code for your excellent work!
Could you please provide the pretrained models for the optimization-based version reported in your paper (covering the three datasets: dfaust, thingi, and deep_geometric_prior_data)?

Looking forward to your reply. :)

Best,
Runsong

Input all Points for Optimization-based method?

Hi Songyou,

Thanks for sharing your code for the excellent work!
I am curious about the optimization-based part, and I have two questions. Can you help me? Thanks a lot in advance. :)

  • Do you use the whole point cloud? For example, the "daratech" shape (SRB dataset) contains 71,265 points; do you feed all of them to your method, or downsample first?
  • Could you provide the parameters for Poisson Surface Reconstruction (e.g. depth) and for the minimum spanning tree method, if possible?

Looking forward to your reply!

Best,
Runsong

The question is about the model and training.

Good afternoon. I have a few questions. I am building a point-cloud processing pipeline for my master's thesis at university, and I would be very glad if you could answer them:

  1. From your article, I did not quite understand what exactly is trained in the model. Could you clarify which parameters are improved during training?
  2. Do I understand correctly that you trained your model on all three datasets? I am trying to optimize a point cloud of an airplane, and the result from your model is not as high-quality as I would like. I want to train the model on 3D airplane shapes, BUT the ShapeNet dataset you trained on already contains airplanes, and despite this the result is still not very good (the result is attached below). Does it make sense to retrain your model ONLY on airplanes to improve the result?
     The original airplane model contains roughly 60,000 points.

(images attached)

bad_alloc

When I try to run the program without any modification, an error is triggered at optim.py:11 ("from src.optimization import Trainer"):
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc

question about interpolation method

Thanks for sharing this great work! It seems that the interpolation used in the point_rasterize and grid_interp functions is based on the absolute distance from the point to the adjacent grid points. Have you ever tried other interpolation methods, such as trilinear interpolation? Do different interpolation methods affect the results?
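For context: if the weight for each of the eight neighboring grid nodes is the product of (1 - |distance|) along each axis, that scheme is exactly trilinear interpolation. A minimal numpy sketch of such a gather (my own toy version, not the repo's grid_interp):

```python
import numpy as np

def trilinear_interp(grid, points):
    """Sample a 3D grid at continuous point locations.

    grid:   (R, R, R) array of values at integer grid coordinates.
    points: (N, 3) array in grid coordinates, inside [0, R - 1).
    Each point gathers from its 8 surrounding grid nodes, weighted by
    the product of (1 - |distance|) along each axis.
    """
    lo = np.floor(points).astype(int)   # lower corner of the enclosing cell
    frac = points - lo                  # fractional position inside the cell
    out = np.zeros(len(points))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                wx = frac[:, 0] if dx else 1 - frac[:, 0]
                wy = frac[:, 1] if dy else 1 - frac[:, 1]
                wz = frac[:, 2] if dz else 1 - frac[:, 2]
                out += wx * wy * wz * grid[lo[:, 0] + dx,
                                           lo[:, 1] + dy,
                                           lo[:, 2] + dz]
    return out
```

On a grid holding a linear function (e.g. grid[i, j, k] = i), this reproduces the function exactly, which is a quick sanity check for the weights.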

Mesh Evaluation for the Optimization-based part

Hi Songyou,

Thanks again for sharing the code for your excellent work!
I found that the script "eval_meshes.py" is intended for the learning-based method, but I want to evaluate the meshes produced by the optimization-based part. Could you please give some suggestions for this?

Looking forward to your reply. :)

Best,
Runsong

question about rasterize function

Thanks for sharing this great work!
I am a little confused about the rasterize function. It seems that the weight is the product of the absolute distances, and the value on the grid is the weighted sum over all relevant points. Does the grid value need to be normalized? I have found that when I rasterize point attributes onto the grid, the grid values increase as the number of sampled points increases. But when I normalize the grid values, marching cubes fails. Do you have any advice, or is there a required relationship between the number of sampled points and the grid size?
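A small numpy sketch (my own toy version, not the repo's point_rasterize) illustrating the observation above: each point's eight trilinear weights sum to one, so the total grid value grows linearly with the number of points unless you divide by the point count:

```python
import numpy as np

def rasterize(points, res):
    """Scatter unit mass from each point onto a res^3 grid.

    points: (N, 3) array in grid coordinates, inside [0, res - 2].
    Each point splats onto its 8 surrounding grid nodes with weights
    that are the product of (1 - |distance|) per axis; the 8 weights
    of one point sum to exactly 1.
    """
    grid = np.zeros((res, res, res))
    lo = np.floor(points).astype(int)
    frac = points - lo
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                wx = frac[:, 0] if dx else 1 - frac[:, 0]
                wy = frac[:, 1] if dy else 1 - frac[:, 1]
                wz = frac[:, 2] if dz else 1 - frac[:, 2]
                # np.add.at accumulates correctly even when indices repeat
                np.add.at(grid,
                          (lo[:, 0] + dx, lo[:, 1] + dy, lo[:, 2] + dz),
                          wx * wy * wz)
    return grid

pts = np.random.default_rng(0).random((1000, 3)) * 30.0  # inside a 32^3 grid
grid = rasterize(pts, 32)
# grid.sum() equals len(pts) up to float error, so the total magnitude
# grows linearly with the sample count; dividing by len(pts) turns it
# into a density that is independent of how many points were sampled.
```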

About psr_grid value

If we set the boundary value to 0.5 at surface points and keep scaling,

will the grid values converge to 0 (inside) or 1 (outside)?

The datasets can't be fetched.

Hello, I want to download the demo data, but I failed to establish an SSL connection.
Could you please also upload a copy to Google Drive?
Thanks!

wget https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/data/demo.zip

--2021-11-10 10:33:21--  https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/data/demo.zip
Proxy tunneling failed: Bad Gateway
Unable to establish SSL connection.

The conda installation instructions seem to be in a broken state.

Trying to install on Ubuntu 22.04, the conda environment creation fails at the "Installing pip dependencies" stage:

Installing pip dependencies: / Ran pip subprocess with arguments:
['/home/creynolds/anaconda3/envs/sap/bin/python', '-m', 'pip', 'install', '-U', '-r', '/home/creynolds/shape_as_points/condaenv.r7d4ldjp.requirements.txt']
Pip subprocess output:

...

Building wheels for collected packages: plyfile, ipdb
  Building wheel for plyfile (setup.py): started
  Building wheel for plyfile (setup.py): finished with status 'done'
  Created wheel for plyfile: filename=plyfile-0.7-py3-none-any.whl size=8236 sha256=83fe451234965bd0827ee5f3585ddc48cf2a71f4e82d56e9bde1b1ba2ce0123e
  Stored in directory: /home/creynolds/.cache/pip/wheels/ed/f2/18/3a13a4be6937b1a201cdb5a99879b034fb60ae18002d28abb5
  Building wheel for ipdb (setup.py): started
  Building wheel for ipdb (setup.py): finished with status 'done'
  Created wheel for ipdb: filename=ipdb-0.13.7-py3-none-any.whl size=11461 sha256=24cc2accca4705b8e9272c840c1358c128dce9989616a0da413ed989279fea2a
  Stored in directory: /home/creynolds/.cache/pip/wheels/b3/e6/33/23ed5c0ce0654cd1426587c22918c3854c5e2583473e61538b
Successfully built plyfile ipdb
Installing collected packages: wcwidth, pytz, python-mnist, pure-eval, ptyprocess, pickleshare, fastjsonschema, executing, dash-table, dash-html-components, dash-core-components, backcall, av, addict, widgetsnbextension, traitlets, tornado, threadpoolctl, tenacity, pyzmq, pyyaml, pyrsistent, pygments, psutil, prompt-toolkit, pkgutil-resolve-name, pexpect, parso, numpy, networkx, nest-asyncio, MarkupSafe, jupyterlab-widgets, joblib, itsdangerous, importlib-resources, entrypoints, decorator, debugpy, configargparse, asttokens, Werkzeug, tifffile, stack-data, PyWavelets, pyquaternion, pykdtree, plyfile, plotly, pandas, opencv-python, matplotlib-inline, jupyter_core, jsonschema, Jinja2, jedi, imageio, comm, scikit-learn, scikit-image, nbformat, jupyter-client, ipython, Flask, ipykernel, ipdb, dash, ipywidgets, open3d
  Attempting uninstall: tornado
    Found existing installation: tornado 6.1
    Uninstalling tornado-6.1:
      Successfully uninstalled tornado-6.1
  Attempting uninstall: pyyaml
    Found existing installation: PyYAML 5.3.1

Pip subprocess error:
ERROR: Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

failed

CondaEnvException: Pip failed

Furthermore, attempting to install PyTorch3D as per the instructions gives:

ERROR: Could not find a version that satisfies the requirement pytorch3d (from versions: none)
ERROR: No matching distribution found for pytorch3d

Could you please advise on how to proceed?

How to run inference on the learning-based model

Hello, great paper.

I have successfully run inference with the optimization-based model, and it produced some decent results, though not quite what the paper shows.

What steps do I need to run inference with the learning-based model? I tried replacing the configs in the demo with my own point cloud, but it gave me this error:

<torch.utils.data.dataloader.DataLoader object at 0x7fdc75c91430>
Loading model...
https://s3.eu-central-1.amazonaws.com/avg-projects/shape_as_points/models/ours_outlier_7x.pt
=> Loading checkpoint from url...
Generating...
  0%|                                                                                                                                                                   | 0/2 [00:00<?, ?it/s]WARNING - 2023-01-16 21:08:42,915 - core - Error occured when loading field gt_points of model 0
  0%|                                                                                                                                                                   | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "generate.py", line 232, in <module>
    main()
  File "generate.py", line 104, in main
    for it, data in enumerate(tqdm(test_loader)):
  File "/opt/conda/envs/sap/lib/python3.8/site-packages/tqdm/std.py", line 1167, in __iter__
    for obj in iterable:
  File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/opt/conda/envs/sap/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 86, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>

Thanks.

This is the npz I want to test. It was generated by Point-E, and that paper suggests applying SAP as a possible next step.

mesh-f71edd87-064e-4cfc-ab1b-9.zip

weird results for depth map input

I use SAP to reconstruct a mesh from depth-map input, but the result is somewhat weird. It seems that SAP tries to reconstruct a closed shape: for the one-sided point clouds extracted from a depth map, SAP additionally generates some spurious disconnected faces. Is there any way to remove these faces?
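Regarding removing the extra faces: one common post-processing step (not something this repo provides, as far as I know) is to keep only the largest vertex-connected component of the mesh. A self-contained union-find sketch:

```python
import numpy as np

def largest_component(faces):
    """Boolean mask selecting faces of the largest vertex-connected component.

    faces: (F, 3) integer array of triangle vertex indices.
    """
    parent = list(range(faces.max() + 1))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for f in faces:                          # union the vertices of each face
        root = find(f[0])
        for v in f[1:]:
            parent[find(v)] = root

    roots = np.array([find(v) for v in faces[:, 0]])
    labels, counts = np.unique(roots, return_counts=True)
    return roots == labels[np.argmax(counts)]
```

Libraries such as trimesh or open3d offer equivalents (splitting a mesh into connected components and keeping the largest one).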

Load DTU dataset. uv_creation is missing

Hi Songyou,

Thanks again for sharing the code for your excellent work!
It helps a lot!

I am trying to implement optimization-based multi-view images reconstruction.
I used your PixelNeRFDTUDataset to load DTU. However, I found that a function called uv_creation is missing, and I am stuck here.
Could you please kindly provide your code for that function?

Thanks a lot!

training epochs

Hello,
I'm trying to train your learning-based model. According to your noise_large (or noise_small) yaml config files, you trained the model for 400,000 epochs.
However, on two fairly powerful GPUs (Quadro RTX 8000), the model trains at about 8 epochs per hour. Is this right?
Also, could you tell me the difference between the two files, ours.yaml and default.yaml?
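For reference, putting the two figures above together (400,000 epochs at roughly 8 epochs per hour) gives an implausibly long wall-clock time, which is why I suspect a mismatch somewhere:

```python
epochs_total = 400_000      # from the noise_large / noise_small yaml configs
epochs_per_hour = 8         # observed throughput on 2x Quadro RTX 8000

hours = epochs_total / epochs_per_hour
days = hours / 24
years = days / 365
print(hours, days, years)   # 50000.0 hours ≈ 2083 days ≈ 5.7 years
```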

Failing to set up pytorch3d 0.6.0: a typo?

In section Installation
"
Now, you can install PyTorch3D 0.6.0 from the official instruction as follows
pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu102_pyt190/download.html
"
but I can only get pytorch3d 0.0.1 to 0.3.0 from the above address. I want to check: should we install pytorch3d 0.3.0 with the above command, or manually install pytorch3d 0.6.0 following the official instructions?

P.S. I tried pytorch3d 0.3.0, but it fails with
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory

Thanks!

Generate from Our Own Point cloud, need new training or finetuning?

Hi,

Thanks for the awesome work! I'm trying to generate meshes from my own point clouds (generated with DiT3D, also trained on ShapeNet). I already did the preprocessing so that the format matches the one used in this repo: (i) posing the shapes right-side heading, (ii) sampling 3K points, (iii) normalizing into [-0.5, 0.5]. See Figure 1 below for some mesh results (right side) from the point cloud inputs (left side). As Figure 1 shows, the results are quite unexpected. I already tried the provided pre-trained models: (a) small noise, (b) large noise, and (c) the outlier models, but the results are still quite unexpected. Here are some questions:
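For reference, steps (ii) and (iii) can be sketched as follows (my own minimal numpy version; the repo's actual preprocessing may differ in details such as the sampling strategy):

```python
import numpy as np

def preprocess(points, n_points=3000):
    """Downsample to n_points and normalize coordinates into [-0.5, 0.5].

    points: (N, 3) array with N >= n_points and a non-degenerate
    bounding box.
    """
    idx = np.random.choice(len(points), n_points, replace=False)
    pts = points[idx]
    # Center on the bounding-box midpoint, then scale so the longest
    # axis has unit length; all coordinates then lie in [-0.5, 0.5].
    lo, hi = pts.min(0), pts.max(0)
    return (pts - (lo + hi) / 2) / (hi - lo).max()
```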

  1. What are the possible reasons the mesh results below are not good?
  2. Any direction on how to get good mesh shapes using my point clouds below as input?
  3. If new training or fine-tuning is needed to get good meshes for my own point clouds, how do I do it if I only have point-cloud data (x, y, z), without normals or mesh data?
  4. Is the method in this repo designed to have one pre-trained model for all classes, or one pre-trained model per class? Including the provided pre-trained models.
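On question 3: normals can be estimated from the coordinates alone, for example via PCA over each point's k nearest neighbors; this is a standard technique, not specific to this repo. A brute-force numpy sketch:

```python
import numpy as np

def estimate_normals(pts, k=16):
    """Estimate per-point normals via PCA of k-nearest-neighbor patches.

    The normal is the eigenvector of the local 3x3 covariance matrix
    with the smallest eigenvalue. Brute-force O(N^2) neighbor search,
    fine for small clouds; use a k-d tree for large ones.
    """
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # (N, N) distances
    knn = np.argsort(d2, axis=1)[:, :k]                      # neighbor indices
    normals = np.empty_like(pts)
    for i, nbrs in enumerate(knn):
        patch = pts[nbrs] - pts[nbrs].mean(0)
        # eigh returns eigenvalues in ascending order, so column 0 is
        # the direction of least variance, i.e. the surface normal
        _, vecs = np.linalg.eigh(patch.T @ patch)
        normals[i] = vecs[:, 0]
    return normals
```

The sign of each normal remains ambiguous; consistent orientation usually needs an extra propagation step (or a known viewpoint, as with depth maps).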

Many thanks! :)

Figure 1 (these samples are right-side headings; rotated here only for visualization purposes)
(image attached)

Optimization-based vs. Learning-based results

Hey,
Thank you for publishing the code, and congratulations on the NeurIPS oral!
Could you please provide some comparison of the optimization-based vs. learning-based results, or at least some insight into which works better under what circumstances?
(I looked through the paper and the supplementary material and couldn't find this, but if it exists and I've missed it, could you please point me in the right direction?)

Thanks,
Eliahu
