czq142857 / bsp-net-pytorch
PyTorch 1.2 implementation of BSP-NET.
License: Other
Hi,
I am interested in testing your code with RGB images of different resolutions. Is it possible to use "modelSVR.py" with RGB images as well, and what changes would need to be made?
As for the image resolution, is it necessary to add further convolution levels? A sketch of what I have in mind is below.
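To make the question concrete, here is a minimal sketch of what I mean, assuming a 256x256 RGB input and one extra stride-2 convolution; the channel counts, layer names, and input size are my own guesses, not the actual encoder in modelSVR.py.

```python
import torch
import torch.nn as nn

class RGBEncoderSketch(nn.Module):
    """Hypothetical encoder for 256x256 RGB input (not the repo's encoder).

    Assumptions: in_channels=3 instead of 1 for RGB, and one extra stride-2
    conv so a 256x256 image reaches the same spatial size that a 128x128
    grayscale image would after the original stack."""

    def __init__(self, in_channels=3, ef_dim=32, z_dim=256):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, ef_dim, 4, stride=2, padding=1),    # 256 -> 128 (extra level)
            nn.LeakyReLU(0.02, inplace=True),
            nn.Conv2d(ef_dim, ef_dim, 4, stride=2, padding=1),         # 128 -> 64
            nn.LeakyReLU(0.02, inplace=True),
            nn.Conv2d(ef_dim, ef_dim * 2, 4, stride=2, padding=1),     # 64 -> 32
            nn.LeakyReLU(0.02, inplace=True),
            nn.Conv2d(ef_dim * 2, ef_dim * 4, 4, stride=2, padding=1), # 32 -> 16
            nn.LeakyReLU(0.02, inplace=True),
            nn.Conv2d(ef_dim * 4, ef_dim * 8, 4, stride=2, padding=1), # 16 -> 8
            nn.LeakyReLU(0.02, inplace=True),
            nn.Conv2d(ef_dim * 8, z_dim, 8, stride=1, padding=0),      # 8 -> 1
            nn.Sigmoid(),
        )

    def forward(self, img):
        return self.layers(img).view(img.shape[0], -1)
```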
Hi Zhiqin,
I really liked your work and thanks for the awesome code.
I'm trying to expand BSPNet to point cloud, but training becomes unstable and sensitive to changes in "p_dim", "c_dim" and torch random seeds.
The first training iteration sometimes generates an empty or solid shape and kills all the gradients, probably due to the clamping operations in the generator (a toy illustration of what I mean is below).
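This is not the repo's code, just a minimal example of the behaviour I am referring to, assuming a hard clamp on the per-point output:

```python
import torch

# Toy illustration (not the actual generator): once a value saturates at the
# clamp boundary, its gradient is exactly zero, so an all-empty or all-solid
# first prediction stops producing learning signal.
x = torch.tensor([-0.5, 0.3, 1.7], requires_grad=True)
y = torch.clamp(x, min=0.0, max=1.0)
y.sum().backward()
print(x.grad)  # tensor([0., 1., 0.]) -- saturated entries receive no gradient
```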
Don't know if you faced similar issues before, any suggestions are really appreciated.
Best,
Daxuan
Hi Zhiqin,
Thanks for sharing the code. Have you also tried reconstructing meshes directly from point clouds?
Hi!
Thank you for sharing the code. What is the rough time needed for training on all category data?
Thank you for your code. I have seen in other issues that the pretrained model was trained without the overlap loss, and that the generator class in modelSVR.py corresponds to Phase 1.
So, to produce better results for shape reconstruction, should I retrain the AE model on phase 0 through phase 3, change the generator class in modelSVR.py accordingly, and then retrain the SVR model?
Hi,
Thank you for the code. I trained the AE model using train_ae.sh and using the prepared Shapenet dataset that you have provided. I did not make any changes to the code, except pointing to the correct data_dir in main.py. After training with size 16 and 32, I looked at the sample files generated during training and testing. The generated sample .ply files do not have any vertices, but all of them just have the header as shown below. Is this the expected output? If not, could you please tell me how I can fix the problem and get the correct sample output with vertex info? The training logs are also shown below.
Also, can the 4 progressive training steps (size 16 and 32 with 8M iterations and size 64 with 16M iterations) be run in parallel or do they have to be trained sequentially as shown in train_ae.sh?
Thank you.
---------sample ply file generated during training and testing----------
ply
format ascii 1.0
element vertex 0
property float x
property float y
property float z
element face 0
property list uchar int vertex_index
end_header
32 Epoch: [ 0/228] time: 171.8615, loss_sp: 0.252307, loss_total: 0.259578
32 Epoch: [ 1/228] time: 344.7302, loss_sp: 0.252319, loss_total: 0.259573
32 Epoch: [ 2/228] time: 517.7831, loss_sp: 0.252298, loss_total: 0.259548
32 Epoch: [ 3/228] time: 690.9526, loss_sp: 0.252300, loss_total: 0.259559
32 Epoch: [ 4/228] time: 864.0516, loss_sp: 0.252307, loss_total: 0.259560
Hi,
Thanks for sharing this code. What do you use to normalize the meshes prior to making voxels? Did you write your own Python script like this one (http://shapenet.cs.stanford.edu/shapenet/obj-zip/ShapeNetCore.v2-old/shapenet/scripts/shapenetcorev2/normalize.py), or can you suggest other tools that could make this faster? For reference, my own normalization is sketched below.
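This is what I am currently doing with trimesh; the bounding-box centering and unit-cube scaling are my assumptions, not necessarily what the BSP-Net data preparation uses:

```python
import numpy as np
import trimesh

def normalize_mesh(path_in, path_out):
    """Center a mesh on its bounding-box center and scale its longest
    bounding-box edge to 1, so it fits in a unit cube.
    (My own normalization, not necessarily the one used for the dataset.)"""
    mesh = trimesh.load(path_in, force='mesh')
    bounds = mesh.bounds                       # (2, 3): min and max corners
    center = (bounds[0] + bounds[1]) / 2.0
    scale = float(np.max(bounds[1] - bounds[0]))
    mesh.apply_translation(-center)
    mesh.apply_scale(1.0 / scale)
    mesh.export(path_out)

# normalize_mesh('model.obj', 'model_normalized.obj')
```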
Hi,
Can I train the SVR with 3D models and renderings that I created myself, while still using the pre-trained autoencoder you provided?
In this case my models would always be from the same classes used to train the autoencoder (chairs, tables, planes, ...).
Hi! Thank you so much for this wonderful work.
I'm interested in applying textures to the output meshes, as shown in the paper. The textures seem continuous across different convexes. May I ask how you do the UV mapping to apply the textures? Thank you!
No such file or directory: 'model_checkpoint_path: "BSP_AE.model-228"'
Thank you for your highly interesting work and for releasing the code. I have a few questions.
I see in the code that there is a training loss not described in the paper, the one where you encourage the values in T to move towards 0 or 1 (I sketch my reading of this term after my questions below). How does this loss perform? If using phase 3, should it come after phase 0 training, or should we start from scratch with phase 3?
Also, I saw that the implementation of the overlap loss in the code seems different from what is described in the paper. What is the reason for this difference?
Finally, are the pretrained network weights you provided trained with or without the overlap loss?
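For reference, this is how I currently read that extra term; the 0.01 threshold and the exact form are my paraphrase, not necessarily what the code does, so please correct me if I misread it:

```python
import torch

def t_binarization_loss(T, threshold=0.01):
    """My paraphrase of the 'push T towards 0 or 1' term (threshold is a guess):
    entries below the threshold are pulled towards 0, the rest towards 1."""
    near_zero = (T < threshold).float()
    return torch.sum(near_zero * torch.abs(T) + (1.0 - near_zero) * torch.abs(T - 1.0))
```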
Hi,
Thanks for sharing your good research.
In the segmentation experiment, you said that for each point sample, the label is assigned by measuring the distance to nearby primitives. Is there code for this in your repo? If not, my own guess at the procedure is sketched below.
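This is roughly what I would try myself, not necessarily your procedure: evaluate each convex as the maximum of its planes' values at the point (sign convention assumed) and label the point with the convex that gives the smallest value.

```python
import numpy as np

def label_points_by_convex(points, planes, T, used_convexes):
    """Sketch of per-point labeling (my own guess, not the repo's procedure).

    points:        (N, 3) point samples on the shape surface
    planes:        (P, 4) plane parameters, h_j(x) = a . x + d (sign convention assumed)
    T:             (P, C) binary plane-to-convex assignment matrix
    used_convexes: indices of convexes that actually appear in the output mesh
    """
    homog = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)  # (N, 4)
    h = homog @ planes.T                                                     # (N, P) plane values
    labels = np.zeros(points.shape[0], dtype=np.int32)
    best = np.full(points.shape[0], np.inf)
    for c in used_convexes:
        mask = T[:, c] > 0.5
        if not mask.any():
            continue
        # "distance" to the convex: the worst (largest) plane value among its planes
        dist = h[:, mask].max(axis=1)
        closer = dist < best
        labels[closer] = c
        best[closer] = dist[closer]
    return labels
```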
Hello, I'd like to ask a question. In utils.py, how do you sample point clouds from the mesh surface in the function sample_points_polygon?
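For comparison, this is my own understanding of how surface sampling is usually done: a generic area-weighted barycentric sampler for triangle meshes, not necessarily what sample_points_polygon actually does.

```python
import numpy as np

def sample_points_on_triangles(vertices, triangles, num_points):
    """Generic surface sampler (my sketch, not the actual sample_points_polygon):
    pick triangles with probability proportional to their area, then sample
    uniform barycentric coordinates inside each chosen triangle."""
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = np.random.choice(len(triangles), size=num_points, p=areas / areas.sum())
    u = np.random.rand(num_points, 1)
    v = np.random.rand(num_points, 1)
    flip = (u + v) > 1.0                      # reflect so the sample stays inside the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return v0[idx] + u * (v1[idx] - v0[idx]) + v * (v2[idx] - v0[idx])
```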
Hi,
Thank you for making the code available publicly. I am interested in using your pretrained model to reconstruct a 3D mesh from an RGB image (for testing). The BSP_SVR class in modelSVR.py has a method test_mesh_obj_material() that generates a 3D mesh with color/material in .obj format, which is the format I need. Can this method be used to reconstruct a mesh directly from an image input? The method requires pixel data (self.data_pixels) as input in the following line of code.
for t in range(config.start, min(len(self.data_pixels),config.end))
In your code, self.data_pixels is extracted from the hdf5 files. For new test images for which we don't have the hdf5 files, could you please let us know what kind of data processing we need to do to define self.data_pixels from an RGB image input? My current guess is sketched below.
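This is what I am doing at the moment; the single grayscale view, the uint8 range, and the 128x128 resolution are guesses on my part rather than something I verified against the hdf5 files:

```python
import numpy as np
from PIL import Image

def rgb_to_data_pixels(image_path, view_size=128):
    """My guess at preparing one view for self.data_pixels (not verified):
    grayscale, resized to the hdf5 image resolution, shaped (1, 1, H, W)."""
    img = Image.open(image_path).convert('L')       # RGB -> single-channel grayscale
    img = img.resize((view_size, view_size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.uint8)           # assuming uint8 like the hdf5 data
    return arr.reshape(1, 1, view_size, view_size)  # (num_shapes, num_views, H, W)
```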
thanks
Hi there!
I have been experimenting with the 2D version of your BSP_SVR model in a PyTorch implementation of mine. I have been able to reproduce the results on the toy dataset. However, when I switched to more complex binary masks (such as segmentation pipeline outputs), I'm noticing that the bsp_loss_sp loss initially converges and then after a few epochs, collapses to a constant value in the continuous phase. When this happens, I notice that the net_outs tend to collapse to a tensor of all zero values.
However, initializing the model from these weights and continuing to train in the discrete phase still seems to move in the expected direction, albeit with very slow convergence of the losses.
I've noticed that the model only works well with a very low learning rate (0.00002) in the 2D case. However, it appears the model takes a very long time to train using this learning rate on more complex binary masks. Do you have any suggestions or insights as to why the model collapses to all zero values (in the 2D case where binary masks are more complex than the toy dataset)? And on how this could be tackled?
Looking forward to your reply. Thank you.
Hi,
If I want to train the autoencoder from scratch with my own 3D models, how many models do I need?
Thanks in advance for your reply.
Hi ZhiQin!
Thanks for your work and for releasing the scripts.
I want to know how you prepare the voxel data. I notice that "data_voxels" and "coords" are needed for the 3D AE.
The "data_voxels" are the model voxel matrices and the "coords" are float coordinates of the voxels. The "coords" are split into 8 subsets, whose values are limited to [-0.4922, 0.4766]. After you get the point clouds from ShapeNet, how do you normalize the data, and how do you voxelize it after normalization? My current guess is sketched below.
Best,
hmax
Hi,
Thank you for the code. I am using my own data to train the AE model and have run into some problems. When I train at size 16, the training loss is normal, roughly between 0.001 and 0.002. But when I train at size 32, the loss is about 0.02, and at size 64 it is about 0.05. I want to ask whether these losses are normal. The batch size is 24 for all training runs.