
neuraludf's Introduction

NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies (CVPR 2023)

Introduction

We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering. Existing neural volume rendering methods are limited to objects with closed surfaces, since they adopt the Signed Distance Function (SDF) as the surface representation, which requires the target shape to be divided into inside and outside. In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation. Specifically, a new density function that correlates the property of UDF with the volume rendering scheme is introduced for robust optimization of the UDF fields. Experiments on the DTU and DeepFashion3D datasets show that our method not only enables high-quality reconstruction of non-closed shapes with complex topologies, but also achieves performance comparable to SDF-based methods on the reconstruction of closed surfaces.

Usage

Setup environment

Set up a conda environment with the right packages using:

conda env create -f conda_env.yml
conda activate neuraludf

We leverage MeshUDF to extract meshes from the learned UDF field; we thank the authors for their great work. To compile the custom marching cubes extension for your system, please run:

cd custom_mc
python setup.py build_ext --inplace
cd ..

Data Convention

Download the preprocessed DeepFashion3D dataset we use and the ground-truth (GT) point clouds. The DTU and DeepFashion3D data are organized as follows:

<case_name>
|-- cameras_xxx.npz    # camera parameters
|-- image
    |-- 000.png        # target image for each view
    |-- 001.png
    ...
|-- mask
    |-- 000.png        # target mask for each view (for the unmasked setting, set all pixels to 255)
    |-- 001.png
    ...

Here cameras_xxx.npz follows the data format of IDR, where world_mat_xx denotes the world-to-image projection matrix and scale_mat_xx denotes the normalization matrix.
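
For reference, here is a minimal sketch of how such a camera file can be read and decomposed into intrinsics and camera-to-world poses. It follows the IDR/NeuS convention (projection = world_mat @ scale_mat) and uses OpenCV; the exact loading code in this repository may differ.

import cv2
import numpy as np

def load_K_Rt_from_P(P):
    # Decompose a 3x4 projection matrix into a 4x4 intrinsics matrix and a camera-to-world pose.
    K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
    K = K / K[2, 2]

    intrinsics = np.eye(4)
    intrinsics[:3, :3] = K

    pose = np.eye(4, dtype=np.float32)
    pose[:3, :3] = R.transpose()
    pose[:3, 3] = (t[:3] / t[3])[:, 0]
    return intrinsics, pose

cameras = np.load('cameras_xxx.npz')  # replace with the camera file of your case
# Some camera files also store inverse matrices, so count only the plain world_mat_* keys.
n_views = len([k for k in cameras.files if k.startswith('world_mat_') and 'inv' not in k])
for i in range(n_views):
    world_mat = cameras['world_mat_%d' % i]   # world-to-image projection matrix
    scale_mat = cameras['scale_mat_%d' % i]   # normalization matrix
    P = (world_mat @ scale_mat)[:3, :4]
    intrinsics, pose = load_K_Rt_from_P(P)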

Running

  • On objects with closed surfaces (DTU)

The training has two stages. We apply a blending-based patch loss (as used in SparseNeuS) to further improve the reconstruction quality.

bash bashs/bash_dtu_blending.sh --gpu 0 --case scan118
bash bashs/bash_dtu_blending_ft.sh --gpu 0 --case scan118

  • On objects with open surfaces (DeepFashion3D)

If the initial sparse_weight is inappropriate, adjust it in the fine-tuning stage:

bash bashs/bash_garment_blending.sh --gpu 0 --case scan320 -s 0.001
bash bashs/bash_garment_blending_ft.sh --gpu 0 --case scan320 -s 0.01

  • Extract the surface from a trained model

python exp_runner_blending.py --mode validate_udf_mesh --conf <config_file> --case <case_name> --is_continue  # use the latest checkpoint

The corresponding mesh can be found in exp/<case_name>/<exp_name>/meshes/<iter_steps>.ply.
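
As a quick sanity check on the extracted mesh, a few lines like the following can be used (this assumes the trimesh package, which is not a dependency of this repository; the placeholders must be replaced with your case and experiment names):

import glob
import trimesh

# Pick the mesh extracted at the latest iteration.
mesh_paths = sorted(glob.glob('exp/<case_name>/<exp_name>/meshes/*.ply'))
mesh = trimesh.load(mesh_paths[-1])

print('vertices:  ', len(mesh.vertices))
print('faces:     ', len(mesh.faces))
print('watertight:', mesh.is_watertight)  # open surfaces (e.g. garments) are expected to be non-watertight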

Train NeuralUDF with your custom data

More information can be found in the preprocess_custom_data directory of NeuS.
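
As a rough illustration of what that preprocessing produces, a camera file in the expected format can be assembled roughly as follows. This is a sketch following the IDR/NeuS convention; it assumes you already have per-view intrinsics and world-to-camera extrinsics plus a sphere enclosing the object of interest, and the NeuS repository remains the authoritative reference.

import numpy as np

def build_camera_dict(Ks, W2Cs, center, radius):
    # Ks: list of 3x3 intrinsic matrices; W2Cs: list of 4x4 world-to-camera matrices.
    # center, radius: a sphere that encloses the object of interest.
    cam_dict = {}
    for i, (K, W2C) in enumerate(zip(Ks, W2Cs)):
        intrinsic = np.eye(4)
        intrinsic[:3, :3] = K
        world_mat = intrinsic @ W2C            # world-to-image projection matrix

        scale_mat = np.diag([radius, radius, radius, 1.0]).astype(np.float32)
        scale_mat[:3, 3] = center              # maps the unit sphere onto the region of interest

        cam_dict['world_mat_%d' % i] = world_mat
        cam_dict['scale_mat_%d' % i] = scale_mat
    return cam_dict

# np.savez('cameras_xxx.npz', **build_camera_dict(Ks, W2Cs, center, radius))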

Reconstruction results of our method and the baselines

You can download the results of the methods mentioned in the paper here:

Discussions and future work

As we stated in the paper, it is more difficult to train a UDF field than an SDF field, since UDF does not enforce any topological assumption (e.g., that surfaces are closed) and is not differentiable at the zero-level set. Although we propose a series of strategies to alleviate this problem, some limitations remain, and we hope they can be addressed in future work.

  • The weight of the geometric regularization is sometimes sensitive to the specific case and needs to be tuned for better results; a more robust regularization strategy may be able to handle this (see the sketch after this list for what such a regularization term typically looks like).
  • How to initialize the UDF field for open surfaces? In this work, we still adopt the sphere initialization.
  • How to extract a mesh from the optimized UDF in a more robust way? MeshUDF provides an inspiring and effective solution, but it is sensitive to the gradients near the zero-level set and cannot handle non-manifold surfaces.
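
For context, the geometric regularization mentioned in the first point is typically an Eikonal-style term that encourages the norm of the UDF gradient to be close to 1 at sampled points. Below is a minimal PyTorch sketch of such a term; the exact formulation and weighting used in this repository may differ.

import torch

def eikonal_loss(udf_network, points):
    # Eikonal-style regularizer: encourage |grad UDF(x)| ~= 1 at the sampled points.
    # `udf_network` is any callable mapping (N, 3) points to (N, 1) unsigned distances;
    # this is a generic sketch, not the exact loss used in NeuralUDF.
    points = points.clone().requires_grad_(True)
    udf = udf_network(points)
    grad = torch.autograd.grad(
        outputs=udf,
        inputs=points,
        grad_outputs=torch.ones_like(udf),
        create_graph=True,
    )[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

# The regularization weight then scales this term in the total loss, e.g.
# total_loss = color_loss + reg_weight * eikonal_loss(udf_network, sampled_points)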

Citation

Please cite as below if you find this repository helpful to your project:

@article{long2022neuraludf,
  title={NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies},
  author={Long, Xiaoxiao and Lin, Cheng and Liu, Lingjie and Liu, Yuan and Wang, Peng and Theobalt, Christian and Komura, Taku and Wang, Wenping},
  journal={arXiv preprint arXiv:2211.14173},
  year={2022}
}

Acknowledgement

Some code snippets are borrowed from NeuS, MeshUDF and SparseNeuS. Thanks for these great projects.

neuraludf's People

Contributors

clinplayer, flamehaze1115, xxlong0


neuraludf's Issues

Code release

Hello. Your paper is absolutely amazing! When can we expect the code release so we can have a go?
Thank you :)

Fine Tuning Failure

Thanks for the great work and for sharing the code.

I tried to train on objects with closed surfaces (DTU). I first ran bash bashs/bash_dtu_blending.sh --gpu 0 --case scan24 and got some results.

However, when I then ran bash bashs/bash_dtu_blending_ft.sh --gpu 0 --case scan24, the fine-tuning results looked even worse for most DTU scans. What should I do to solve this problem? Besides, for these scans, the results I obtained with the evaluation script you provided are quite different from those in Table 2 of your paper.

Thanks again!

Question about "vis_mask"

Thanks for your excellent work!
I noticed that you use the visibility mask to modify the indicator function in Eq. (9) of the paper, which is written as
$$\Psi(t) = \prod \big(1 - h(t) \cdot m(t)\big)$$
However, the implementation (line 410 in udf_renderer_blending.py) seems to compute
$$\Psi(t) = \prod \big(1 - h(t) + \mathrm{flip\_saturation} \cdot m(t)\big),$$
and the variable $flip\_saturation$ increases from 0 to 1 as training progresses. Could you please explain why "add" is used instead of "multiply" in the indicator function? And what is the role of $flip\_saturation$ here?

Thanks again!

Differences to NeuS

Hi, thanks for the great work!

I was wondering what the major differences to NeuS are. When comparing the code, it seems that the main difference is the change in the sign of the outputs. Are there any other changes in training that make it work better for UDF?

NeuralUDF/models/fields.py, lines 210 to 211 at commit 1436d15:

return torch.cat([self.udf_out(x[:, :1]) / self.scale, x[:, 1:]], dim=-1)

When is the code released

Hi, congratulations on the CVPR 2023 acceptance! When can we expect the code release so we can have a go?
Thank you :)

Performance degradation on textureless objects

Hello,

It's a fascinating piece of work.

However, I did have a few questions and concerns that I would like to discuss with you. I noticed in your paper that you pointed out the limitation of NeuralUDF that its performance may degrade on textureless objects. From looking at the examples in the paper, I can see that most of them are quite colorful. I was wondering how NeuralUDF performs on pure-colored objects and if this is a known issue.

Additionally, I would like to understand the reason behind this limitation. Is it due to the extra computation required by the visibility indicator function or some other differences from the SDF methods? Specifically, is it possible that the higher degrees of freedom in NeuralUDF might cause difficulties in regions where the depth changes drastically, such as in the collar region, or that NeuralUDF might assume the existence of holes in the continuous surface?

Thank you!

Results without Patch loss

Hi,
This paper looks very interesting.

I'm wondering whether the patch loss is a necessary component of this method, or whether it would work without this loss as well.

In Figures 11/12 you do a qualitative comparison against VolSDF and NeuS, but they don't have this extra patch loss.
Wouldn't it be better to compare against SparseNeuS?

Thank you!

Rough UDF Mesh

Hello, we ran case scan320 from DeepFashion3D and found that the extracted UDF mesh is relatively coarse. Is this a reasonable result, or did I miss any steps? The resolution is 256.

(Screenshot of the extracted mesh attached.)
