
shapegan's Introduction

Generative Adversarial Networks and Autoencoders for 3D Shapes

Shapes generated with our proposed GAN architecture and reconstructed using Marching Cubes

This repository provides code for the paper "Adversarial Generation of Continuous Implicit Shape Representations" and for my master's thesis about generative machine learning models for 3D shapes. It contains:

  • the networks proposed in the paper (GANs with a DeepSDF network as the generator and a 3D CNN or Pointnet as discriminator)
  • an autoencoder, variational autoencoder and GANs for SDF voxel volumes using 3D CNNs
  • an implementation of the DeepSDF autodecoder that learns implicit function representations of 3D shapes
  • a GAN that uses a DeepSDF network as the generator and a 3D CNN as the discriminator ("Hybrid GAN", as proposed in the paper, but without progressive growing and without gradient penalty)
  • a data preparation pipeline that can prepare SDF datasets from triangle meshes, such as the Shapenet dataset (based on my mesh_to_sdf project)
  • a ray marching renderer to render signed distance fields given by a neural network, as well as a classic rasterized renderer to render triangle meshes reconstructed with Marching Cubes (see the reconstruction sketch after this list)
  • tools to visualize the results
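
The following is a minimal sketch of reconstructing a mesh from an SDF voxel volume, using skimage and trimesh. It is not the repository's renderer; the input file name is hypothetical and an SDF volume spanning [-1, 1] in each axis is assumed.

import numpy as np
import skimage.measure
import trimesh

# Hypothetical SDF voxel volume, e.g. a 32x32x32 array of signed distances.
voxels = np.load('chair_voxels.npy')
resolution = voxels.shape[0]

# The surface is the zero level set of the signed distance field.
vertices, faces, normals, _ = skimage.measure.marching_cubes(
    voxels, level=0, spacing=(2.0 / resolution,) * 3)

mesh = trimesh.Trimesh(vertices=vertices, faces=faces, vertex_normals=normals)
mesh.show()  # opens an interactive viewer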

Note that although the code provided here works, most of the scripts need some configuration to work for a specific task.

This project uses two different representations of 3D shapes: voxel volumes and implicit functions. Both are based on signed distances.

For both representations, there are networks that learn latent embeddings and then reconstruct objects from latent codes. These are the autoencoder and variational autoencoder for voxel volumes and the autodecoder for the DeepSDF network.

In addition, for both representations, there are generative adversarial networks that learn to generate novel objects from random latent codes. The GANs come in a classic and a Wasserstein flavor.
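
To illustrate the implicit-function representation, here is a minimal sketch of a DeepSDF-style decoder: a latent code and a 3D query point are mapped to a signed distance value. The layer sizes, names and latent dimensionality below are assumptions, not the repository's network definition.

import torch
import torch.nn as nn

LATENT_SIZE = 128  # assumed latent code dimensionality

class TinySDFDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(LATENT_SIZE + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh())

    def forward(self, latent_codes, points):
        # latent_codes: (N, LATENT_SIZE), points: (N, 3) -> (N,) signed distances
        return self.layers(torch.cat([latent_codes, points], dim=1)).squeeze(1)

decoder = TinySDFDecoder()
codes = torch.randn(4, LATENT_SIZE)
points = torch.rand(4, 3) * 2 - 1  # query points in [-1, 1]
print(decoder(codes, points))      # one signed distance per query point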

Reproducing the paper

This section explains how to reproduce the paper "Adversarial Generation of Continuous Implicit Shape Representations".

Data preparation

To train the model, the meshes in the Shapenet dataset need to be voxelized for the voxel-based approach and converted to SDF point clouds for the point-based approach.

We provide readily prepared datasets for the Chairs, Airplanes and Sofas categories of Shapenet as a download. The size of that dataset is 71 GB.

To prepare the data yourself, follow these steps:

  1. Install the mesh_to_sdf pip module.
  2. Download the Shapenet files to the data/shapenet/ directory or create an equivalent symlink.
  3. Review the settings at the top of prepare_shapenet_dataset.py. The default settings are configured for reproducing the GAN paper, so you shouldn't need to change anything. You can change the dataset category that will be prepared; the default is the chairs category. You can disable preparation of either the voxel or the point dataset if you don't need both.
  4. Run prepare_shapenet_dataset.py. You can stop and resume this script; it will continue where it left off. (A rough sketch of what the preparation computes per mesh follows this list.)
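
For orientation, this is roughly what the preparation computes for each mesh, sketched with the mesh_to_sdf module. The file paths, output names and parameter values here are assumptions; the exact settings live in prepare_shapenet_dataset.py.

import numpy as np
import trimesh
from mesh_to_sdf import mesh_to_voxels, sample_sdf_near_surface

mesh = trimesh.load('chair.obj')  # hypothetical input mesh

# SDF voxel volume for the voxel-based networks.
voxels = mesh_to_voxels(mesh, voxel_resolution=32)

# SDF point cloud for the DeepSDF / point-based networks.
points, sdf = sample_sdf_near_surface(mesh, number_of_points=200000)

np.save('chair_voxels.npy', voxels)                      # hypothetical output names
np.savez('chair_sdf_cloud.npz', points=points, sdf=sdf)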

Training

Voxel-based discriminator

To train the GAN with the 3D CNN discriminator, run

python3 train_hybrid_progressive_gan.py iteration=0
python3 train_hybrid_progressive_gan.py iteration=1
python3 train_hybrid_progressive_gan.py iteration=2
python3 train_hybrid_progressive_gan.py iteration=3

This runs the four steps of progressive growing. Each iteration starts with the result of the previous iteration, or with the most recent result of the current iteration if the continue parameter is supplied. Add the nogui parameter to disable the model viewer during training; use it when the script is run remotely.
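
For example, to resume the second stage on a remote machine with the model viewer disabled, you might run:

python3 train_hybrid_progressive_gan.py iteration=1 continue nogui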

Point-based discriminator

TODO

Note that the pointnet-based approach currently has a separate implementation of the generator and doesn't work with the visualization scripts provided here. The two implementations will be merged soon so that the demos work.

Use pretrained generator models

In the examples directory, you will find network parameters for the GAN generators trained on chairs, airplanes and sofas with the 3D CNN discriminator. You can use these by loading the generator from these files; for example, in demo_sdf_net.py you can change sdf_net.filename accordingly.
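
As a rough sketch, loading such a parameter file could look like this, assuming the .to files are ordinary PyTorch state dicts; the module path, class name and file name below are assumptions based on the description above.

import torch
from model.sdf_net import SDFNet  # module path and class name are assumptions

sdf_net = SDFNet()
# Hypothetical file name; pick one of the .to files from the examples directory.
state = torch.load('examples/gan_generator_chairs.to', map_location='cpu')
sdf_net.load_state_dict(state)
sdf_net.eval()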

TODO: Examples for the pointnet-based GANs will be added soon.

Running other 3D deep learning models

Data preparation

Two data preparation scripts are available: prepare_shapenet_dataset.py is configured to work specifically with the Shapenet dataset, while prepare_data.py can be used with any folder of 3D meshes. Both need to be configured depending on what data you want to prepare. Most of the time, not all types of data need to be prepared: for the DeepSDF network, you need SDF clouds; for the remaining networks, you need voxels at resolution 32. The "uniform" and "surface" datasets, as well as the voxels at other resolutions, are only needed for the GAN paper (see the section above).

Training

Run any of the scripts that start with train_ to train the networks. train_autoencoder.py trains the variational autoencoder unless the classic argument is supplied. All training scripts accept these command line arguments (an example invocation follows the list):

  • continue to load existing parameters
  • nogui to not show the model viewer, which is useful for VMs
  • show_slice to show a text representation of the learned shape
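
For example, to resume training the classic (non-variational) autoencoder on a headless machine, you might run:

python3 train_autoencoder.py classic continue nogui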

Progress is saved after each epoch. There is no stopping criterion; the longer you train, the better the result. You should have at least 8 GB of GPU RAM available. Use a datacenter GPU if possible; training on a desktop GPU will take several days to produce good results. The classifiers take the least time to train and the GANs take the most.

Visualization

To visualize the results, run any of the scripts starting with demo_. They might need to be configured depending on the model that was trained and the visualizations needed. create_plot.py contains the code used to generate figures for my thesis.

Using the pretrained DeepSDF model and recreating the latent space traversal animation

This section explains how to get a DeepSDF network model that was pre-trained on the Shapenet dataset and how to use it to recreate this latent space traversal animation.

Some network parameters have changed since the model was trained. If you're training a new model, you can use the parameters on the master branch and it will work as well. To be compatible with the pretrained model, you'll need the changes in the pretrained-deepsdf-shapenet branch.

To generate the latent space animation, follow these steps:

  1. Switch to the pretrained-deepsdf-shapenet branch.

  2. Move the contents of the examples/deepsdf-shapenet-pretrained directory to the project root directory. The scripts will look for the .to files in the models/ and data/ directories relative to the project root.

  3. Run python3 demo_latent_space.py. This takes about 40 minutes on my machine. To make it faster, you can lower the values of SAMPLE_COUNT and TRANSITION_FRAMES in demo_latent_space.py.

  4. To render a video file from the frames, run ffmpeg -framerate 30 -i images/frame-%05d.png -c:v libx264 -profile:v high -crf 19 -pix_fmt yuv420p video.mp4.

Note that after completing steps 1 and 2, you can run python3 demo_sdf_net.py to show a real-time latent space interpolation.
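
For intuition, the traversal boils down to interpolating between latent codes and decoding each intermediate code into a shape. The sketch below shows only the general idea; the names, the number of key codes and the frames per transition are assumptions, and in the actual script the key codes presumably come from learned latent codes of training shapes rather than random vectors.

import torch

LATENT_CODE_SIZE = 128      # assumption
KEY_CODES = 8               # number of shapes to visit
FRAMES_PER_TRANSITION = 30  # frames rendered between two key codes

keys = torch.randn(KEY_CODES, LATENT_CODE_SIZE)
frame = 0
for i in range(KEY_CODES - 1):
    for t in torch.linspace(0, 1, FRAMES_PER_TRANSITION):
        code = (1 - t) * keys[i] + t * keys[i + 1]
        # Decode `code` with the DeepSDF network here and save the rendered
        # image as images/frame-%05d.png for the ffmpeg step above.
        frame += 1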

shapegan's People

Contributors

katjaq, marian42, rusty1s


shapegan's Issues

Red cube displayed upon running demo_gan.py

I ran demo_gan.py using the pre-trained gan_generator_voxels_sofas.to file. I made the following changes:

  1. Copied gan_generator_voxels_sofas.to from examples folder to models folder.

  2. Ran pip install scipy==1.5.2

  3. In rendering/__init__.py
    vertices, faces, normals, _ = skimage.measure.marching_cubes_lewiner(voxels, level=level, spacing=(2.0 / voxel_resolution, 2.0 / voxel_resolution, 2.0 / voxel_resolution))
    to
    vertices, faces, normals, _ = skimage.measure.marching_cubes(voxels, level=level, spacing=(2.0 / voxel_resolution, 2.0 / voxel_resolution, 2.0 / voxel_resolution),method='lewiner')

  4. In model/gan.py
    super(Generator, self).__init__(filename="generator.to")
    to
    super(Generator, self).__init__(filename="gan_generator_voxels_sofas.to")

I got this visualization as a result: [screenshot showing a red cube]

How do I resolve this?

Training parameters for demo models

Were the demo models provided in the examples directory trained in a progressive fashion or without it? Also, what are the other hyper-parameters for the provided models? Thanks!

train_hybrid_progressive_gan.py

Hello, I want to ask a question about the paper. It says that training goes through four stages; how long does the training take in each stage?

prepare_shapenet_dataset.py

According to the instructions in the file, I ran prepare_shapenet_dataset.py to generate the voxels for the chairs. After running, the following file appeared: 1a6f615e8b1b5ae4dbbc9440457e303e.npy
When I tried to plot this voxel volume with matplotlib, it showed a full 32×32×32 cube without the shape of a chair.

ModuleNotFoundError: No module named 'dataset'

Hi,
after reading your paper, I am trying to reproduce your works.
Firstly, I trained VAE with the chair dataset.
Before testing tsne space, I am testing plotting process.
When I run "python create_plot.py color-test", it occurs ModuleNotFoundError.

In the code (create_plot.py),

from dataset import dataset as dataset
dataset.load_voxels('cpu')
dataset.load_labels()
voxels = dataset.voxels

but I could not find 'dataset'.
Is it a typo for 'datasets'?
Also, I could not find where load_voxels() and load_labels() are defined.
Could you clarify which dataset module the import is pointing to?

Thank you,

QObject::moveToThread:

QObject::moveToThread: Current thread (0x563d74d14da0) is not the object's thread (0x563e00957b30).
Cannot move to target thread (0x563d74d14da0)

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/alberto/anaconda3/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.

Aborted (core dumped)

Did you by chance run into this error during the visualization of the latent walk?

Retraining DeepSDF

Hi @marian42 ,

Thanks for contributing an amazing amount of code.
I am trying to use your repo to retrain DeepSDF on the Sofa category and have two queries :

  1. I tried downloading the precomputed dataset, but it seems that the cloud folder is missing from the downloaded data. Can you confirm that I need to rerun prepare_shapenet_data.py to get the sdf.to file for training?
  2. Is your code (prepare_shapenet_data) designed for ShapeNet v1, ShapeNet v2, or both?

Thanks a lot!
Best regards,
Thibault

What should be in the file -- 'data/chairs/train.txt' ?

Hi, I'm trying to re-implement your work and am running into some confusion.
It seems that the file 'data/chairs/train.txt' is loaded in train_hybrid_progressive_gan.py, but I don't see where this file is generated (or any write-up about how to generate it yourself). I followed the data preprocessing steps as outlined - any pointers would be helpful!

Recreating the latent space traversal animation

Hey,
I'd like to create the latent space animation with the model I have trained.
However, I'm not sure how to create the following files required by your code:
sdf_net_latent_codes.to
labels.to
Could you elaborate on how to initialize them?

Also, I assumed that sdf_net.to is basically the generative model I trained (hybrid_progressive_gan_generator_3.to). Is that correct?

Thanks!

Request regarding demo_latent_space

Thank you for the great work.
Is it possible to share sdf_net_latent_codes.to so that I can run demo_latent_space.py?
Also, I need sdf_points.to and sdf_values.to to run train_sdf_autodecoder.py; how can I get them?

demo_sdf_


I am running the demo and I am getting an empty view.

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/alberto/anaconda3/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/home/alberto/anaconda3/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/alberto/Documents/shapegan/rendering/__init__.py", line 299, in _run
    self._render()
  File "/home/alberto/Documents/shapegan/rendering/__init__.py", line 230, in _render
    light_vp_matrix = get_camera_transform(6, self.rotation[0], 50, project=True)
  File "/home/alberto/Documents/shapegan/rendering/math.py", line 20, in get_camera_transform
    camera_transform = np.matmul(camera_transform, get_rotation_matrix(rotation_x, axis='x'))
  File "/home/alberto/Documents/shapegan/rendering/math.py", line 14, in get_rotation_matrix
    matrix[:3, :3] = rotation.as_dcm()
AttributeError: 'scipy.spatial.transform.rotation.Rotation' object has no attribute 'as_dcm'

Evaluation script

Hi,

Could you share your evaluation scripts to reproduce Table 1 in the paper?

Thanks!

Code Release

Hi,

First of all, thanks for sharing this excellent work!

However, I have two questions regarding the code release:

  1. Will you open-source the training script of "Point-based discriminator" ?
  2. Would you provide the evaluation script ?

Thanks!

Question re: DeepSDF training

Hi, thanks for your work and this repository! I was taking a look at the original DeepSDF training code, and your own training code for the autodecoder, and I noticed some differences. I was hoping I could perhaps pick your brain on your own experiences with DeepSDF-style training?

For instance, I notice that your training completely shuffles the data samples across all points and across all shapes. So a batch will contain a random assortment of shapes. Whereas the DeepSDF code will instead randomly pick a set of shapes, and from those randomly select the points. So DeepSDF batches are only composed of a handful of shapes. Was there a reason you departed from the DeepSDF batch composition? I'm curious if you found it more stable?

I also noticed that your autodecoder model includes no normalization layers. Was there a reason you decided not to include those?

Because of these differences, I'm just wondering if there are any quick tips/strategies you can pass on - thanks in advance if you have the time to respond!

dataset

I would like to ask: was the pre-trained model trained on point clouds or on voxels?

Codelab example or installation guide?

Hi,

Is there an installation guide or a Codelab option?
I like the project and would like to use it in an art project for 3D printing, showing what's possible with GANs etc.
A 3D output would be amazing!

Kindest regards,

dataset

Hello, the dataset you provided is too large. Could you provide a smaller dataset, about 1 GB?

Visualization Scripts for the point-based GAN

Hi Marian,
Is there any chance you could upload the visualization scripts for the point-based GAN? Specifically, I am training the point_sdf_net model and would like to see what comes out of it.
Cheers!

conda environment

I'm running into issues trying to get the right conda environment set up for this project -- pygame requires 2.7 or 3.5, but scikit-spatial requires >=3.7. Is there possibly a list of package versions to set up the correct environment?

License?

Could you please add a license to this repo? Thanks!
