freeviewsynthesis's People

Contributors

griegler

freeviewsynthesis's Issues

DTU ground-truth meshes

Hi,

Thanks for the wonderful work. I was wondering how you obtained your DTU ground-truth meshes. Was it the same way you obtained them for Tanks and Temples? If so, would it be possible to share them?

Thanks in advance

Perceptual Loss

for midx, mod in enumerate(self.vgg):
    es = mod(es)

>>> for i, mod in enumerate(vgg):
...     if i == 3:
...         print(mod)
...
ReLU(inplace=True)

I can't understand this. Why is the module at index 3 a ReLU?
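For context, here is a minimal sketch of how a VGG-based perceptual loss typically iterates a truncated feature extractor and accumulates losses at selected layer indices. The layer cut-off, indices, and loss choice below are illustrative assumptions, not the repository's exact configuration:

import torch
import torchvision

# Truncated VGG16 feature stack; frozen, used only as a feature extractor.
vgg = torchvision.models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(estimate, target, feature_idxs=(3, 8, 15)):
    # Push both images through the module list and penalize differences
    # in the activations at the chosen (ReLU) indices.
    loss = torch.nn.functional.l1_loss(estimate, target)
    es, ta = estimate, target
    for idx, mod in enumerate(vgg):
        es = mod(es)
        ta = mod(ta)
        if idx in feature_idxs:
            loss = loss + torch.nn.functional.l1_loss(es, ta)
    return loss

As for the printed module: index 3 in torchvision's vgg16().features is the ReLU following the second convolution (relu1_2), so print(vgg[3]) showing ReLU(inplace=True) is expected.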

Questions about the paper

Thanks for your great work!

I have some questions about the Experimental Evaluation section of the paper:

  1. How many source views are used to infer a target view in the training phase?
  2. What about the performance on DTU and Tanks and Temples with a few source views (e.g. 5 or 7 views)?

Error in evaluation script

Hi,

Thanks for uploading the code. I tried to run the evaluation script following the README and got this error:

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5
[...]
[2020-10-30/21:28/INFO/mytorch] Eval iter 749999
/home/gkopanas/.conda/envs/freeviewsynthesis/lib/python3.6/site-packages/torch/nn/functional.py:3384: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
warnings.warn("Default grid_sample and affine_grid behavior has changed "
Traceback (most recent call last):
  File "exp.py", line 467, in <module>
    worker.do(args, worker_objects)
  File "../co/mytorch.py", line 423, in do
    self.do_cmd(args, worker_objects)
  File "../co/mytorch.py", line 411, in do_cmd
    worker_objects, iters=args.iter, net_root=args.eval_net_root
  File "../co/mytorch.py", line 600, in eval_iters
    self.eval(iter, net, eval_sets)
  File "../co/mytorch.py", line 609, in eval
    self.eval_set(iter, net, eval_set_idx, eval_set, epoch=epoch)
  File "../co/mytorch.py", line 676, in eval_set
    err_str = self.format_err_str(err_items)
  File "../co/mytorch.py", line 546, in format_err_str
    err_list.extend(v.ravel())
AttributeError: 'list' object has no attribute 'ravel'
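For now, a workaround that avoids the crash seems to be coercing each error item to a numpy array before flattening; this is my guess at the intended behavior of format_err_str, not a confirmed fix:

import numpy as np

# Hypothetical patch for co/mytorch.py's format_err_str: some error items
# are plain Python lists, which have no .ravel(); np.asarray handles both
# lists and numpy arrays uniformly.
def flatten_err_items(err_items):
    err_list = []
    for v in err_items:
        err_list.extend(np.asarray(v).ravel())
    return err_list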

All the best,
George Kopanas

Make

Hello, thank you for sharing the source code. But when I executed the "make" command, the following error occurred:
/ext/preprocess/preprocess.cpp:272:74: error: converting to ‘std::tuple<pybind11::array_t<float, 16>, pybind11::array_t<float, 16>, pybind11::array_t<float, 16> >’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {pybind11::array_t<float, 16>, pybind11::array_t<float, 16>, pybind11::array_t<float, 16>}; <template-parameter-2-2> = void; _Elements = {pybind11::array_t<float, 16>, pybind11::array_t<float, 16>, pybind11::array_t<float, 16>}]’ {n_views, 1, patch_height, patch_width})};

Is there something wrong in preprocess.cpp?

has no attribute 'get_sampling_map'

Hi,
Thank you very much for sharing your code. But when I run it, the following error occurs:

/FreeViewSynthesis/exp/dataset.py", line 165, in base_getitem
sampling_maps, valid_depth_masks, valid_map_masks = ext.preprocess.get_sampling_map(
AttributeError: module 'ext.preprocess' has no attribute 'get_sampling_map'

Hope you can help answer, thank you.
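A quick diagnostic, sketched here rather than taken from the repository's docs, is to check whether the compiled C++ extension was actually built and is the module being imported; if get_sampling_map is missing, ext/preprocess likely still needs to be compiled with the provided CMake/make setup:

# Diagnostic sketch: confirm the compiled extension exposes the symbol.
import ext.preprocess

print(ext.preprocess.__file__)  # should point at the compiled .so/.pyd
print([n for n in dir(ext.preprocess) if not n.startswith("_")])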

Prepare customized dataset

Hi Riegler, thank you for this great work on view synthesis. It is really appreciated that you have provided your preprocessed dataset and training/eval code. May I know how to preprocess our customized dataset (a few images) so that we can apply this method directly to it?

IndexError while running exp.py

while running

python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5

I get the error: IndexError: index 221 is out of bounds for axis 0 with size 0

[screenshot of the full traceback, 2020-12-12]

I modified the path in config.py to point to the Playground dataset and commented out the Truck, M60, and Train eval tracks.

method to generate camera path

Hi @griegler, thanks for your generous sharing. I have succeeded in generating some new view images using your scripts, but I am also curious how to generate a more varied camera path (e.g., varying the pitch angle) instead of just interpolating the original camera path (def interpolate_waypoints), so that the new views are more abundant and variable. Thanks!
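As a starting point, here is a minimal sketch of one way to synthesize such a path. It assumes COLMAP-style world-to-camera extrinsics (t = -R C); the axis conventions, function names, and the interface expected by the repo's waypoint code are all assumptions:

import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    # World-to-camera rotation looking from `eye` toward `target`.
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=0)

def orbit_path(center, radius, height, n=60, bob=0.2):
    # Sample camera centers on a circle around `center`, modulating the
    # height with a sinusoid so the pitch varies instead of staying in-plane.
    poses = []
    for t in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        eye = center + np.array([radius * np.cos(t),
                                 height + bob * np.sin(3.0 * t),
                                 radius * np.sin(t)])
        R = look_at(eye, center)
        tvec = -R @ eye
        poses.append((R, tvec))
    return poses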

Confused about why counts numpy array is not symmetric

Hello intel-isl team,
The FVS paper is certainly inspiring, and the same goes for SVS. I am trying to prepare a customized dataset, and I noticed that counts.npy is not symmetric, even though it looks like an adjacency matrix to me. I would expect the number of pixels that map from the target view to a source view to be the same as from source to target. Could you clarify a bit more? I just hope I am not doing something wrong. Thanks in advance!

How to obtain consistent image and depthmap size

Hi, I'm trying to reproduce your preprocessing steps, since the code that converts COLMAP output to the algorithm's input is not provided.
I have two questions.

  1. Will the full preprocessing code (that converts a sequence of images to a preprocessed dataset) be available soon?
  2. COLMAP changes the sizes of images and depth maps during dense reconstruction (it calls image_undistorter),
    but all images and depth maps in your preprocessed datasets have consistent widths and heights.
    How did you preserve the sizes (or compensate for the size changes)?

Thank you.

Reconstructed mesh from COLMAP

Hi,

Thanks for sharing the preprocessed dataset.

I am wondering if you still have the reconstructed meshes for the dataset produced by COLMAP? Would it be possible to share them?

Thanks a lot.

Proxy-geometry generation pipeline

Hi, this is great work!! Thank you for making your code public :)

I'm trying to use your proxy-geometry generation pipeline via the scripts in co/colmap.py but am not 100% sure I'm doing it right. Is this how you generated the meshes in your paper? Thanks in advance!

colmap = Colmap(...)                          # constructor args elided in the original
colmap.sparse_reconstruction_unknown_calib()  # SfM with unknown intrinsics
colmap.dense_reconstruction()                 # undistortion + MVS depth + fusion
colmap.delaunay_meshing()                     # proxy mesh from the fused points

how to set tat_root in config.py

Hi, thanks for your great work.
I just want to evaluate the results via python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5 in the exp folder.
I'm confused about the following code in config.py:

# TODO: adjust path
tat_root = Path("/path/to/colmap_tat/")

And I notice that config.tat_root is used in exp.py:

    def get_train_set_tat(self, dset):
        dense_dir = config.tat_root / dset / "dense"
        print(dense_dir,'------------------------------------------------------------')
        ibr_dir = dense_dir / f"ibr3d_pw_{self.train_scale:.2f}"
        dset = self.get_pw_dataset(
            name=f'tat_{dset.replace("/", "_")}',
            ibr_dir=ibr_dir,
            im_size=None,
            pad_width=16,
            patch=(self.train_patch, self.train_patch),
            n_nbs=self.train_n_nbs,
            nbs_mode=self.train_nbs_mode,
            train=True,
        )
        return dset

I'm curious why exp.py loads tat_train_sets when I try to evaluate with the given pretrained model. It seems that the paths listed in

tat_train_sets = [
    "training/Barn",
    "training/Caterpillar",
    "training/Church",
    "training/Courthouse",
    "training/Ignatius",
    "training/Meetingroom",
    "intermediate/Family",
    "intermediate/Francis",
    "intermediate/Horse",
    "intermediate/Lighthouse",
    "intermediate/Panther",
    "advanced/Auditorium",
    "advanced/Ballroom",
    "advanced/Museum",
    "advanced/Temple",
    "advanced/Courtroom",
    "advanced/Palace",
]

don't exist in this repo.

Cannot find net_*.params in net_path

When I run
python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5

I got this error:
[2020-10-30/12:22/INFO/mytorch] [EVAL] no network params for iter last

I added some prints and found that in the get_net_paths function the net_root is exp/experiments/tat_nbs5_s0.25_p192_rnn_vgg16unet3_gruunet4.64.3, and in this directory there is no file matching net_*.params, only one .log file and one .db file.

How can I fix this problem? Thanks.

Pretrained model.

Hi,

Thanks for sharing. I'd like to ask where the pretrained model can be downloaded; training seems to take quite a long time (12 days on a single GPU).

Thank you!

Cannot construct eval paths

I tried to follow the instructions in the README but can't get the code to run. Here's what I did:

  1. updated config.py to point to the downloaded data:
dtu_root ="/home/poorboy44/Downloads/free_view_synthesis_data/own_data"
tat_root ="/home/poorboy44/Downloads/free_view_synthesis_data/tat_data"
colmap_bin_path ="/usr/local/bin/colmap"
lpips_root = None

I was not sure what lpips_root is. Also, the last line in config.py says path/to/colmap_tat, but I don't know what that means, so I commented it out.

# TODO: adjust path
#tat_root = Path("/path/to/colmap_tat/")

Then I ran the command listed in the README from the experiments directory:

(py36_free_view_synthesis) poorboy44@poorboy44-XPS-8930:~/FreeViewSynthesis/exp$ python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5

But I get this error:

[2020-12-07/13:28/INFO/exp] Create eval datasets
Traceback (most recent call last):
  File "exp.py", line 467, in <module>
    worker.do(args, worker_objects)
  File "../co/mytorch.py", line 423, in do
    self.do_cmd(args, worker_objects)
  File "../co/mytorch.py", line 411, in do_cmd
    worker_objects, iters=args.iter, net_root=args.eval_net_root
  File "../co/mytorch.py", line 583, in eval_iters
    eval_sets = self.get_eval_sets()
  File "exp.py", line 240, in get_eval_sets
    dset = self.get_eval_set_tat(dset, "subseq")
  File "exp.py", line 205, in get_eval_set_tat
    dense_dir = config.tat_root / dset / "dense"
TypeError: unsupported operand type(s) for /: 'str' and 'str'

So either this line isn't valid Python, or there is an assumption about a different type for the path?

 dense_dir = config.tat_root / dset / "dense"

I am using Python 3.6. Is there an assumption that the user is using another Python version where this syntax is valid?
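The / operator is defined for pathlib.Path objects (available since Python 3.4, including 3.6) but not for plain strings, so a likely fix (my inference, not a confirmed answer) is to wrap the roots in Path, as the original config.py template does:

from pathlib import Path

# Assumed fix: config paths must be pathlib.Path objects, not strings,
# so that expressions like `config.tat_root / dset / "dense"` work.
dtu_root = Path("/home/poorboy44/Downloads/free_view_synthesis_data/own_data")
tat_root = Path("/home/poorboy44/Downloads/free_view_synthesis_data/tat_data")
colmap_bin_path = Path("/usr/local/bin/colmap")
lpips_root = None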

I'm really interested in this project and looking forward to resolving these issues.
Scotty

How to convert the dense output from COLMAP into the dataset format

Hi! When I tried to test my own image data, I was blocked by the gap between the dense files output by COLMAP and the downloaded dataset. It is hard for me to work out this translation. Could you give me some hints, for example a script? Sincere thanks.

[screenshot of my COLMAP dense output directory]

The official description of the output is:
├── images
│   ├── image1.jpg
│   ├── image2.jpg
│   ├── ...
├── sparse
│   ├── cameras.txt
│   ├── images.txt
│   ├── points3D.txt
├── stereo
│   ├── consistency_graphs
│   │   ├── image1.jpg.photometric.bin
│   │   ├── image2.jpg.photometric.bin
│   │   ├── ...
│   ├── depth_maps
│   │   ├── image1.jpg.photometric.bin
│   │   ├── image2.jpg.photometric.bin
│   │   ├── ...
│   ├── normal_maps
│   │   ├── image1.jpg.photometric.bin
│   │   ├── image2.jpg.photometric.bin
│   │   ├── ...
│   ├── patch-match.cfg
│   ├── fusion.cfg
├── fused.ply
├── meshed-poisson.ply
├── meshed-delaunay.ply
├── run-colmap-geometric.sh
├── run-colmap-photometric.sh
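For anyone attempting this conversion, a small reader for COLMAP's binary depth/normal maps may be a useful building block. This follows the format used by COLMAP's own Python dense-reconstruction readers (an ASCII "width&height&channels&" header followed by float32 data); the file path in the last line is illustrative:

import numpy as np

def read_colmap_array(path):
    # COLMAP .bin depth/normal maps start with an ASCII header
    # "width&height&channels&" followed by the raw float32 payload.
    with open(path, "rb") as fid:
        header = b""
        while header.count(b"&") < 3:
            header += fid.read(1)
        width, height, channels = map(int, header.split(b"&")[:3])
        data = np.fromfile(fid, dtype=np.float32)
    # The payload is stored with the width index varying fastest.
    data = data.reshape(channels, height, width)
    return np.transpose(data, (1, 2, 0)).squeeze()

depth = read_colmap_array("stereo/depth_maps/image1.jpg.geometric.bin")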

Windows Build - DLL load failed

Hi,

I am trying to get this repository to run on a Windows 10 machine.
Maybe unsurprisingly (since the platform isn't officially supported), I ran into some issues.

I wanted to ask if there is any plan to support the platform, or if somebody has an idea how to resolve the error.

The error I got stuck on was Python complaining about missing DLLs when I ran the program with the default options:
python -v exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5
(with verbose output)

Traceback (most recent call last):
  File "exp.py", line 8, in <module>
    import dataset
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\admin\Ordnung\Studium\Forschungsprojekt\FreeViewSynthesis\exp\dataset.py", line 8, in <module>
    import ext
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "..\ext\__init__.py", line 1, in <module>
    from . import preprocess
  File "<frozen importlib._bootstrap>", line 1035, in _handle_fromlist
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "..\ext\preprocess\__init__.py", line 1, in <module>
    from .preprocess import *
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 670, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 583, in module_from_spec
  File "<frozen importlib._bootstrap_external>", line 1043, in create_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: DLL load failed: The specified module could not be found.

Since I have no idea which DLL is referred to (and don't know how to find out), I got stuck here.
I tested it on two different Windows machines, so it does not seem to be an issue with DLLs that should normally be present.

To get the preprocess module to compile in the first place I had to make some changes. I don't think they are related to this issue, but I want to list them for completeness:

  • When I ran CMake, PkgConfig couldn't find my Eigen installation.
    Since I found no straightforward way to tell pkg-config where it was, I just ditched it and linked Eigen via an environment variable:
find_package( PkgConfig )
#pkg_check_modules( EIGEN3 REQUIRED eigen3 )
include_directories( $ENV{EIGEN3_INCLUDE_DIRS} )
  • When I compiled the resulting project in Visual Studio 2019, line 37 in common.h triggered a compiler error:

template <typename T>
py::array_t<T> create_array1(const std::vector<T>& data)
{
...
return py::array_t<T>({data.size()}, new_data, free_data);
}

The Visual Studio compiler complained about a narrowing conversion.
I just removed the curly brackets around data.size() to "fix" that.

I moved the resulting three files preprocess.exp, preprocess.lib, and preprocess.cp37-win_amd64.pyd to ...\ext\preprocess so that the program could find them.
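One common cause of "DLL load failed" for a compiled .pyd is that DLLs it depends on (e.g., the C++ runtime or third-party libraries) are not on the DLL search path. A hedged sketch of that workaround, with a placeholder directory, since I cannot confirm this is the cause here:

import os

# Hypothetical workaround: register the directory containing the DLLs the
# compiled preprocess module depends on. os.add_dll_directory exists on
# Windows for Python 3.8+; on older versions, prepending to PATH before
# the import sometimes works instead.
dll_dir = r"C:\path\to\dependency\dlls"  # placeholder
if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(dll_dir)
else:
    os.environ["PATH"] = dll_dir + os.pathsep + os.environ["PATH"]

from ext import preprocess  # noqa: E402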

Image Undistortion and Proxy-geometry Reconstruction

Hi Gernot, thanks a lot for sharing the code of your great work. I have a follow-up question regarding the mesh reconstruction after #22. As you mentioned

then use sparse_reconstruction_unknown_calib using only the images/poses that should be included in the source set. The rest remains the same.

If I understand correctly,

  1. you first call sparse_reconstruction_unknown_calib on all images to get all images' camera parameters.
  2. you run the COLMAP pipeline again (sparse_reconstruction_unknown_calib, dense_reconstruction, and delaunay_meshing) only on training images.

I have three questions about how this fits the setting mentioned in the paper:

  1. I am wondering how you undistort all images. Since image_undistorter is called within dense_reconstruction, does it mean that evaluation images will not be undistorted? If this is not the case, what is your pipeline for undistorting all of them?
  2. Since you run the whole COLMAP pipeline from scratch only on training images, even if you first run sparse_reconstruction_unknown_calib on all images, won't the camera parameters differ across the two runs due to bundle adjustment? I am confused about this and hope you can provide some guidance or explanation.
  3. In the processed dataset you provide, we have depth maps for all images, whether they are for training or evaluation. In the paper, you mention the depth maps come from MVS. However, if we only run the COLMAP pipeline on training images, how can we get such high-quality depth maps for the evaluation images from MVS?

Thanks a lot in advance.

https://github.com/intel-isl/FreeViewSynthesis/blob/33a31ee214a77a2fa074d3a10cedc09803ec2ceb/co/colmap.py#L864-L867

co package version

Thanks for this great work!
When I ran the evaluation command, the following error was raised at this line of code:

class Dataset(co.mytorch.BaseDataset):
AttributeError: module 'co' has no attribute 'mytorch'

It seems that the installed version of the co package is not correct, since I installed it simply with pip install co. But which co package does this repo actually use? I cannot find any official documentation for it.
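For context, co appears to be the helper package shipped at the repository root (the tracebacks elsewhere in these issues reference ../co/mytorch.py) rather than anything on PyPI. A likely fix, my inference rather than a documented instruction, is to uninstall the PyPI package and make the repo root importable:

import sys

# Assumed fix: make the in-repo `co` package importable. Running exp.py
# from the exp/ directory should have the same effect if the script
# already adds the parent directory to the path.
sys.path.insert(0, "/path/to/FreeViewSynthesis")  # placeholder repo root

import co.mytorch  # should now resolve to the in-repo package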

Creating your own dataset

Hello, I am very interested in your work. Can you provide code and instructions on how to create a dataset from some photos? I want to use some of my own photos to build a dataset, but I have encountered some difficulties. I hope you can provide the relevant code and instructions. Thank you!!

Could you provide New Recording 1.00 datasets?

Hello!

We used COLMAP to re-calculate the depth maps, but it fails for numerous views. We think the low resolution (0.50) of the views makes COLMAP fail. Therefore, could you provide the New Recording 1.00 datasets, or the details of how you used COLMAP?

Thanks in advance!
