
flowmap's Introduction

FlowMap

(Demo video: preview.mp4)

This is the official implementation for FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent by Cameron Smith*, David Charatan*, Ayush Tewari, and Vincent Sitzmann.

Check out the project website here.

Installation

To get started on Linux, create a Python virtual environment:

python3.11 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

For pretraining, make sure GMFlow is installed as a submodule:

git submodule update --init --recursive

If the above requirements don't work, you can try requirements_exact.txt instead.

Running the Code

The main entry point is flowmap/overfit.py. Call it via:

python3 -m flowmap.overfit dataset=images dataset.images.root=path/to/folder/with/images

Make sure the virtual environment has been activated via source venv/bin/activate first.

Pre-trained Initialization

The checkpoint we used to initialize FlowMap can be found here. To train your own, download the Real Estate 10k and CO3Dv2 datasets and run the following script:

python3 -m flowmap.pretrain

Some of the videos in the Real Estate 10k dataset are no longer publicly available. Reach out to us via email if you want our downloaded version of the dataset.

Evaluation Datasets

We evaluated FlowMap using video subsets of the Local Light Field Fusion (LLFF), Mip-NeRF 360, and Tanks & Temples datasets. We've uploaded a compilation of these datasets here.

Dataset Details

NeRF Local Light Field Fusion (LLFF) Scenes

These are the LLFF scenes from the NeRF paper, which were originally uploaded here. We used all 8 scenes (fern, flower, fortress, horns, leaves, orchids, room, and trex).

Mip-NeRF 360 Scenes

These are scenes from the Mip-NeRF 360 paper, which were originally uploaded here. We used the bonsai, counter, and kitchen scenes. The original kitchen scene consists of several concatenated video sequences; for FlowMap, we use the first one (65 frames). We also included the garden scene, which is somewhat video-like, but contains large jumps that make optical flow estimation struggle.

Tanks & Temples Scenes

We used all scenes from the Tanks & Temples dataset: auditorium, ballroom, barn, caterpillar, church, courthouse, family, francis, horse, ignatius, lighthouse, m60, meetingroom, museum, palace, panther, playground, temple, train, and truck. We preprocessed the raw videos from the dataset using the script at flowmap/subsample.py. This script samples 150 frames from the first minute of video evenly based on mean optical flow.
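For intuition, a minimal sketch of this kind of flow-weighted subsampling is shown below. It assumes you already have a per-frame-pair mean optical flow magnitude; the function name and signature are hypothetical and do not mirror flowmap/subsample.py.

import numpy as np

def pick_frames_by_flow(mean_flow: np.ndarray, num_target: int = 150) -> np.ndarray:
    # mean_flow[i]: mean optical flow magnitude between frame i and frame i + 1.
    num_frames = len(mean_flow) + 1
    cumulative = np.concatenate(([0.0], np.cumsum(mean_flow)))
    # Choose frames whose cumulative flow sits at evenly spaced targets, so that
    # fast camera motion receives proportionally more samples than slow motion.
    targets = np.linspace(0.0, cumulative[-1], num_target)
    indices = np.searchsorted(cumulative, targets)
    return np.unique(np.clip(indices, 0, num_frames - 1))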

Running Ablations

Each ablation shown in the paper has a Hydra configuration at config/experiment. For example, to run the ablation where point tracking is disabled, add +experiment=ablation_no_tracks to the overfitting command. Note that you can stack most of the ablations, e.g., +experiment=[ablation_no_tracks,ablation_random_initialization].
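For example, the full command for a run that combines those two ablations (using the same dataset arguments as above) would be:

python3 -m flowmap.overfit dataset=images dataset.images.root=path/to/folder/with/images +experiment=[ablation_no_tracks,ablation_random_initialization]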

Generating Novel View Synthesis Results

We used a modified version of the original 3D Gaussian Splatting code that backpropagates into camera positions in order to generate the novel view synthesis results shown in the paper. You can find it here.

Figure and Table Generation

Some of the code used to generate the tables and figures in the paper can be found in the assets folder. We used this code alongside Figma and LaTeXiT to create the figures in the paper. You can find our Figma file here. See .vscode/launch.json for the commands needed to run figure generation.

BibTeX

@inproceedings{smith24flowmap,
      title={FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent},
      author={Cameron Smith and David Charatan and Ayush Tewari and Vincent Sitzmann},
      year={2024},
      booktitle={arXiv},
}

Acknowledgements

This work was supported by the National Science Foundation under Grant No. 2211259, by the Singapore DSTA under DST00OECI20300823 (New Representations for Vision), by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) under 140D0423C0075, by the Amazon Science Hub, and by IBM. The Toyota Research Institute also partially supported this work. The views and conclusions contained herein reflect the opinions and conclusions of its authors and no other entity.

flowmap's People

Contributors

dcharatan, geyang


flowmap's Issues

where is inference code?

I tried python3 -m flowmap.overfit dataset=images dataset.images.root=path/to/folder/with/images
with the Lighthouse image dataset, but it takes a very long time to get the final result, even on an A100.
Maybe I started training instead of inference?
How can I check?

Frozen the Intrinsic

Great job, thanks for the code!
If I have accurate intrinsic parameters, can I use them directly?
Is there a corresponding config option that can be modified?

OOM

Hi, I tried this code on my PC with 400 images, and it ran out of memory. Does this code load all images onto the GPU and process all 400 at the same time? COLMAP does not run out of memory on more than 400 images.

how to generate the novel synthesis view videos?

Hi, I ran the following command and successfully generated images/PLY files in the outputs/local folder. I just have one question: I saw the novel view synthesis videos on your website (cameronosmith.github.io/flowmap). Could you let me know how to go on to generate that kind of novel view video? Thanks!

(my command):
CUDA_VISIBLE_DEVICES=0 python -m flowmap.overfit dataset=images dataset.images.root=./datasets/flowmap/co3d_bench/ +experiment=low_memory

Visualize the flowmap results

How can I visualize the results from running the script python3 -m flowmap.overfit dataset=images dataset.images.root=path/to/folder/with/images ?

I see output in COLMAP format: cameras.bin, images.bin, and points3D.ply.

Advice on how to run the code on custom videos

Hi, thank you for making this public.

I wanted to ask if I can run this on video frames in the wild or if there are any additional input constraints to satisfy, like resolution.

Is a 360 degree camera trajectory required?

My aim is to extract the unknown camera intrinsics and camera pose of a ceiling-mounted, uncalibrated PTZ camera inside an industrial hangar.

Thanks in advance.

Inference demo

Thank you for your excellent work! I am very interested in it.
Is it possible to provide an inference demo and a pre-trained model? I want to test the final extrinsics in the output.
Looking forward to your reply!

I am going through the pre-training part, but I get an error saying the dataset named acid cannot be found. The specific error message is below; please tell me where I should find this dataset.

python3 -m flowmap.pretrain
rm: cannot remove 'outputs/local': No such file or directory
Using cache found in /home/zmm/.cache/torch/hub/intel-isl_MiDaS_master
Loading weights: None
Using cache found in /home/zmm/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 4080 SUPER') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

| Name | Type | Params

0 | flow_predictor | FlowPredictorGMFlow | 0
1 | model | Model | 21.3 M

21.3 M Trainable params
0 Non-trainable params
21.3 M Total params
85.382 Total estimated model params size (MB)
Loading CO3D sequences: 100%|███████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 15.71it/s]
Error executing job with overrides: []██████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 16.40it/s]
Traceback (most recent call last):
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/pretrain.py", line 66, in pretrain
trainer.fit(
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 544, in fit
call._call_and_handle_interrupt(
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 580, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 987, in _run
results = self._run_stage()
^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 1031, in _run_stage
self._run_sanity_check()
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 1060, in _run_sanity_check
val_loop.run()
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 110, in run
self.setup_data()
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 166, in setup_data
dataloaders = _request_dataloader(source)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py", line 342, in _request_dataloader
return data_source.dataloader()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py", line 309, in dataloader
return call._call_lightning_datamodule_hook(self.instance.trainer, self.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 179, in _call_lightning_datamodule_hook
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/dataset/data_module_pretrain.py", line 76, in val_dataloader
dataset = get_dataset(self.dataset_cfgs, "val", self.frame_sampler_cfg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/jaxtyping/_decorator.py", line 450, in wrapped_fn
out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/dataset/init.py", line 34, in get_dataset
datasets = [
^
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/dataset/init.py", line 35, in
DATASETS[cfg.name](cfg, stage, frame_sampler) for cfg in dataset_cfgs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/site-packages/jaxtyping/_decorator.py", line 450, in wrapped_fn
out = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/dataset/dataset_re10k.py", line 48, in init
[path for path in root.iterdir() if path.suffix == ".torch"]
File "/home/zmm/anaconda3/envs/flowmap/flowmap/flowmap/dataset/dataset_re10k.py", line 48, in
[path for path in root.iterdir() if path.suffix == ".torch"]
File "/home/zmm/anaconda3/envs/flowmap/lib/python3.11/pathlib.py", line 931, in iterdir
for name in os.listdir(self):
^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/acid/test'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Anamorphic lenses?

Does this code support images from anamorphic lenses?
Would the intrinsic data be calculated correctly?

Also, how many parameters are returned for the distortion matrix?

Thanks!

CUDA OOM error

Hi,

I am trying out the overfitting script on a scene from Mip-NeRF 360, but I get a CUDA OOM error on a GPU with 24 GB of VRAM. I wonder which GPU I should use to run the experiment.

Why not using GMFlow as the default optical flow estimator

Hi

Thanks for the great work and the decently-written, well-commented codebase!

I noticed you also have GMFlow as an alternative optical flow estimator but the default setting is still RAFT. Since GMFlow is a relatively new method with reported scores higher than RAFT, I wonder why you made this selection :)

Thanks in advance!

About compute_consistency_mask function

Thank you for your excellent work!
In the code, I think deltas is used to highlight accurate flow, but I don't understand how deltas is computed. The values of [source] and [target_pixel] at the same position (x, y) are certainly different, so they can't simply be subtracted at the same position (x, y).
Looking forward to your reply!
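For context, a generic forward-backward consistency check looks roughly like the sketch below; it is a common pattern, not necessarily identical to FlowMap's compute_consistency_mask. The key point is that the backward flow is sampled at the locations the forward flow points to (via grid_sample), so the two fields are compared at corresponding positions rather than at the same raw pixel coordinates.

import torch
import torch.nn.functional as F

def consistency_mask(flow_fwd, flow_bwd, threshold=1.0):
    # flow_fwd, flow_bwd: (B, 2, H, W) flows in pixels, source->target and target->source.
    b, _, h, w = flow_fwd.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow_fwd)        # pixel coordinates (2, H, W)
    target_xy = grid + flow_fwd                                     # where each source pixel lands
    # Normalize target coordinates to [-1, 1] and look up the backward flow there.
    norm = torch.stack((2 * target_xy[:, 0] / (w - 1) - 1,
                        2 * target_xy[:, 1] / (h - 1) - 1), dim=-1)
    bwd_at_target = F.grid_sample(flow_bwd, norm, align_corners=True)
    # A consistent pixel maps (approximately) back to where it started.
    delta = (flow_fwd + bwd_at_target).norm(dim=1)
    return delta < threshold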

Question about Depth Estimation in Your Paper

@dcharatan
Thank you for your excellent work. However, I am a bit confused about the depth estimation mentioned in your paper. In section 3, "Supervision via Camera-Induced Scene Flow," is the depth obtained through backpropagation of the loss gradients? It seems to me that the depth is directly obtained from an off-the-shelf monocular depth estimation method. Could you please clarify this for me?

Randomness in Results

Hey @dcharatan, Thanks for the nice code!

I've been running the code through the overfit.py file on my own data. Even though I seed everything, the code produces different results each time (sometimes the performance delta is large).

I was wondering if there are any elements of the code which have randomness that can't be seeded? What is the best way to seed the code for benchmarking? For the paper evaluations, how do you go about this issue?

I tried setting the PyTorch Lightning Trainer to deterministic, but there were a few operations, like 2D grid sampling, which didn't have a deterministic implementation.

I'm attaching snippets of my evaluation here across 6 back-to-back runs with the same command and seed:
[six evaluation screenshots attached]
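For reference, a typical best-effort recipe for pinning down randomness in a PyTorch Lightning setup is sketched below. As noted above, operations without deterministic implementations (such as some grid-sampling backward passes) can still cause run-to-run variation, so this reduces rather than eliminates nondeterminism.

import torch
from lightning.pytorch import seed_everything

seed_everything(42, workers=True)                         # seeds Python, NumPy, and torch RNGs
torch.use_deterministic_algorithms(True, warn_only=True)  # warn instead of erroring on non-deterministic ops
torch.backends.cudnn.benchmark = False                    # disable non-deterministic cuDNN autotuning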

Dynamic Setting

Dear Authors,

Thanks for the amazing work! I know that this work deals with the static setting. I am curious whether it can be extended to the dynamic setting when optical flow and foreground/background (static) segmentation are available. Please share your thoughts.

Thanks!

Custom data

Hi, I'm trying to run FlowMap on custom data. What config do I need to change to do that? I am getting
RuntimeError: stack expects each tensor to be equal size, but got [3, 1008, 756] at entry 0 and [3, 864, 1152] at entry 1

when I simply use dataset: images

Thanks
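That error typically means the input images have different resolutions, so they cannot be stacked into one batch. One workaround is to resize everything to a common resolution before running FlowMap; the sketch below is generic preprocessing, not part of the repository, and the folder names and target resolution are placeholders.

from pathlib import Path
from PIL import Image

src = Path("path/to/original_images")   # hypothetical input folder
dst = Path("path/to/resized_images")    # hypothetical output folder
dst.mkdir(parents=True, exist_ok=True)
for path in sorted(src.iterdir()):
    if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        # 1008x756 is arbitrary; any single resolution shared by all images works.
        Image.open(path).resize((1008, 756), Image.LANCZOS).save(dst / path.name)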

About focal length and pixel size

Hi! It's really impressive work; thanks for the great code.
I have a question about focal length and pixel size.

From the code, it appears that the pixel size (the physical size of each pixel) is calculated as 1 / (h * w) ** 0.5, and then the f_x entry of the intrinsic matrix is computed as focal_length / pixel_size.
The computation of the pixel size seems strange to me; can you explain it?
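For reference, plugging numbers into the two expressions quoted above makes the units concrete; the snippet simply restates those formulas with made-up example values and is not the authors' explanation of the convention.

h, w = 480, 640                      # example image resolution
focal_length = 0.8                   # example value in the model's normalized units
pixel_size = 1 / (h * w) ** 0.5      # expression quoted above
f_x = focal_length / pixel_size      # = 0.8 * (480 * 640) ** 0.5 ≈ 443.4 pixels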

Depth as only free variable

Thank you for the insightful work.

I found it interesting that expressing pose as a function of depth (instead of a separate optimizable variable) leads to significantly better performance. This is supported in your ablation study.

I was wondering if you have some intuition / reasoning for why this is the case? Why does this lead to better results?

Use FlowMap instead of COLMAP

Could your code generate a transform matrix that can be used in other NeRF frameworks or methods, such as Nerfstudio or Neuralangelo? These methods require a transform matrix in JSON format for the camera poses of the video frames. Thanks!
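As a rough illustration of what such a conversion could look like, the sketch below writes an instant-ngp/Nerfstudio-style transforms.json from 4x4 world-to-camera matrices and a pinhole intrinsic matrix. It is not tied to FlowMap's output format, and the axis-convention flip varies between frameworks, so treat it as a starting point rather than a drop-in converter.

import json
import numpy as np

def write_transforms(w2c_list, K, image_names, width, height, out_path="transforms.json"):
    frames = []
    for w2c, name in zip(w2c_list, image_names):
        c2w = np.linalg.inv(np.asarray(w2c))
        c2w[0:3, 1:3] *= -1  # flip from an OpenCV-style camera to the OpenGL-style convention used by NeRF tools
        frames.append({"file_path": name, "transform_matrix": c2w.tolist()})
    meta = {
        "fl_x": float(K[0, 0]), "fl_y": float(K[1, 1]),
        "cx": float(K[0, 2]), "cy": float(K[1, 2]),
        "w": width, "h": height,
        "frames": frames,
    }
    with open(out_path, "w") as f:
        json.dump(meta, f, indent=2)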

The bug when running the Code

Hi, thanks for sharing your code.
I hit a bug when I run the command: python3 -m flowmap.overfit dataset=images dataset.images.root="/xxx/flowmap/mipnerf360_garden". Do you have any suggestions for how to solve it?
[error screenshots attached]

About results

I ran overfit.py using 15 frames of co3d_bench, but the 3D result is wrong. Can you explain why?

[result screenshot attached]

Relative camera pose estimation from 2 images

Hi, I have successfully run the overfit.py file with my image folder, which contains 2 images.

However, as I am completely new to SfM and camera estimation, I want to ask how to estimate the relative camera pose of 2 images.

Can you give me some guidance on this? Thank you!
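If you already have the two estimated extrinsics as 4x4 camera-to-world matrices, the relative pose of camera 2 expressed in camera 1's frame is just a matrix product; a minimal sketch (not tied to FlowMap's output format):

import numpy as np

def relative_pose(c2w_1: np.ndarray, c2w_2: np.ndarray) -> np.ndarray:
    # 4x4 transform expressing camera 2's pose in camera 1's coordinate frame.
    return np.linalg.inv(c2w_1) @ c2w_2

Keep in mind that, as with any monocular reconstruction, the translation is only defined up to the overall scale of the scene.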

Predicted depth range and metric depth setting

Dear Authors,

Thanks for the amazing work and the amazing code base!!

Could you please provide information on whether there are any constraints regarding the minimum and maximum values outputted by the depth network for the MiDaS model using the 'exp' setting? Specifically, I would like to know if the depth values range from 0.01 to infinity, or if they are normalized in some way. Additionally, could you clarify if any scale or shift adjustments are applied to the output? I am interested in adapting the output for use in a metric depth setting.

Also, what are the advantages of using depth midas 'exp' vs. midas 'original'? Did you find any difference in your experiments?

Thanks!
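Regarding the 'exp' setting specifically: a common way to guarantee strictly positive depth is to have the network predict log-depth and exponentiate it, roughly as in the snippet below. Whether FlowMap applies any additional scale or shift on top of this is a question for the authors; the snippet only illustrates the general parameterization, not the repository's exact code.

import torch

log_depth = torch.randn(1, 1, 192, 256)  # stand-in for a raw network output
depth = torch.exp(log_depth)             # strictly positive, unbounded above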

OOM issue

First, thanks for your work!

I was wondering if it was possible to know how much VRAM this code needs.

I tried running it on a dataset of 300 1920x1080 photos but even though I'm using a 24GB 4090 it goes OOM.
I also tried reducing the dataset to 200 photos but the same problem occurred.

how to vis points3D.ply?

Hi, great job!
I want to know how to visualize points3D.ply. I tried to open it with CloudCompare, but nothing shows up.
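One way to sanity-check the point cloud outside CloudCompare is to load it with Open3D (an extra dependency, not part of this repository); if points show up here, the file itself is fine and the CloudCompare problem is likely a display or scale setting.

import open3d as o3d

pcd = o3d.io.read_point_cloud("outputs/local/points3D.ply")  # adjust to wherever your run wrote the file
print(pcd)                                                   # reports how many points were loaded
o3d.visualization.draw_geometries([pcd])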

Seg Fault in "Compute optical flow and tracks"

Hello,

I successfully ran the subsampling preprocessing script; however, when I run the overfit script:

python3 -m flowmap.overfit dataset=images dataset.images.root=/.../.../.../.../flowmap/frames

Computing RAFT flow: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:43<00:00,  2.41s/it]
Computing RAFT flow: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:43<00:00,  2.42s/it]
Using cache found in /home/dev/.cache/torch/hub/facebookresearch_co-tracker_v1.0
Segmentation fault (core dumped)

I get this segmentation fault, unsure how to proceed here. Any help would be greatly appreciated.

Segmentation Fault in flow_predictor_raft.py script

Hello! I've been facing a segmentation fault error right after running the flowmap.overfit script and it seems to be from this block of code in flow_predictor_raft.py (line 41):

flow = [
    self.raft(
        source_chunk * 2 - 1,
        target_chunk * 2 - 1,
        num_flow_updates=self.cfg.num_flow_updates,
    )[-1]
    for source_chunk, target_chunk in zip(
        bar(source.split(self.cfg.max_batch_size)),
        target.split(self.cfg.max_batch_size),
    )
]

Here is the stack trace:
python -m flowmap.overfit dataset=images dataset.images.root=/home/hadil/Work/flowmap/data
rm: cannot remove 'outputs/local': No such file or directory
Precomputing optical flow.
Computing RAFT flow: 0%| | 0/1 [00:00<?, ?it/s]Fatal Python error: Segmentation fault

Thread 0x00007e39449e3640 (most recent call first):
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/threading.py", line 331 in wait
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/threading.py", line 629 in wait
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/tqdm/_monitor.py", line 60 in run
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/threading.py", line 1045 in _bootstrap_inner
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/threading.py", line 1002 in _bootstrap

Current thread 0x00007e3a3e73c740 (most recent call first):
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456 in _conv_forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460 in forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520 in _call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217 in forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520 in _call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torchvision/models/optical_flow/raft.py", line 160 in forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520 in _call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torchvision/models/optical_flow/raft.py", line 492 in forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520 in _call_impl
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
File "/home/hadil/Work/flowmap/flowmap/flow/flow_predictor_raft.py", line 46 in forward
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/jaxtyping/_decorator.py", line 450 in wrapped_fn
File "/home/hadil/Work/flowmap/flowmap/flow/flow_predictor.py", line 87 in compute_bidirectional_flow
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/jaxtyping/_decorator.py", line 450 in wrapped_fn
File "/home/hadil/Work/flowmap/flowmap/flow/init.py", line 33 in compute_flows
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/jaxtyping/_decorator.py", line 450 in wrapped_fn
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
File "/home/hadil/Work/flowmap/flowmap/overfit.py", line 67 in overfit
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/core/utils.py", line 186 in run_job
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/_internal/hydra.py", line 119 in run
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/_internal/utils.py", line 458 in
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/_internal/utils.py", line 220 in run_and_report
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/_internal/utils.py", line 457 in _run_app
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/_internal/utils.py", line 394 in _run_hydra
File "/home/hadil/anaconda3/envs/flowmap/lib/python3.11/site-packages/hydra/main.py", line 94 in decorated_main
File "/home/hadil/Work/flowmap/flowmap/overfit.py", line 160 in
File "", line 88 in _run_code
File "", line 198 in _run_module_as_main

Extension modules: yaml._yaml, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, charset_normalizer.md, google._upb._message, psutil._psutil_linux, psutil._psutil_posix, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, PIL._imagingft, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._flinalg, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, PIL._imagingmath (total: 68)
[1] 27723 segmentation fault (core dumped) python -m flowmap.overfit dataset=images

About results

I ran the code with the default config and changed max_steps.

The command is: python3 -m flowmap.overfit dataset=images dataset.images.root=flowmap_dataset/test3 +experiment=low_memory
The images are from the flowmap_dataset you uploaded; I just chose one set of images. [training screenshots attached]

But the result is bad. [result screenshot attached]

Did I miss something? In addition, to reduce memory consumption, I used only 15 frames rather than 150 frames of flowmap_dataset/co3d_bench.

Using Gaussian Splatting that backpropagates into camera positions

Dear Authors,

Thank you once again for the incredible work you have done!

I have a question regarding the Gaussian splatting technique mentioned in your README (https://github.com/dcharatan/flowmap/blob/main/README.md). Specifically, you mention that this method backpropagates into camera positions.

Could you please clarify whether the extrinsics and intrinsics obtained from FlowMap are robust enough to be used directly with 3D Gaussian Splatting, without further optimization of the camera positions while employing 3D GS? Additionally, could you share any observations on the differences in NVS performance, in terms of PSNR, with and without camera position optimization when using Gaussian splatting?

Thanks!

Generate input data?

Hi, and thanks for this code! Would you be able to link to a workflow to generate the flow maps and point tracks to use as input?

Or, does this repo include a full end-to-end pipeline that can simply take an RGB sequence?
